mirror of
https://github.com/clearml/clearml-docs
synced 2025-06-26 18:17:44 +00:00
Small edits (#722)
@@ -55,7 +55,7 @@ required python packages, and execute and monitor the process.
 :::tip Agent Deployment Modes
 ClearML Agents can be deployed in Virtual Environment Mode or Docker Mode. In [virtual environment mode](../../clearml_agent.md#execution-environments),
 the agent creates a new venv to execute an experiment. In [Docker mode](../../clearml_agent.md#docker-mode),
-the agent executes an experiment inside a Docker container. See all running mode options [here](../../fundamentals/agents_and_queues.md#additional-features).
+the agent executes an experiment inside a Docker container. For more information, see [Running Modes](../../fundamentals/agents_and_queues.md#running-modes).
 :::
 
 ## Clone an Experiment
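The tip in the hunk above contrasts the two agent execution modes. As a sketch of how each mode is typically launched (the queue name `default` and the Docker image are illustrative assumptions, not taken from this diff):

```shell
# Virtual environment mode: the agent builds a fresh venv per task
# (queue name "default" is an assumed example)
clearml-agent daemon --queue default

# Docker mode: each task runs inside the given container image
# (the image name here is an assumed example)
clearml-agent daemon --queue default --docker nvidia/cuda:11.8.0-runtime-ubuntu22.04
```

Both commands use the documented `clearml-agent daemon` flags; the `--docker` argument is what switches the agent into Docker mode.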
@@ -8,9 +8,9 @@ Pipelines provide users with a greater level of abstraction and automation, with
 Tasks can interface with other Tasks in the pipeline and leverage other Tasks' work products.
 
 The sections below describe the following scenarios:
-* Dataset creation
-* Data processing and consumption
-* Pipeline building
+* [Dataset creation](#dataset-creation)
+* Data [processing](#preprocessing-data) and [consumption](#training)
+* [Pipeline building](#building-the-pipeline)
 
 
 ## Building Tasks
@@ -46,7 +46,8 @@ dataset_folder = dataset.get_mutable_local_copy(
 
 # create a new version of the dataset with the pickle file
 new_dataset = Dataset.create(
-    dataset_project='data', dataset_name='dataset_v2',
+    dataset_project='data',
+    dataset_name='dataset_v2',
     parent_datasets=[dataset],
     use_current_task=True,
     # this will make sure we have the creation code and the actual dataset artifacts on the same Task
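The hunk above versions a dataset folder that already contains a newly written pickle file (per the `# create a new version of the dataset with the pickle file` comment). A minimal standard-library sketch of that preceding step, where `dataset_folder` stands in for the path returned by `dataset.get_mutable_local_copy()` and the `processed` object is a placeholder:

```python
import pickle
import tempfile
from pathlib import Path

# stand-in for the mutable local copy returned by dataset.get_mutable_local_copy()
dataset_folder = Path(tempfile.mkdtemp())

# hypothetical processed object to version alongside the parent dataset's files
processed = {"features": [1, 2, 3], "labels": [0, 1, 0]}

# write the pickle file into the dataset folder before calling Dataset.create()
pickle_path = dataset_folder / "processed_data.pkl"
with open(pickle_path, "wb") as f:
    pickle.dump(processed, f)

# round-trip check: the file on disk restores to the original object
with open(pickle_path, "rb") as f:
    restored = pickle.load(f)
```

With the file in place, the `Dataset.create(..., parent_datasets=[dataset], use_current_task=True)` call shown in the hunk registers the new version against the same Task.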