mirror of
https://github.com/clearml/clearml-docs
synced 2025-06-26 18:17:44 +00:00
Small edits (#722)
@@ -69,7 +69,7 @@ improving your results later on!
 While it's possible to track experiments with one tool, and pipeline them with another, having
 everything under the same roof has its benefits!
 
-Being able to track experiment progress and compare experiments, and based on that send experiments to execution on remote
+Being able to track experiment progress and compare experiments, and, based on that, send experiments to execution on remote
 machines (that also build the environment themselves) has tremendous benefits in terms of visibility and ease of integration.
 
 Being able to have visibility in your pipeline, while using experiments already defined in the platform,
@@ -55,7 +55,7 @@ required python packages, and execute and monitor the process.
 :::tip Agent Deployment Modes
 ClearML Agents can be deployed in Virtual Environment Mode or Docker Mode. In [virtual environment mode](../../clearml_agent.md#execution-environments),
 the agent creates a new venv to execute an experiment. In [Docker mode](../../clearml_agent.md#docker-mode),
-the agent executes an experiment inside a Docker container. See all running mode options [here](../../fundamentals/agents_and_queues.md#additional-features).
+the agent executes an experiment inside a Docker container. For more information, see [Running Modes](../../fundamentals/agents_and_queues.md#running-modes).
 :::
 
 ## Clone an Experiment
@@ -8,9 +8,9 @@ Pipelines provide users with a greater level of abstraction and automation, with
 Tasks can interface with other Tasks in the pipeline and leverage other Tasks' work products.
 
 The sections below describe the following scenarios:
-* Dataset creation
-* Data processing and consumption
-* Pipeline building
+* [Dataset creation](#dataset-creation)
+* Data [processing](#preprocessing-data) and [consumption](#training)
+* [Pipeline building](#building-the-pipeline)
 
 
 ## Building Tasks
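The three scenarios listed in the hunk above chain together naturally with ClearML's `PipelineController`. A minimal sketch, not part of this commit: project, queue, and Task names below are placeholders, and it assumes three such Tasks already exist on a configured ClearML server.

```python
# Hedged sketch: wiring dataset creation, preprocessing, and training
# Tasks into a pipeline. All project/Task names are hypothetical.
def build_pipeline():
    # deferred import so the sketch can be defined without clearml installed
    from clearml.automation import PipelineController

    pipe = PipelineController(
        name='data pipeline', project='pipelines', version='1.0.0'
    )
    # each step clones an existing Task; `parents` sets execution order
    pipe.add_step(
        name='create_dataset',
        base_task_project='data', base_task_name='dataset creation',
    )
    pipe.add_step(
        name='preprocess',
        parents=['create_dataset'],
        base_task_project='data', base_task_name='preprocessing',
    )
    pipe.add_step(
        name='train',
        parents=['preprocess'],
        base_task_project='data', base_task_name='training',
    )
    pipe.start()  # enqueue the controller for execution
    return pipe
```

The deferred import keeps the sketch importable even where the `clearml` package is absent; calling `build_pipeline()` requires a configured server.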
@@ -46,7 +46,8 @@ dataset_folder = dataset.get_mutable_local_copy(
 
 # create a new version of the dataset with the pickle file
 new_dataset = Dataset.create(
-    dataset_project='data', dataset_name='dataset_v2',
+    dataset_project='data',
+    dataset_name='dataset_v2',
     parent_datasets=[dataset],
     use_current_task=True,
     # this will make sure we have the creation code and the actual dataset artifacts on the same Task
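For context around the `Dataset.create` call in the hunk above, here is a hedged end-to-end sketch. The dataset/project names and the `pickle_path` argument are placeholders, and it assumes a configured ClearML server with an existing parent dataset.

```python
# Hedged sketch: create a child dataset version holding a new pickle file.
# Names are hypothetical; requires a configured ClearML server to run.
def create_dataset_version(pickle_path):
    from clearml import Dataset  # deferred import: sketch only

    # fetch the parent version to derive the new one from
    parent = Dataset.get(dataset_project='data', dataset_name='dataset_v1')
    new_dataset = Dataset.create(
        dataset_project='data',
        dataset_name='dataset_v2',
        parent_datasets=[parent],
        use_current_task=True,
        # this keeps the creation code and the dataset artifacts on the same Task
    )
    new_dataset.add_files(pickle_path)  # register the new pickle file
    new_dataset.upload()                # push files to storage
    new_dataset.finalize()              # close the version for modification
    return new_dataset
```

`use_current_task=True` is what the diff's own comment refers to: the dataset artifacts land on the Task that ran the creation code.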