Small edits (#722)

Author: pollfly
Date: 2023-11-28 10:03:58 +02:00
Committed by: GitHub
Parent: 7afc79f5ce
Commit: 4b02af91f7
7 changed files with 17 additions and 17 deletions


@@ -69,7 +69,7 @@ improving your results later on!
 While it's possible to track experiments with one tool, and pipeline them with another, having
 everything under the same roof has its benefits!
-Being able to track experiment progress and compare experiments, and based on that send experiments to execution on remote
+Being able to track experiment progress and compare experiments, and, based on that, send experiments to execution on remote
 machines (that also build the environment themselves) has tremendous benefits in terms of visibility and ease of integration.
 Being able to have visibility in your pipeline, while using experiments already defined in the platform,
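
The hunk above concerns sending tracked experiments to execution on remote machines. As a rough sketch of what that looks like with the ClearML SDK (the project, task, and queue names here are placeholders, not part of the diffed docs):

```python
from clearml import Task

# Initialize a task as usual (hypothetical project/task names)
task = Task.init(project_name='examples', task_name='remote execution demo')

# Stop the local process and re-launch this script on a machine served by a
# ClearML Agent listening on the (assumed) 'default' queue; the agent builds
# the environment on the remote machine before executing
task.execute_remotely(queue_name='default', exit_process=True)

# Everything below this line runs on the remote machine
```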


@@ -55,7 +55,7 @@ required python packages, and execute and monitor the process.
 :::tip Agent Deployment Modes
 ClearML Agents can be deployed in Virtual Environment Mode or Docker Mode. In [virtual environment mode](../../clearml_agent.md#execution-environments),
 the agent creates a new venv to execute an experiment. In [Docker mode](../../clearml_agent.md#docker-mode),
-the agent executes an experiment inside a Docker container. See all running mode options [here](../../fundamentals/agents_and_queues.md#additional-features).
+the agent executes an experiment inside a Docker container. For more information, see [Running Modes](../../fundamentals/agents_and_queues.md#running-modes).
 :::
 ## Clone an Experiment
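
Since this hunk ends at the "Clone an Experiment" heading, here is a minimal sketch of the clone-and-enqueue flow that section covers; the task ID and queue name are placeholders:

```python
from clearml import Task

# fetch an existing experiment to use as a template (placeholder ID)
template = Task.get_task(task_id='<template_task_id>')

# clone it; the clone starts in draft mode, so its parameters can be edited
cloned = Task.clone(source_task=template, name='cloned experiment')

# enqueue the clone; an agent (in venv or Docker mode) pulls it from the
# queue, sets up the environment, and runs it
Task.enqueue(cloned, queue_name='default')
```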


@@ -8,9 +8,9 @@ Pipelines provide users with a greater level of abstraction and automation, with
 Tasks can interface with other Tasks in the pipeline and leverage other Tasks' work products.
 The sections below describe the following scenarios:
-* Dataset creation
-* Data processing and consumption
-* Pipeline building
+* [Dataset creation](#dataset-creation)
+* Data [processing](#preprocessing-data) and [consumption](#training)
+* [Pipeline building](#building-the-pipeline)
 ## Building Tasks
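
For the "Pipeline building" scenario linked in the new list, a minimal sketch using ClearML's PipelineController (all step, task, and project names below are hypothetical) could look like:

```python
from clearml.automation import PipelineController

# create a pipeline controller (placeholder name/project/version)
pipe = PipelineController(name='pipeline demo', project='examples', version='1.0')

# each step clones an existing ClearML task and executes it in order
pipe.add_step(
    name='create_dataset',
    base_task_project='examples',
    base_task_name='dataset creation',
)
pipe.add_step(
    name='train',
    parents=['create_dataset'],
    base_task_project='examples',
    base_task_name='training task',
)

# launch the pipeline; steps are enqueued for agents to execute
pipe.start()
```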
@@ -46,7 +46,8 @@ dataset_folder = dataset.get_mutable_local_copy(
 # create a new version of the dataset with the pickle file
 new_dataset = Dataset.create(
-    dataset_project='data', dataset_name='dataset_v2',
+    dataset_project='data',
+    dataset_name='dataset_v2',
     parent_datasets=[dataset],
     use_current_task=True,
     # this will make sure we have the creation code and the actual dataset artifacts on the same Task
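
For context around this hunk, the full dataset-versioning pattern could look like the following self-contained sketch; the v1 dataset name, the target folder, and the sync/upload/finalize tail are assumptions beyond what the diff shows:

```python
from clearml import Dataset

# get the base dataset and a mutable local working copy of it (assumed names)
dataset = Dataset.get(dataset_project='data', dataset_name='dataset_v1')
dataset_folder = dataset.get_mutable_local_copy(
    target_folder='work_dataset', overwrite=True
)

# ... process the files in dataset_folder, e.g. write out a pickle file ...

# create a new version of the dataset with the pickle file
new_dataset = Dataset.create(
    dataset_project='data',
    dataset_name='dataset_v2',
    parent_datasets=[dataset],
    use_current_task=True,
    # this will make sure we have the creation code and the actual dataset artifacts on the same Task
)

# register the processed folder, upload it, and close this version
new_dataset.sync_folder(local_path=dataset_folder)
new_dataset.upload()
new_dataset.finalize()
```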