mirror of
https://github.com/clearml/clearml-docs
synced 2025-06-26 18:17:44 +00:00
Small edits (#724)
@@ -16,7 +16,7 @@ from clearml import Task

task = Task.init(task_name="<task_name>", project_name="<project_name>")
```

And that's it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
* Source code and uncommitted changes
* Installed packages
* AutoKeras model files

@@ -16,7 +16,7 @@ from clearml import Task

task = Task.init(task_name="<task_name>", project_name="<project_name>")
```

And that's it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
* Source code and uncommitted changes
* Installed packages
* CatBoost model files

@@ -115,6 +115,6 @@ task.execute_remotely(queue_name='default', exit_process=True)

```
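
The `execute_remotely` call referenced in the hunk header above can be sketched as follows (the queue name and the `<task_name>`/`<project_name>` placeholders are illustrative):

```python
from clearml import Task

# Create the task locally; everything up to execute_remotely() runs on this machine
task = Task.init(project_name="<project_name>", task_name="<task_name>")

# Stop the local process and re-launch this script on a ClearML agent
# listening to the 'default' queue
task.execute_remotely(queue_name='default', exit_process=True)
```

With `exit_process=True`, the local run terminates once the task is enqueued, and the agent machine takes over from the `Task.init` call onward.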
## Hyperparameter Optimization
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../fundamentals/hpo.md)
for more information.
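
As a minimal sketch (the base task ID, metric names, parameter ranges, and queue below are assumptions for illustration, not values from this guide):

```python
from clearml.automation import (
    DiscreteParameterRange, HyperParameterOptimizer, UniformParameterRange
)
from clearml.automation.optuna import OptimizerOptuna

optimizer = HyperParameterOptimizer(
    base_task_id='<base_task_id>',        # existing task to clone for each trial
    hyper_parameters=[
        UniformParameterRange('General/lr', min_value=1e-4, max_value=1e-1),
        DiscreteParameterRange('General/batch_size', values=[16, 32, 64]),
    ],
    objective_metric_title='validation',  # scalar title to optimize (assumed)
    objective_metric_series='loss',       # scalar series to optimize (assumed)
    objective_metric_sign='min',          # minimize the objective
    optimizer_class=OptimizerOptuna,      # search strategy backend
    execution_queue='default',            # queue the trial tasks are sent to
    max_number_of_concurrent_tasks=2,
)
optimizer.start()   # enqueue trials for ClearML agents
optimizer.wait()    # block until the optimization finishes
optimizer.stop()    # make sure background resources are released
```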
@@ -16,7 +16,7 @@ from clearml import Task

task = Task.init(task_name="<task_name>", project_name="<project_name>")
```

And that's it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
* Source code and uncommitted changes
* Installed packages
* `fastai` model files

@@ -41,10 +41,10 @@ The agent executes the code with the modifications you made in the UI, even over

Clone your experiment, then modify your Hydra parameters via the UI in one of the following ways:
* Modify the OmegaConf directly:
  1. In the experiment's **CONFIGURATION > HYPERPARAMETERS > HYDRA** section, set `_allow_omegaconf_edit_` to `True`
  1. In the experiment's **CONFIGURATION > CONFIGURATION OBJECTS > OmegaConf** section, modify the OmegaConf values
* Add an experiment hyperparameter:
  1. In the experiment's **CONFIGURATION > HYPERPARAMETERS > HYDRA** section, make sure `_allow_omegaconf_edit_` is set
     to `False`
  1. In the same section, click `Edit`, which gives you the option to add parameters. Input parameters from the OmegaConf
     that you want to modify using dot notation. For example, if your OmegaConf looks like this:
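
For instance, given a hypothetical OmegaConf such as:

```yaml
dataset:
  path: ./my_data
  batch_size: 32
train:
  lr: 0.001
```

you would add a parameter named `train.lr` (dot notation) with the new value you want the run to use.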
@@ -8,7 +8,7 @@ instructions.

:::

[PyTorch Ignite](https://pytorch.org/ignite/index.html) is a library for training and evaluating neural networks in
PyTorch. You can integrate ClearML into your code using Ignite's built-in loggers: [TensorboardLogger](#tensorboardlogger)
and [ClearMLLogger](#clearmllogger).

## TensorboardLogger

@@ -92,7 +92,7 @@ Integrate ClearML with the following steps:

# Attach the logger to the trainer to log model's weights as a histogram
clearml_logger.attach(trainer, log_handler=WeightsHistHandler(model), event_name=Events.EPOCH_COMPLETED(every=100))

# Attach the logger to the trainer to log model's gradients as scalars
clearml_logger.attach(
    trainer, log_handler=GradsScalarHandler(model), event_name=Events.ITERATION_COMPLETED(every=100)
)
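
Pieced together, a minimal setup might look like this. The model and trainer are placeholders, and the import path is an assumption that varies by Ignite version (newer releases expose the handlers under `ignite.handlers.clearml_logger` instead of `ignite.contrib.handlers.clearml_logger`):

```python
from ignite.contrib.handlers.clearml_logger import (
    ClearMLLogger, GradsScalarHandler, WeightsHistHandler
)
from ignite.engine import Events

# Placeholders -- substitute your own model and Ignite trainer engine
model = ...
trainer = ...

# Creates (or reuses) the ClearML Task backing this run
clearml_logger = ClearMLLogger(project_name="examples", task_name="ignite")

# Log the model's weights as histograms every 100 epochs
clearml_logger.attach(
    trainer, log_handler=WeightsHistHandler(model),
    event_name=Events.EPOCH_COMPLETED(every=100),
)

# Log the model's gradients as scalars every 100 iterations
clearml_logger.attach(
    trainer, log_handler=GradsScalarHandler(model),
    event_name=Events.ITERATION_COMPLETED(every=100),
)
```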
@@ -17,7 +17,7 @@ from clearml import Task

task = Task.init(task_name="<task_name>", project_name="<project_name>")
```

And that's it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
* Source code and uncommitted changes
* Installed packages
* Keras models

@@ -77,7 +77,7 @@ See [Explicit Reporting Tutorial](../guides/reporting/explicit_reporting.md).

## Examples

Take a look at ClearML's Keras examples. The examples use Keras and ClearML in different configurations with
additional tools like TensorBoard and Matplotlib:
* [Keras with Tensorboard](../guides/frameworks/keras/keras_tensorboard.md) - Demonstrates ClearML logging a Keras model,
and plots and scalars logged to TensorBoard

@@ -127,6 +127,6 @@ task.execute_remotely(queue_name='default', exit_process=True)

```

## Hyperparameter Optimization
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../fundamentals/hpo.md)
for more information.

@@ -36,7 +36,7 @@ Integrate ClearML into your Keras Tuner optimization script by doing the followi

)
```

And that's it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
* Output Keras model
* Optimization trial scalars - scalar plot showing metrics for all runs
* Hyperparameter optimization summary plot - tabular summary of hyperparameters tested and their metrics by trial ID

@@ -17,7 +17,7 @@ from clearml import Task

task = Task.init(task_name="<task_name>", project_name="<project_name>")
```

And that's it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
* Source code and uncommitted changes
* Installed packages
* LightGBM model files

@@ -116,6 +116,6 @@ task.execute_remotely(queue_name='default', exit_process=True)

```

## Hyperparameter Optimization
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../fundamentals/hpo.md)
for more information.

@@ -16,7 +16,7 @@ from clearml import Task

task = Task.init(task_name="<task_name>", project_name="<project_name>")
```

And that's it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
* Source code and uncommitted changes
* Installed packages
* MegEngine model files

@@ -112,6 +112,6 @@ task.execute_remotely(queue_name='default', exit_process=True)

```

## Hyperparameter Optimization
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../fundamentals/hpo.md)
for more information.

@@ -66,7 +66,7 @@ change the task's name or project, use the `task_name` and `project_name` parame

The task captures the images logged by the image handler, metrics logged with the stats handler, as well as source code,
uncommitted changes, installed packages, console output, and more.

You can see all the captured data in the task's page of the ClearML [WebApp](../webapp/webapp_exp_track_visual.md).

View the logged images in the WebApp, in the experiment's **Debug Samples** tab.

@@ -6,7 +6,7 @@ title: Optuna

which makes use of different samplers such as grid search, random, Bayesian, and evolutionary algorithms. You can integrate
Optuna into ClearML's automated hyperparameter optimization.

The [HyperParameterOptimizer](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class contains ClearML's
hyperparameter optimization modules. Its modular design enables using different optimizers, including existing software
frameworks like Optuna, enabling simple, accurate, and fast hyperparameter optimization. The Optuna ([`automation.optuna.OptimizerOptuna`](../references/sdk/hpo_optuna_optuna_optimizeroptuna.md)),

@@ -16,7 +16,7 @@ from clearml import Task

task = Task.init(task_name="<task_name>", project_name="<project_name>")
```

And that's it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
* Source code and uncommitted changes
* Installed packages
* PyTorch models

@@ -86,7 +86,7 @@ Take a look at ClearML's PyTorch examples. The examples use PyTorch and ClearML
additional tools, like argparse, TensorBoard, and matplotlib:

* [PyTorch MNIST](../guides/frameworks/pytorch/pytorch_mnist.md) - Demonstrates ClearML automatically logging models created with PyTorch, and `argparse` command line parameters
* [PyTorch with Matplotlib](../guides/frameworks/pytorch/pytorch_matplotlib.md) - Demonstrates ClearML's automatic logging of PyTorch models and matplotlib images. The images are stored in the resulting ClearML experiment's **Debug Samples**
* [PyTorch with TensorBoard](../guides/frameworks/pytorch/pytorch_tensorboard.md) - Demonstrates ClearML automatically logging PyTorch models, and scalars, debug samples, and text logged using TensorBoard's `SummaryWriter`
* [PyTorch TensorBoard Toy](../guides/frameworks/pytorch/tensorboard_toy_pytorch.md) - Demonstrates ClearML automatically logging debug samples logged using TensorBoard's `SummaryWriter`
* [PyTorch TensorBoardX](../guides/frameworks/pytorch/pytorch_tensorboardx.md) - Demonstrates ClearML automatically logging PyTorch models, and scalars, debug samples, and text logged using TensorBoardX's `SummaryWriter`

@@ -18,7 +18,7 @@ from clearml import Task

task = Task.init(task_name="<task_name>", project_name="<project_name>")
```

And that's it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
* Source code and uncommitted changes
* Installed packages
* PyTorch Models

@@ -43,8 +43,7 @@ To control a task's framework logging, use the `auto_connect_frameworks` paramet

Completely disable all automatic logging by setting the parameter to `False`. For finer-grained control of logged
frameworks, input a dictionary with framework-boolean pairs.

For example, the following code will log PyTorch models, but will not log any information reported to TensorBoard:

```python
auto_connect_frameworks={'pytorch': True, 'tensorboard': False}
```

@@ -143,7 +142,7 @@ task.execute_remotely(queue_name='default', exit_process=True)

```

## Hyperparameter Optimization
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../fundamentals/hpo.md)
for more information.

@@ -17,7 +17,7 @@ from clearml import Task

task = Task.init(task_name="<task_name>", project_name="<project_name>")
```

And that's it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
* Source code and uncommitted changes
* Installed packages
* Joblib model files

@@ -56,7 +56,7 @@ See more information about explicitly logging information to a ClearML Task:

* [Text/Plots/Debug Samples](../fundamentals/logger.md#manual-reporting)

### Examples
Take a look at ClearML's TensorBoard examples:
* [TensorBoard PR Curve](../guides/frameworks/tensorflow/tensorboard_pr_curve.md) - Demonstrates logging TensorBoard outputs and TensorFlow flags
* [TensorBoard Toy](../guides/frameworks/tensorflow/tensorboard_toy.md) - Demonstrates logging TensorBoard histograms, scalars, images, text, and TensorFlow flags
* [Tensorboard with PyTorch](../guides/frameworks/pytorch/pytorch_tensorboard.md) - Demonstrates logging TensorBoard scalars, debug samples, and text integrated in code that uses PyTorch

@@ -56,7 +56,7 @@ See more information about explicitly logging information to a ClearML Task:

### Examples

Take a look at ClearML's TensorboardX examples:

* [TensorboardX with PyTorch](../guides/frameworks/tensorboardx/tensorboardx.md) - Demonstrates ClearML logging TensorboardX scalars, debug
samples, and text in code using PyTorch

@@ -17,7 +17,7 @@ from clearml import Task

task = Task.init(task_name="<task_name>", project_name="<project_name>")
```

And that's it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
* Source code and uncommitted changes
* Installed packages
* TensorFlow definitions

@@ -75,17 +75,17 @@ See [Explicit Reporting Tutorial](../guides/reporting/explicit_reporting.md).

## Examples

Take a look at ClearML's TensorFlow examples. The examples use TensorFlow and ClearML in different configurations with
additional tools, like Abseil and TensorBoard:

* [TensorFlow MNIST](../guides/frameworks/tensorflow/tensorflow_mnist.md) - Demonstrates ClearML's automatic logging of model checkpoints, TensorFlow definitions, and scalars logged using TensorFlow methods
* [TensorBoard PR Curve](../guides/frameworks/tensorflow/tensorboard_pr_curve.md) - Demonstrates ClearML's automatic logging of TensorBoard output and TensorFlow definitions
* [TensorBoard Toy](../guides/frameworks/tensorflow/tensorboard_toy.md) - Demonstrates ClearML's automatic logging of TensorBoard scalars, histograms, images, and text, as well as all console output and TensorFlow definitions
* [Absl flags](https://github.com/allegroai/clearml/blob/master/examples/frameworks/tensorflow/absl_flags.py) - Demonstrates ClearML's automatic logging of parameters defined using `absl.flags`

## Remote Execution
ClearML logs all the information required to reproduce an experiment on a different machine (installed packages,

@@ -129,6 +129,6 @@ task.execute_remotely(queue_name='default', exit_process=True)

```

## Hyperparameter Optimization
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../fundamentals/hpo.md)
for more information.

@@ -25,7 +25,7 @@ All you have to do is install and set up ClearML:

clearml-init
```

That's it! In every training run from now on, the ClearML experiment
manager will capture:
* Source code and uncommitted changes
* Hyperparameters - PyTorch trainer [parameters](https://huggingface.co/docs/transformers/v4.34.1/en/main_classes/trainer#transformers.TrainingArguments)

@@ -38,7 +38,7 @@ and TensorFlow definitions
* And more

All of this is captured into a [ClearML Task](../fundamentals/task.md). By default, a task called `Trainer` is created
in the `HuggingFace Transformers` project. To change the task's name or project, use the `CLEARML_PROJECT` and `CLEARML_TASK`
environment variables.

:::tip project names
@@ -48,7 +48,7 @@ task within the `example` project.

In order to log the models created during training, set the `CLEARML_LOG_MODEL` environment variable to `True`.
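
Putting these together, a hypothetical run might be configured as follows (the project, task, and script names are illustrative):

```shell
export CLEARML_PROJECT="example/sample"   # '/' creates a subproject under 'example'
export CLEARML_TASK="bert-finetune"       # hypothetical task name
export CLEARML_LOG_MODEL=True             # also upload model checkpoints
# then launch your Transformers training script as usual
```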

You can see all the captured data in the task's page of the ClearML [WebApp](../webapp/webapp_exp_track_visual.md).



@@ -79,7 +79,7 @@ and shuts down instances as needed, according to a resource budget that you set.



Use ClearML's web interface to edit task details, like configuration parameters or input models, then execute the task
with the new configuration on a remote machine:
* Clone the experiment
* Edit the hyperparameters and/or other details

@@ -88,6 +88,6 @@ with the new configuration on a remote machine:

The ClearML Agent executing the task will use the new values to [override any hard coded values](../clearml_agent.md).

## Hyperparameter Optimization
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../fundamentals/hpo.md)
for more information.

@@ -17,7 +17,7 @@ from clearml import Task

task = Task.init(task_name="<task_name>", project_name="<project_name>")
```

And that's it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
* Source code and uncommitted changes
* Installed packages
* XGBoost model files

@@ -143,6 +143,6 @@ task.execute_remotely(queue_name='default', exit_process=True)

```

## Hyperparameter Optimization
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../fundamentals/hpo.md)
for more information.

@@ -27,7 +27,7 @@ built in logger:

clearml-init
```

That's it! Now, whenever you train a model using YOLOv5, the run will be captured and tracked by ClearML – no additional
code necessary.

## Training YOLOv5 with ClearML

@@ -54,7 +54,7 @@ manager will capture:

* And more

All of this is captured into a [ClearML Task](../fundamentals/task.md). By default, a task called `Training` is created
in the `YOLOv5` project. To change the task's name or project, use the `--project` and `--name` arguments when running
the `train.py` script.
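
For example (the dataset, weights, and project/run names below are illustrative, not values from this guide):

```shell
python train.py --img 640 --batch 16 --epochs 3 \
    --data coco128.yaml --weights yolov5s.pt \
    --project my_project --name my_training
```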
@@ -66,7 +66,7 @@ ClearML uses `/` as a delimiter for subprojects: using `example/sample` as a nam

task within the `example` project.
:::

You can see all the captured data in the task's page of the ClearML [WebApp](../webapp/webapp_exp_track_visual.md).
Additionally, you can view all of your YOLOv5 runs tracked by ClearML in the [Experiments Table](../webapp/webapp_model_table.md).
Add custom columns to the table, such as mAP values, so you can easily sort and see what is the best performing model.
You can also select multiple experiments and directly [compare](../webapp/webapp_exp_comparing.md) them.

@@ -94,7 +94,7 @@ dataset using the link in the yaml file or the scripts provided by YOLOv5, you g

```

You can use any dataset, as long as you maintain this folder structure.
Copy the dataset's corresponding yaml file to the root of the dataset folder.

```
..

@@ -171,7 +171,7 @@ and shuts down instances as needed, according to a resource budget that you set.



Use ClearML's web interface to edit task details, like configuration parameters or input models, then execute the task
with the new configuration on a remote machine:
* Clone the experiment
* Edit the hyperparameters and/or other details

@@ -200,7 +200,7 @@ if RANK in {-1, 0}:

```

## Hyperparameter Optimization
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
the hyperparameter values that yield the best performing models.

To run hyperparameter optimization locally, you can use the [template script](https://github.com/ultralytics/yolov5/blob/master/utils/loggers/clearml/hpo.py)

@@ -38,7 +38,7 @@ segmentation, and classification. Get the most out of YOLOv8 with ClearML:

clearml-init
```

That's it! Now, whenever you train a model using YOLOv8, the run will be captured and tracked by ClearML – no additional
code necessary.

## Training YOLOv8 with ClearML

@@ -64,7 +64,7 @@ manager will capture:

* And more

All of this is captured into a [ClearML Task](../fundamentals/task.md): a task with your training script's name
created in a `YOLOv8` ClearML project. To change the task's name or project, pass the `name` and `project` arguments in one of
the following ways:
* Via the SDK:
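
  A minimal SDK sketch (the weights file, dataset yaml, and project/experiment names are assumptions for illustration):

  ```python
  from ultralytics import YOLO

  model = YOLO("yolov8n.pt")  # pretrained weights to start from
  # 'project' and 'name' determine the ClearML project and task names
  model.train(data="coco128.yaml", epochs=3, project="my_project", name="my_experiment")
  ```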
@@ -89,7 +89,7 @@ ClearML uses `/` as a delimiter for subprojects: using `example/sample` as a nam

task within the `example` project.
:::

You can see all the captured data in the task's page of the ClearML [WebApp](../webapp/webapp_exp_track_visual.md).
Additionally, you can view all of your YOLOv8 runs tracked by ClearML in the [Experiments Table](../webapp/webapp_model_table.md).
Add custom columns to the table, such as mAP values, so you can easily sort and see what is the best performing model.
You can also select multiple experiments and directly [compare](../webapp/webapp_exp_comparing.md) them.

@@ -115,7 +115,7 @@ shuts down instances as needed, according to a resource budget that you set.

### Cloning, Editing, and Enqueuing

ClearML logs all the information required to reproduce an experiment, but you may also want to change a few parameters
and task details when you re-run an experiment, which you can do through ClearML's UI.

In order to be able to override parameters via the UI,
you have to run your code to [create a ClearML Task](../clearml_sdk/task_sdk.md#task-creation), which will log all the