Small edits (#636)

pollfly 2023-08-09 13:28:25 +03:00 committed by GitHub
parent c0ad27a48b
commit bdcf043fe5
39 changed files with 73 additions and 74 deletions

View File

@ -12,7 +12,7 @@ to a remote machine.
![ClearML Agent flow diagram](img/clearml_agent_flow_diagram.png)
-The diagram above demonstrates a typical flow where an agent executes a task:
+The preceding diagram demonstrates a typical flow where an agent executes a task:
1. Enqueue a task for execution on the queue.
1. The agent pulls the task from the queue.
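For instance, step 1 might be done programmatically like this (a sketch; the task ID and queue name are placeholders):

```python
from clearml import Task

task = Task.get_task(task_id='<task_id>')
Task.enqueue(task, queue_name='default')  # an agent listening to 'default' will pull and execute it
```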
@ -288,7 +288,7 @@ There are two options for deploying the ClearML Agent to a Kubernetes cluster:
* Spin ClearML Agent as a long-lasting service pod
* Map ClearML jobs directly to K8s jobs with Kubernetes Glue (available in the ClearML Enterprise plan)
-See more details [here](https://github.com/allegroai/clearml-agent#kubernetes-integration-optional).
+For more details, see [Kubernetes integration](https://github.com/allegroai/clearml-agent#kubernetes-integration-optional).
### Explicit Task Execution

View File

@ -45,7 +45,7 @@ ClearML Data supports two interfaces:
- `clearml-data` - A CLI utility for creating, uploading, and managing datasets. See [CLI](clearml_data_cli.md) for a reference of `clearml-data` commands.
- `clearml.Dataset` - A python interface for creating, retrieving, managing, and using datasets. See [SDK](clearml_data_sdk.md) for an overview of the basic methods of the `Dataset` module.
-For an overview of our recommendations for ClearML Data workflows and practices, see [Best Practices](best_practices.md).
+For an overview of recommendations for ClearML Data workflows and practices, see [Best Practices](best_practices.md).
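For example, a minimal sketch of creating and uploading a dataset version with the SDK (names and paths are placeholders):

```python
from clearml import Dataset

dataset = Dataset.create(dataset_name='my dataset', dataset_project='examples')
dataset.add_files(path='/path/to/data')  # stage local files for this version
dataset.upload()    # upload the staged files
dataset.finalize()  # close the version so it can be consumed
```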
## Dataset Version States
The following table displays the possible states for a dataset version.

View File

@ -75,7 +75,7 @@ To improve deep dataset DAG storage and speed, dataset squashing was introduced.
class method generates a new dataset by squashing a set of dataset versions, and merging down all changes introduced in
their lineage DAG, creating a new, flat, independent version.
-The datasets being squashed into a single dataset can be specified by their IDs or by project & name pairs.
+The datasets being squashed into a single dataset can be specified by their IDs or by project and name pairs.
```python
# option 1 - list dataset IDs (completed sketch; IDs are placeholders)
squashed_dataset = Dataset.squash(dataset_name='squashed dataset', dataset_ids=['<dataset_id_1>', '<dataset_id_2>'])
# option 2 - list project and name pairs
squashed_dataset = Dataset.squash(dataset_name='squashed dataset', dataset_project_name_pairs=[('examples', 'dataset v1')])
```

View File

@ -94,7 +94,7 @@ dataset_path = Dataset.get(
).get_local_copy()
```
-The script above gets the dataset and uses the [`Dataset.get_local_copy`](../../references/sdk/dataset.md#get_local_copy)
+The preceding script gets the dataset and uses the [`Dataset.get_local_copy`](../../references/sdk/dataset.md#get_local_copy)
method to return a path to the cached, read-only local dataset.
If you need a modifiable copy of the dataset, use the following code:
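A minimal sketch of that snippet (dataset names and the target path are placeholders):

```python
from clearml import Dataset

dataset = Dataset.get(dataset_name='dataset name', dataset_project='dataset project')
# downloads a writable copy of the dataset to the target folder
dataset_path = dataset.get_mutable_local_copy(target_folder='/path/to/writable/copy')
```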

View File

@ -197,7 +197,7 @@ These methods can be used on `Model`, `InputModel`, and/or `OutputModel` objects
* Table - [`report_table`](../references/sdk/model_outputmodel.md#report_table)
* Line plot - [`report_line_plot`](../references/sdk/model_outputmodel.md#report_line_plot)
* Scatter plot - [`report_scatter2d`](../references/sdk/model_outputmodel.md#report_scatter2d)
-* Confusion matrix (heat map) - [`report_confusion_matrix`](../references/sdk/model_outputmodel.md#report_confusion_matrix) & [`report_matrix`](../references/sdk/model_outputmodel.md#report_matrix)
+* Confusion matrix (heat map) - [`report_confusion_matrix`](../references/sdk/model_outputmodel.md#report_confusion_matrix) and [`report_matrix`](../references/sdk/model_outputmodel.md#report_matrix)
* 3d plots
* Scatter plot - [`report_scatter3d`](../references/sdk/model_outputmodel.md#report_scatter3d)
* Surface plot - [`report_surface`](../references/sdk/model_outputmodel.md#report_surface)
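For illustration, reporting a confusion matrix on an output model might look like this (a sketch; the project/task names and matrix values are placeholders):

```python
import numpy as np
from clearml import Task, OutputModel

task = Task.init(project_name='examples', task_name='model reporting')
model = OutputModel(task=task)
model.report_confusion_matrix(
    title='Confusion matrix',
    series='validation',
    iteration=1,
    matrix=np.random.randint(0, 10, size=(4, 4)),
)
```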

View File

@ -11,7 +11,7 @@ populate it with:
* A link to the running git repository (including commit ID and local uncommitted changes)
* Python packages used (i.e. directly imported Python packages, and the versions available on the machine)
* Argparse arguments (default and specific to the current execution)
-* Reports to Tensorboard & Matplotlib and model checkpoints.
+* Reports to Tensorboard and Matplotlib and model checkpoints.
:::tip Ensuring Reproducibility
To ensure every run will provide the same results, ClearML controls the deterministic behaviors of the `tensorflow`,
@ -340,7 +340,7 @@ The default operator for a query is `or`, unless `and` is placed at the beginnin
)
```
-## Cloning & Executing Tasks
+## Cloning and Executing Tasks
Once a task object is created, it can be copied (cloned). [`Task.clone()`](../references/sdk/task.md#taskclone) returns
a copy of the original task (`source_task`). By default, the cloned task is added to the same project as the original,
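A minimal sketch of cloning and enqueuing (the source task ID and queue name are placeholders):

```python
from clearml import Task

source_task = Task.get_task(task_id='<source_task_id>')
cloned_task = Task.clone(source_task=source_task, name='cloned task')
Task.enqueue(cloned_task, queue_name='default')  # send the clone for execution by an agent
```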
@ -714,7 +714,7 @@ local_weights_path = last_snapshot.get_local_copy()
Notice that if one of the frameworks loads an existing weights file, the running task will automatically update its
"Input Model", pointing directly to the original training task's model. This makes it easy to get the full lineage of
-every trained and used model in our system!
+every trained and used model in your system!
Models loaded by the ML framework appear in an experiment's **Artifacts** tab under the "Input Models" section in the ClearML UI.

View File

@ -9,7 +9,7 @@ solution.
## Features
-* Easy to deploy & configure
+* Easy to deploy and configure
* Support Machine Learning Models (Scikit Learn, XGBoost, LightGBM)
* Support Deep Learning Models (TensorFlow, PyTorch, ONNX)
* Customizable RestAPI for serving (i.e. allow per model pre/post-processing for easy integration)
@ -25,7 +25,7 @@ solution.
* Out-of-the-box node autoscaling based on load/usage
* Efficient
* Multi-container resource utilization
-* Support for CPU & GPU nodes
+* Support for CPU and GPU nodes
* Auto-batching for DL models
* [Automatic deployment](clearml_serving_tutorial.md#automatic-model-deployment)
* Automatic model upgrades w/ canary support
@ -52,7 +52,7 @@ solution.
* **Serving Engine Services** - Inference engine containers (e.g. Nvidia Triton, TorchServe etc.) used by the Inference
Services for heavier model inference.
-* **Statistics Service** - Single instance per Serving Service collecting and broadcasting model serving & performance
+* **Statistics Service** - Single instance per Serving Service collecting and broadcasting model serving and performance
statistics
* **Time-series DB** - Statistics collection service used by the Statistics Service, e.g. Prometheus

View File

@ -21,7 +21,7 @@ clearml-serving [-h] [--debug] [--yes] [--id ID] {list,create,metrics,config,mod
|Name|Description|Optional|
|---|---|---|
-|`--id`|Serving Service (Control plane) Task ID to configure (if not provided automatically detect the running control plane Task) | <img src="/docs/latest/icons/ico-optional-no.svg" alt="No" className="icon size-md center-md" /> |
+|`--id`|Serving Service (Control plane) Task ID to configure (if not provided, automatically detect the running control plane Task) | <img src="/docs/latest/icons/ico-optional-no.svg" alt="No" className="icon size-md center-md" /> |
|`--debug` | Print debug messages | <img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" /> |
|`--yes` |Always answer YES on interactive inputs| <img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" /> |

View File

@ -8,7 +8,7 @@ The following page goes over how to set up and upgrade `clearml-serving`.
* ClearML-Server : Model repository, Service Health, Control plane
* Kubernetes / Single-instance Machine : Deploying containers
-* CLI : Configuration & model deployment interface
+* CLI : Configuration and model deployment interface
## Initial Setup
1. Set up your [ClearML Server](../deploying_clearml/clearml_server.md) or use the

View File

@ -26,7 +26,7 @@ Train a model. Work from your local `clearml-serving` repository's root.
`python3 examples/sklearn/train_model.py`.
During execution, ClearML automatically registers the sklearn model and uploads it into the model repository.
-For Manual model registration see [here](#registering--deploying-new-models-manually)
+For Manual model registration see [here](#registering-and-deploying-new-models-manually)
### Step 2: Register Model
@ -79,7 +79,7 @@ Inference services status, console outputs and machine metrics are available in
project (default: "DevOps" project)
:::
-## Registering & Deploying New Models Manually
+## Registering and Deploying New Models Manually
Uploading an existing model file into the model repository can be done via the `clearml` RestAPI, the python interface,
or with the `clearml-serving` CLI.
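For example, registering an existing model file through the python interface might look like this (a sketch; the names, framework, and path are placeholders):

```python
from clearml import Task, OutputModel

task = Task.init(project_name='serving examples', task_name='manual model registration')
model = OutputModel(task=task, framework='ScikitLearn')
# upload the existing weights file into the model repository
model.update_weights(weights_filename='/path/to/sklearn-model.pkl', auto_delete_file=False)
```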
@ -196,7 +196,7 @@ ClearML serving instances send serving statistics (count/latency) automatically
to visualize and create live dashboards.
The default docker-compose installation is preconfigured with Prometheus and Grafana. Notice that by default data/state
-of both containers is *not* persistent. To add persistence, we recommend adding a volume mount.
+of both containers is *not* persistent. To add persistence, adding a volume mount is recommended.
You can also add many custom metrics on the input/predictions of your models. Once a model endpoint is registered,
adding custom metrics can be done using the CLI.

View File

@ -21,7 +21,7 @@ compare results.
![Hyperparameter optimization diagram](../img/hpo_diagram.png)
-The diagram above demonstrates the typical flow of hyperparameter optimization where the parameters of a base task are optimized:
+The preceding diagram demonstrates the typical flow of hyperparameter optimization where the parameters of a base task are optimized:
1. Configure an Optimization Task with a base task whose parameters will be optimized, and a set of parameter values to
test
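A sketch of such an optimization task in code (the base task ID, parameter range, and metric names are placeholders):

```python
from clearml.automation import HyperParameterOptimizer, UniformIntegerParameterRange
from clearml.automation.optuna import OptimizerOptuna

optimizer = HyperParameterOptimizer(
    base_task_id='<base_task_id>',        # the task whose parameters will be optimized
    hyper_parameters=[
        UniformIntegerParameterRange('General/batch_size', min_value=16, max_value=128, step_size=16),
    ],
    objective_metric_title='validation',  # the metric to optimize
    objective_metric_series='accuracy',
    objective_metric_sign='max',
    optimizer_class=OptimizerOptuna,
)
optimizer.start_locally()
optimizer.wait()
optimizer.stop()
```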

View File

@ -21,7 +21,7 @@ Tasks are grouped into a [project](projects.md) hierarchical structure, similar
how to group tasks, though different models or objectives are usually grouped into different projects.
Tasks can be accessed and utilized with code. [Access a task](../clearml_sdk/task_sdk.md#accessing-tasks) by
-specifying project name & task name combination or by a unique ID.
+specifying project name and task name combination or by a unique ID.
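For example (the names and ID are placeholders):

```python
from clearml import Task

# by project name and task name combination
task = Task.get_task(project_name='examples', task_name='my task')
# or by unique ID
task = Task.get_task(task_id='<task_id>')
```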
It's possible to create copies of a task ([clone](../webapp/webapp_exp_reproducing.md)) then execute them with
[ClearML Agent](../clearml_agent.md). When an agent executes a task, it uses the specified configuration to:

View File

@ -24,7 +24,7 @@ During early stages of model development, while code is still being modified hea
The abovementioned setups might be folded into each other and that's great! If you have a GPU machine for each researcher, that's awesome!
The goal of this phase is to get a code, dataset, and environment setup, so you can start digging to find the best model!
-- [ClearML SDK](../../clearml_sdk/clearml_sdk.md) should be integrated into your code (check out our [getting started](ds_first_steps.md)).
+- [ClearML SDK](../../clearml_sdk/clearml_sdk.md) should be integrated into your code (check out [Getting Started](ds_first_steps.md)).
This helps visualizing the results and tracking progress.
- [ClearML Agent](../../clearml_agent.md) helps moving your work to other machines without the hassle of rebuilding the environment every time,
while also creating an easy queue interface that easily lets you just drop your experiments to be executed one by one

View File

@ -133,6 +133,6 @@ Sit back, relax, and watch your models converge :) or continue to see what else
## YouTube Playlist
-Or watch the Getting Started Playlist on our YouTube Channel!
+Or watch the Getting Started Playlist on ClearML's YouTube Channel!
[![Watch the video](https://img.youtube.com/vi/bjWwZAzDxTY/hqdefault.jpg)](https://www.youtube.com/watch?v=bjWwZAzDxTY&list=PLMdIlCuMqSTnoC45ME5_JnsJX0zWqDdlO&index=2)

View File

@ -4,7 +4,7 @@ title: Next Steps
So, you've already [installed ClearML's python package](ds_first_steps.md) and run your first experiment!
-Now, we'll learn how to track Hyperparameters, Artifacts and Metrics!
+Now, you'll learn how to track Hyperparameters, Artifacts and Metrics!
## Accessing Experiments
@ -13,7 +13,7 @@ A Task has a project and a name, both can be changed after the experiment has be
A Task is also automatically assigned an auto-generated unique identifier (UUID string) that cannot be changed and always locates the same Task in the system.
It's possible to retrieve a Task object programmatically by querying the system based on either the Task ID,
-or project & name combination. It's also possible to query tasks based on their properties, like Tags.
+or project and name combination. It's also possible to query tasks based on their properties, like Tags.
```python
prev_task = Task.get_task(task_id='123456deadbeef')
prev_task = Task.get_task(project_name='examples', task_name='my task')  # or by project and name
```
@ -62,7 +62,7 @@ task.upload_artifact('/path/to/folder/', name='folder')
```
Lastly, you can upload an instance of an object; Numpy/Pandas/PIL Images are supported with npz/csv.gz/jpg formats accordingly.
-If the object type is unknown ClearML pickles it and uploads the pickle file.
+If the object type is unknown, ClearML pickles it and uploads the pickle file.
```python
numpy_object = np.eye(100, 100)
task.upload_artifact(name='numpy array', artifact_object=numpy_object)  # completed sketch; the name is a placeholder
```
@ -74,8 +74,8 @@ Check out all [artifact logging](../../clearml_sdk/task_sdk.md#artifacts) option
### Using Artifacts
Logged artifacts can be used by other Tasks, whether it's a pre-trained Model or processed data.
-To use an artifact, first we have to get an instance of the Task that originally created it,
-then we either download it and get its path, or get the artifact object directly.
+To use an artifact, first you have to get an instance of the Task that originally created it,
+then you either download it and get its path, or get the artifact object directly.
For example, using previously generated preprocessed data:
@ -85,7 +85,7 @@ local_csv = preprocess_task.artifacts['data'].get_local_copy()
```
`task.artifacts` is a dictionary where the keys are the artifact names, and the returned object is the artifact object.
-Calling `get_local_copy()` returns a local cached copy of the artifact. Therefore, next time we execute the code, we don't
+Calling `get_local_copy()` returns a local cached copy of the artifact. Therefore, next time you execute the code, you don't
need to download the artifact again.
Calling `get()` gets a deserialized pickled object.
@ -95,8 +95,8 @@ Check out the [artifacts retrieval](https://github.com/allegroai/clearml/blob/ma
Models are a special kind of artifact.
Models created by popular frameworks (such as PyTorch, TensorFlow, Scikit-learn) are automatically logged by ClearML.
-All snapshots are automatically logged. In order to make sure we also automatically upload the model snapshot (instead of saving its local path),
-we need to pass a storage location for the model files to be uploaded to.
+All snapshots are automatically logged. In order to make sure you also automatically upload the model snapshot (instead of saving its local path),
+pass a storage location for the model files to be uploaded to.
For example, upload all snapshots to an S3 bucket:
```python
# completed sketch; the bucket URI is a placeholder
task = Task.init(project_name='examples', task_name='storing model', output_uri='s3://my_bucket/')
```
@ -126,18 +126,18 @@ last_snapshot = prev_task.models['output'][-1]
local_weights_path = last_snapshot.get_local_copy()
```
-Like before we have to get the instance of the Task training the original weights files, then we can query the task for its output models (a list of snapshots), and get the latest snapshot.
+Like before, you have to get the instance of the task training the original weights files, then you can query the task for its output models (a list of snapshots), and get the latest snapshot.
:::note
Using TensorFlow, the snapshots are stored in a folder, meaning the `local_weights_path` will point to a folder containing your requested snapshot.
:::
-As with artifacts, all models are cached, meaning the next time we run this code, no model needs to be downloaded.
-Once one of the frameworks will load the weights file, the running Task will be automatically updated with “Input Model” pointing directly to the original training Tasks Model.
+As with artifacts, all models are cached, meaning the next time you run this code, no model needs to be downloaded.
+Once one of the frameworks will load the weights file, the running task will be automatically updated with “Input Model” pointing directly to the original training Tasks Model.
This feature lets you easily get a full genealogy of every trained and used model by your system!
## Log Metrics
Full metrics logging is the key to finding the best performing model!
-By default, everything that's reported to Tensorboard & Matplotlib is automatically captured and logged.
+By default, everything that's reported to Tensorboard and Matplotlib is automatically captured and logged.
Since not all metrics are tracked that way, it's also possible to manually report metrics using the `logger` object.
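For example, manually reporting a scalar might look like this (titles and values are placeholders):

```python
from clearml import Task

task = Task.init(project_name='examples', task_name='manual reporting')
logger = task.get_logger()
logger.report_scalar(title='loss', series='train', value=0.26, iteration=1)
```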
@ -171,7 +171,7 @@ Later you can search based on task name and tag in the search bar, and filter ex
## What's Next?
-This covers the Basics of ClearML! Running through this guide we've learned how to log Parameters, Artifacts and Metrics!
+This covers the Basics of ClearML! Running through this guide you've learned how to log Parameters, Artifacts and Metrics!
If you want to learn more, look at how we see the data science process in our [best practices](best_practices.md) page,
or check these pages out:
@ -180,12 +180,12 @@ or check these pages out:
- Develop on remote machines with [ClearML Session](../../apps/clearml_session.md)
- Structure your work and put it into [Pipelines](../../pipelines/pipelines.md)
- Improve your experiments with [Hyperparameter Optimization](../../fundamentals/hpo.md)
-- Check out ClearML's integrations with your favorite ML frameworks like [TensorFlow](../../guides/frameworks/tensorflow/tensorflow_mnist.md),
+- Check out ClearML's integrations with your favorite ML frameworks like [TensorFlow](../../integrations/tensorflow.md),
[PyTorch](../../guides/frameworks/pytorch/pytorch_mnist.md), [Keras](../../guides/frameworks/keras/keras_tensorboard.md),
and more
## YouTube Playlist
-All these tips and tricks are also covered by our YouTube Getting Started series, go check it out :)
+All these tips and tricks are also covered in ClearML's **Getting Started** series on YouTube, go check it out :)
[![Watch the video](https://img.youtube.com/vi/kyOfwVg05EM/hqdefault.jpg)](https://www.youtube.com/watch?v=kyOfwVg05EM&list=PLMdIlCuMqSTnoC45ME5_JnsJX0zWqDdlO&index=3)

View File

@ -11,7 +11,7 @@ If you are afraid of clutter, use the archive option, and set up your own [clean
- Track the code base. There is no reason not to add metrics to any process in your workflow, even if it is not directly ML. Visibility is key to iterative improvement of your code / workflow.
- Create per-project [leaderboards](../../guides/ui/building_leader_board.md) based on custom columns
-(hyperparameters and performance accuracy), and bookmark them (full URL will always reproduce the same view & table).
+(hyperparameters and performance accuracy), and bookmark them (full URL will always reproduce the same view and table).
- Share experiments with your colleagues and team-leaders.
Invite more people to see how your project is progressing, and suggest they add metric reporting for their own.
These metrics can later be part of your own in-house monitoring solution, don't let good data go to waste :)

View File

@ -8,7 +8,7 @@ but sometimes, when using a Docker container, a user may need to use additional,
## Tutorial
-In this tutorial, we will learn how to use `extra_docker_shell_script`, with which we will reconfigure an Agent to execute
+In this tutorial, you will learn how to use `extra_docker_shell_script` to reconfigure an Agent to execute
a shell script when a docker is started, but before an experiment is run.
## Prerequisites
@ -23,8 +23,8 @@ a shell script when a docker is started, but before an experiment is run.
* Mac - `$HOME/clearml.conf`
* Windows - `\User\<username>\clearml.conf`
-1. In the file, search for and go to, `extra_docker_shell_script:`, which is where we will be putting our extra script. If
-it is commented out, make sure to uncomment the line. We will use the example script that is already there `["apt-get install -y bindfs", ]`.
+1. In the file, go to, `extra_docker_shell_script:`, which is where you will put an extra script. If
+it is commented out, make sure to uncomment the line. Use the example script that is already there `["apt-get install -y bindfs", ]`.
1. Search for and go to `docker_force_pull` in the document, and make sure that it is set to `true`, so that your docker
image will be updated.
@ -34,7 +34,7 @@ it is commented out, make sure to uncomment the line. We will use the example sc
1. Enqueue any ClearML Task to the `default` queue, which the Agent is now listening to. The Agent pulls the Task, and then reproduces it,
and now it will execute the `extra_docker_shell_script` that was put in the configuration file. Then the code will be
-executed in the updated docker container. If we look at the console output in the web UI, the third entry should start
+executed in the updated docker container. If you look at the console output in the web UI, the third entry should start
with `Executing: ['docker', 'run', '-t', '--gpus...'`, and towards the end of the entry, where the downloaded packages are
-mentioned, we can see the additional shell-script `apt-get install -y bindfs`.
+mentioned, you can see the additional shell-script `apt-get install -y bindfs`.

View File

@ -2,7 +2,7 @@
title: Audio Classification - Jupyter Notebooks
---
-The example [audio_classification_UrbanSound8K.ipynb](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/audio/audio_classifier_UrbanSound8K.ipynb) demonstrates integrating ClearML into a Jupyter Notebook which uses PyTorch, TensorBoard, and TorchVision to train a neural network on the UrbanSound8K dataset for audio classification. The example calls TensorBoard methods in training and testing to report scalars, audio debug samples, and spectrogram visualizations. The spectrogram visualizations are plotted by calling Matplotlib methods. In the example, we also demonstrate connecting parameters to a Task and logging them. When the script runs, it creates an experiment named `audio classification UrbanSound8K` which is associated with the `Audio Example` project.
+The [audio_classification_UrbanSound8K.ipynb](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/audio/audio_classifier_UrbanSound8K.ipynb) example script demonstrates integrating ClearML into a Jupyter Notebook which uses PyTorch, TensorBoard, and TorchVision to train a neural network on the UrbanSound8K dataset for audio classification. The example calls TensorBoard methods in training and testing to report scalars, audio debug samples, and spectrogram visualizations. The spectrogram visualizations are plotted by calling Matplotlib methods. In the example, we also demonstrate connecting parameters to a Task and logging them. When the script runs, it creates an experiment named `audio classification UrbanSound8K` which is associated with the `Audio Example` project.
## Scalars

View File

@ -55,7 +55,7 @@ and running, users can send Tasks to be executed on Google Colab's HW.
For additional options for running `clearml-agent`, see the [clearml-agent reference](../../clearml_agent/clearml_agent_ref.md).
-After cell 4 is executed, the worker should now appear in the [**Orchestration**](../../webapp/webapp_workers_queues.md)
+After executing cell 4, the worker appears in the [**Orchestration**](../../webapp/webapp_workers_queues.md)
page of your server. Clone experiments and enqueue them to your heart's content! The `clearml-agent` will fetch
experiments and execute them using the Google Colab hardware.

View File

@ -2,8 +2,8 @@
title: Remote Jupyter Tutorial
---
-In this tutorial we will learn how to launch a remote interactive session on Jupyter Notebook using `clearml-session`.
-We will be using two machines. A local one, where we will be using an interactive session of Jupyter, and a remote machine,
+In this tutorial you will learn how to launch a remote interactive session on Jupyter Notebook using `clearml-session`.
+You will be using two machines. A local one, where you will be using an interactive session of Jupyter, and a remote machine,
where a `clearml-agent` will run and spin an instance of the remote session.
## Prerequisites
@ -93,17 +93,17 @@ Now, let's execute some code in the remote session!
1. Open up a new Notebook.
-1. In the first cell of the notebook, clone the [ClearML Repo](https://github.com/allegroai/clearml).
+1. In the first cell of the notebook, clone the [ClearML repository](https://github.com/allegroai/clearml):
!git clone https://github.com/allegroai/clearml.git
-1. In the second cell of the notebook, we are going to run this [script](https://github.com/allegroai/clearml/blob/master/examples/frameworks/keras/keras_tensorboard.py)
-from the repository that we cloned.
+1. In the second cell of the notebook, run this [script](https://github.com/allegroai/clearml/blob/master/examples/frameworks/keras/keras_tensorboard.py)
+from the cloned repository:
%run clearml/examples/frameworks/keras/keras_tensorboard.py
-Look in the script, and notice that it makes use of ClearML, Keras, and TensorFlow, but we don't need to install these
-packages in Jupyter, because we specified them in the `--packages` flag of `clearml-session`.
+Look in the script, and notice that it makes use of ClearML, Keras, and TensorFlow, but you don't need to install these
+packages in Jupyter, because you specified them in the `--packages` flag of `clearml-session`.
### Step 5: Shut Down Remote Session

View File

@ -8,6 +8,6 @@ slug: /guides
To help learn and use ClearML, we provide example scripts that demonstrate how to use ClearML's various features.
Example scripts are in the [examples](https://github.com/allegroai/clearml/tree/master/examples) folder of the GitHub `clearml`
-repository. They are also preloaded in the **ClearML Server**:
+repository. They are also preloaded in the **ClearML Server**.
Each examples folder in the GitHub ``clearml`` repository contains a ``requirements.txt`` file for example scripts in that folder.

View File

@ -33,7 +33,7 @@ Visualize the reported surface plot in **PLOTS**.
## 3D Scatter Plot
-To plot a series as a 3-dimensional scatter plot, use the [Logger.report_scatter3d](../../references/sdk/logger.md#report_scatter3d)
+To plot a series as a 3D scatter plot, use the [Logger.report_scatter3d](../../references/sdk/logger.md#report_scatter3d)
method.
```python
# report 3d scatter plot (completed sketch; assumes `logger` and `np` from the surrounding example)
scatter_3d = np.random.randint(10, size=(10, 3))
logger.report_scatter3d(title='example scatter', series='series xyz', iteration=1, scatter=scatter_3d, xaxis='title x', yaxis='title y', zaxis='title z')
```

View File

@ -67,7 +67,7 @@ logger.report_scatter2d(
### 3D Plots
-To plot a series as a 3-dimensional scatter plot, use the [Logger.report_scatter3d](../../references/sdk/logger.md#report_scatter3d) method.
+To plot a series as a 3D scatter plot, use the [Logger.report_scatter3d](../../references/sdk/logger.md#report_scatter3d) method.
```python
# report 3d scatter plot (completed sketch; assumes `logger` and `np` from the surrounding example)
scatter_3d = np.random.randint(10, size=(10, 3))
logger.report_scatter3d(title='example scatter', series='series xyz', iteration=1, scatter=scatter_3d, xaxis='title x', yaxis='title y', zaxis='title z')
```

View File

@ -4,7 +4,7 @@ title: Explicit Reporting Tutorial
In this tutorial, learn how to extend ClearML automagical capturing of inputs and outputs with explicit reporting.
-In this example, we will add the following to the [pytorch_mnist.py](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/pytorch_mnist.py)
+In this example, you will add the following to the [pytorch_mnist.py](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/pytorch_mnist.py)
example script from ClearML's GitHub repo:
* Setting an output destination for model checkpoints (snapshots).
@ -38,7 +38,7 @@ experiment runs. Some possible destinations include:
* Azure Storage.
Specify the output location in the `output_uri` parameter of the [`Task.init`](../../references/sdk/task.md#taskinit) method.
-In this tutorial, we specify a local folder destination.
+In this tutorial, specify a local folder destination.
In `pytorch_mnist_tutorial.py`, change the code from:
@ -135,7 +135,7 @@ def train(args, model, device, train_loader, optimizer, epoch):
### Plot Other (Not Scalar) Data
-The script contains a function named `test`, which determines loss and correct for the trained model. We add a histogram
+The script contains a function named `test`, which determines loss and correct for the trained model. Add a histogram
and confusion matrix to log them.
```python
# completed sketch of the added reporting calls; values are placeholders
import numpy as np
from clearml import Logger

logger = Logger.current_logger()
logger.report_histogram(title='histogram', series='correct', iteration=1, values=np.random.randint(10, size=10))
logger.report_confusion_matrix(title='confusion matrix', series='test', iteration=1, matrix=np.random.randint(10, size=(10, 10)))
```
@ -187,7 +187,7 @@ def test(args, model, device, test_loader):
### Log Text
-Extend ClearML by explicitly logging text, including errors, warnings, and debugging statements. We use the [Logger.report_text](../../references/sdk/logger.md#report_text)
+Extend ClearML by explicitly logging text, including errors, warnings, and debugging statements. Use the [Logger.report_text](../../references/sdk/logger.md#report_text)
method and its argument `level` to report a debugging message.
```python
# completed sketch of reporting a debug-level message
import logging

from clearml import Logger

Logger.current_logger().report_text('this is a debugging message', level=logging.DEBUG, print_console=False)
```
@ -259,7 +259,7 @@ Supported artifacts include:
* Dictionaries - stored as JSONs
* Numpy arrays - stored as NPZ files
-In the tutorial script, we upload the loss data as an artifact using the [Task.upload_artifact](../../references/sdk/task.md#upload_artifact)
+In the tutorial script, upload the loss data as an artifact using the [Task.upload_artifact](../../references/sdk/task.md#upload_artifact)
method with metadata specified in the `metadata` parameter.
```python
# completed sketch; `loss_data` stands in for the collected loss values
task.upload_artifact(name='loss', artifact_object=loss_data, metadata={'metadata string': 'apple', 'metadata int': 100})
```

View File

@ -2,7 +2,7 @@
title: Tracking Leaderboards
---
-In this tutorial, we will set up a tracking leaderboard. A tracking leaderboard allows easy monitoring of experiments
+In this tutorial, you will set up a tracking leaderboard. A tracking leaderboard allows easy monitoring of experiments
using a customized [experiments table](../../webapp/webapp_exp_table.md) with auto refresh for continual updates.
The tracking leaderboard can be customized to include columns with information about:

View File

@ -24,7 +24,6 @@ And that's it! This creates a [ClearML Task](../fundamentals/task.md) which ca
* Scalars logged to popular frameworks like TensorBoard
* Console output
* General details such as machine details, runtime, creation date etc.
-* And more
You can view all the task details in the [WebApp](../webapp/webapp_overview.md).

View File

@ -20,7 +20,7 @@ built in logger:
1. To keep track of your experiments and/or data, ClearML needs to communicate to a server. You have 2 server options:
* Sign up for free to the [ClearML Hosted Service](https://app.clear.ml/)
* Set up your own server, see [here](../deploying_clearml/clearml_server.md).
-1. Connect the ClearML SDK to the server by creating credentials (go to the top right in to UI to **Settings > Workspace > Create new credentials**),
+1. Connect the ClearML SDK to the server by creating credentials (go to the top right in the UI to **Settings > Workspace > Create new credentials**),
then execute the command below and follow the instructions:
```commandline
clearml-init
```

View File

@ -31,7 +31,7 @@ segmentation, and classification. Get the most out of YOLOv8 with ClearML:
1. To keep track of your experiments and/or data, ClearML needs to communicate to a server. You have 2 server options:
* Sign up for free to the [ClearML Hosted Service](https://app.clear.ml/)
* Set up your own server, see [here](../deploying_clearml/clearml_server.md).
-1. Connect the ClearML SDK to the server by creating credentials (go to the top right in to UI to **Settings > Workspace > Create new credentials**),
+1. Connect the ClearML SDK to the server by creating credentials (go to the top right in the UI to **Settings > Workspace > Create new credentials**),
then execute the command below and follow the instructions:
```commandline
clearml-init
```

View File

@ -138,7 +138,7 @@ def step_one(pickle_data_url: str, extra: int = 43):
decremented by 1. If the function returns `False`, the node is not retried.
* Callbacks - Control pipeline execution flow with callback functions
-* `pre_execute_callback` & `post_execute_callback` - Control pipeline flow with callback functions that can be called
+* `pre_execute_callback` and `post_execute_callback` - Control pipeline flow with callback functions that can be called
before and/or after a step's execution. See [here](pipelines_sdk_tasks.md#pre_execute_callback--post_execute_callback).
* `status_change_callback` - Callback function called when the status of a step changes. Use `node.job` to access the
`ClearmlJob` object, or `node.job.task` to directly access the Task object. The signature of the function must look like this:
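A sketch of that signature, based on the surrounding description:

```python
def status_change_callback(pipeline, node, previous_status):
    # pipeline: the PipelineController instance
    # node: the PipelineController.Node whose status changed (node.job, node.job.task)
    # previous_status: the step's previous status string
    print(f'step {node.name} changed from {previous_status}')
```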

View File

@ -165,7 +165,7 @@ pipe.add_function_step(
outputs are used instead of launching a new task.
* `parents` Optional list of parent steps in the pipeline. The current step in the pipeline will be sent for execution
only after all the parent steps have been executed successfully.
-* `pre_execute_callback` & `post_execute_callback` - Control pipeline flow with callback functions that can be called
+* `pre_execute_callback` and `post_execute_callback` - Control pipeline flow with callback functions that can be called
before and/or after a step's execution. See [here](#pre_execute_callback--post_execute_callback).
* `monitor_models`, `monitor_metrics`, `monitor_artifacts` - see [here](#models-artifacts-and-metrics).

View File

@ -12,7 +12,7 @@ This release is not backwards compatible
* `preprocess` and `postprocess` class functions get 3 arguments
* Add support for per-request state storage, passing information between the pre/post-processing functions
-**Features & Bug Fixes**
+**Features and Bug Fixes**
* Optimize serving latency while collecting statistics
* Fix metric statistics collecting auto-refresh issue

View File

@ -11,7 +11,7 @@ This release is not backwards compatible - see notes below on upgrading
**Breaking Changes**
* Triton engine size supports variable request size (-1)
-**Features & Bug Fixes**
+**Features and Bug Fixes**
* Add version number of serving session task
* Triton engine support for variable request (matrix) sizes

View File

@ -78,7 +78,7 @@ in target local path [ClearML GitHub issue #709](https://github.com/allegroai/cl
**Bug Fixes**
* Fix logging dependencies that use the subdirectory argument when pip installing from a git repo [ClearML GitHub issue #946](https://github.com/allegroai/clearml/issues/946)
* Fix `Task.import_offline_session()` does not import offline models [ClearML GitHub issue #653](https://github.com/allegroai/clearml/issues/653)
-* Fix `clearml-init` incorrectly sets Web and API server ports [Clearml Server GitHub issue #181](https://github.com/allegroai/clearml-server/issues/181) & [ClearML GitHub issue #910](https://github.com/allegroai/clearml/issues/910)
+* Fix `clearml-init` incorrectly sets Web and API server ports [Clearml Server GitHub issue #181](https://github.com/allegroai/clearml-server/issues/181) and [ClearML GitHub issue #910](https://github.com/allegroai/clearml/issues/910)
* Fix multiple models trained by the same framework are not all automatically logged [ClearML GitHub issue #767](https://github.com/allegroai/clearml/issues/767)
* Fix parallel coordinates plot displays categorical variables unclearly [ClearML GitHub issue #907](https://github.com/allegroai/clearml/issues/907)
* Fix runtime toggling task offline mode in the context of an open task

View File

@ -15,7 +15,7 @@ title: Version 1.12
**New Features and Improvements**
* Additional UI cloud storage access options:
-* Support for AWS S3 temporary access tokens [ClearML GitHub issue #200](https://github.com/allegroai/clearml-server/issues/200) & [ClearML Web GitHub issue #52](https://github.com/allegroai/clearml-web/issues/52)
+* Support for AWS S3 temporary access tokens [ClearML GitHub issue #200](https://github.com/allegroai/clearml-server/issues/200) and [ClearML Web GitHub issue #52](https://github.com/allegroai/clearml-web/issues/52)
* Support credentials for private GCS buckets
* Add multiple smoothing algorithms to UI scalar plots [ClearML GitHub issue #996](https://github.com/allegroai/clearml/issues/996)
* Running average

View File

@ -9,7 +9,7 @@ title: Version 1.3
* Huggingface Transformer example
**Bug fixes**
-* Fix NumPy compatibility [ClearML Serving GitHub issue #47](https://github.com/allegroai/clearml-serving/issues/47) & [#46](https://github.com/allegroai/clearml-serving/issues/46)
+* Fix NumPy compatibility [ClearML Serving GitHub issue #47](https://github.com/allegroai/clearml-serving/issues/47) and [#46](https://github.com/allegroai/clearml-serving/issues/46)
* Fix Triton examples [ClearML Serving GitHub issue #48](https://github.com/allegroai/clearml-serving/issues/48)
* Add storage environment variables [ClearML Serving GitHub PR #45](https://github.com/allegroai/clearml-serving/pull/45)

View File

@ -60,7 +60,7 @@ title: Version 1.6
### ClearML Server 1.6.0
**New Features and Improvements**
* New ClearML Datasets UI pages for tracking dataset versions and exploring version lineage and contents
-* Add history navigation to experiments plots UI page [ClearML GitHub issues #81](https://github.com/allegroai/clearml/issues/81) & [#255](https://github.com/allegroai/clearml/issues/255):
+* Add history navigation to experiments plots UI page [ClearML GitHub issues #81](https://github.com/allegroai/clearml/issues/81) and [#255](https://github.com/allegroai/clearml/issues/255):
* Plots page shows last reported plot for each metric/variation combination
* Single plot view provides history navigation slider
* Add single value scalar reporting: Single value scalars are aggregated into a summary table in the experiments scalars

View File

@ -28,7 +28,7 @@ title: Version 1.7
### ClearML Server 1.7.0
**New Features and Improvements**
* Add “Sync comparison” to UI experiment debug samples comparison: Control metric/iteration for all compared experiments [ClearML GitHub issue #691](https://github.com/allegroai/clearml/issues/691)
-* Support serving UI from a non-root path of the ClearML Server [ClearML Helm Charts issue #101](https://github.com/allegroai/clearml-helm-charts/issues/101) & [ClearML Server issue #135](https://github.com/allegroai/clearml-server/issues/135).
+* Support serving UI from a non-root path of the ClearML Server [ClearML Helm Charts issue #101](https://github.com/allegroai/clearml-helm-charts/issues/101) and [ClearML Server issue #135](https://github.com/allegroai/clearml-server/issues/135).
* Add UI option for hiding “secret” experiment container arguments [ClearML Server GitHub issue #146](https://github.com/allegroai/clearml-server/issues/146)
* Add UI tables switch to detail mode through double-click [ClearML Server GitHub issue #134](https://github.com/allegroai/clearml-server/issues/134)
* Add customizable user activity timeout for UI logout

View File

@ -13,7 +13,7 @@ title: Version 1.9
**New Features and Improvements**
* Support parsing queue name when providing execution queue in pipelines code [ClearML GitHub PR #857](https://github.com/allegroai/clearml/pull/857)
* Ignore `None` values for keys in the `click` argument parser [ClearML GitHub issue #902](https://github.com/allegroai/clearml/issues/902)
-* Improve docstrings for `Task.mark_completed()` and `Task.close()` - ClearML GitHub PRs [#920](https://github.com/allegroai/clearml/pull/920) & [#921](https://github.com/allegroai/clearml/pull/921)
+* Improve docstrings for `Task.mark_completed()` and `Task.close()` - ClearML GitHub PRs [#920](https://github.com/allegroai/clearml/pull/920) and [#921](https://github.com/allegroai/clearml/pull/921)
* Add pre/post execution callbacks to pipeline steps through `@PipelineDecorator.component`
* Add status-change callback to pipeline steps through `PipelineController.add_step()`, `PipelineController.add_function_step()`,
and `@PipelineDecorator.component`

View File

@ -190,7 +190,7 @@ selecting items beyond the items currently on-screen:
## Creating an Experiment Leaderboard
-Filter & sort the experiments of any project to create a leaderboard that can be shared and stored. This leaderboard
+Filter and sort the experiments of any project to create a leaderboard that can be shared and stored. This leaderboard
updates in real time with experiment performance and outputs.
Modify the experiments table in the following ways to create a customized leaderboard: