Small edits (#418)

This commit is contained in:
pollfly 2022-12-26 11:08:10 +02:00 committed by GitHub
parent 9e3917bde8
commit 0addbc3549
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
37 changed files with 70 additions and 68 deletions

View File

@ -45,7 +45,7 @@ To specify your code's branch and commit ID, pass the `--branch <branch_name> --
If unspecified, `clearml-task` will use the latest commit from the 'master' branch.
:::note Github Default Branch
- For Github repositories, it is recommended to explicitly specify your default branch (e.g. `--branch main`) to avoid
+ For GitHub repositories, it is recommended to explicitly specify your default branch (e.g. `--branch main`) to avoid
errors in identifying the correct default branch.
:::

View File

@ -462,7 +462,7 @@ Do not enqueue training or inference tasks into the services queue. They will pu
### Setting Server Credentials
- Self hosted [ClearML Server](deploying_clearml/clearml_server.md) comes by default with a services queue.
+ Self-hosted [ClearML Server](deploying_clearml/clearml_server.md) comes by default with a services queue.
By default, the server is open and does not require username and password, but it can be [password-protected](deploying_clearml/clearml_server_security.md#user-access-security).
In case it is password-protected, the services agent will need to be configured with server credentials (associated with a user).

View File

@ -114,5 +114,5 @@ and [usage example](https://github.com/allegroai/clearml/blob/master/examples/sc
The `clearml` GitHub repository includes an [examples folder](https://github.com/allegroai/clearml/tree/master/examples)
with example scripts demonstrating how to use the various functionalities of the ClearML SDK.
- These examples are pre-loaded in the [ClearML Hosted Service](https://app.clear.ml), and can be viewed, cloned,
+ These examples are preloaded in the [ClearML Hosted Service](https://app.clear.ml), and can be viewed, cloned,
and edited in the ClearML Web UI's `ClearML Examples` project. The examples are each explained in the [examples section](../guides/main.md).

View File

@ -126,7 +126,7 @@ auto_connect_frameworks={'tensorboard': {'report_hparams': False}}
Every `Task.init` call will create a new task for the current execution.
In order to mitigate the clutter that a multitude of debugging tasks might create, a task will be reused if:
* The last time it was executed (on this machine) was under 72 hours ago (configurable, see
- `sdk.development.task_reuse_time_window_in_hours` in the [`sdk.development` section](../configs/clearml_conf.md#sdkdevelopment) of
+ [`sdk.development.task_reuse_time_window_in_hours`](../configs/clearml_conf.md#task_reuse) of
the ClearML configuration reference)
* The previous task execution did not have any artifacts / models
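The reuse rule described above can be sketched as a small predicate. This is illustrative only (the real check lives inside the ClearML SDK); the 72-hour window corresponds to the configurable `task_reuse_time_window_in_hours` setting:

```python
from datetime import datetime, timedelta
from typing import Optional

def should_reuse_task(last_run: datetime, had_artifacts: bool,
                      reuse_window_hours: float = 72.0,
                      now: Optional[datetime] = None) -> bool:
    """Illustrative mirror of the development-mode reuse rule:
    reuse only if the previous run is recent enough and produced
    no artifacts or models."""
    now = now or datetime.now()
    within_window = now - last_run < timedelta(hours=reuse_window_hours)
    return within_window and not had_artifacts
```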
@ -463,7 +463,7 @@ class method and provide the new seed value, **before initializing the task**.
You can disable the deterministic behavior entirely by passing `Task.set_random_seed(None)`.
## Artifacts
- Artifacts are the output files created by a task. ClearML uploads and logs these products so they can later be easily
+ Artifacts are the output files created by a task. ClearML uploads and logs these products, so they can later be easily
accessed, modified, and used.
### Logging Artifacts
@ -713,7 +713,7 @@ config_file_yaml = task.connect_configuration(
![Task configuration objects](../img/fundamentals_task_config_object.png)
### User Properties
- A task's user properties do not impact task execution so you can add / modify the properties at any stage. Add user
+ A task's user properties do not impact task execution, so you can add / modify the properties at any stage. Add user
properties to a task with the [Task.set_user_properties](../references/sdk/task.md#set_user_properties) method.
```python

View File

@ -127,7 +127,7 @@ deployment process, as a single API automatically deploy (or remove) a model fro
```
1. Deploy the Inference container (if not already deployed)
1. Publish a new model to the model repository in one of the following ways:
- - Go to the "serving examples" project in the ClearML web UI, click on the Models Tab, search for "train sklearn model" right click and select "Publish"
+ - Go to the "serving examples" project in the ClearML web UI, click on the Models Tab, search for "train sklearn model" right-click and select "Publish"
- Use the RestAPI (see [details](https://clear.ml/docs/latest/docs/references/api/models#post-modelspublish_many))
- Use Python interface:

View File

@ -16,7 +16,7 @@ You can always find us at [clearml@allegro.ai](mailto:clearml@allegro.ai?subject
Read the [ClearML Blog](https://clear.ml/blog/).
- Subscribe to the **ClearML** [Youtube Channel](https://www.youtube.com/c/ClearML) and view the tutorials, presentations, and discussions.
+ Subscribe to the **ClearML** [YouTube Channel](https://www.youtube.com/c/ClearML) and view the tutorials, presentations, and discussions.
Join us on Twitter [@clearmlapp](https://twitter.com/clearmlapp) for **ClearML** announcements and community discussions.

View File

@ -973,6 +973,8 @@ and limitations on bucket naming.
---
<a id="task_reuse"/>
**`sdk.development.task_reuse_time_window_in_hours`** (*float*)
* For development mode, the number of hours after which an experiment with the same project name and experiment name is reused.
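In `clearml.conf`, this setting lives under the `sdk.development` section; a minimal fragment (the value shown is the documented default, used here as an example):

```
sdk {
    development {
        # Window (hours) within which a dev-mode task with the same
        # project and experiment name is reused
        task_reuse_time_window_in_hours: 72.0
    }
}
```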
@ -1061,7 +1063,7 @@ and limitations on bucket naming.
**`sdk.google.storage.credentials`** (*[dict]*)
- * A list of dictionaries, with specific credentials per bucket and sub-directory
+ * A list of dictionaries, with specific credentials per bucket and subdirectory
---

View File

@ -32,7 +32,7 @@ by setting [configuration options](../configs/clearml_conf.md).
CLEARML_CONFIG_FILE = MyOtherClearML.conf
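On Linux/macOS this is typically set as an environment variable for the current shell session ("MyOtherClearML.conf" is just the example filename from above):

```shell
# Point this shell session at an alternate ClearML configuration file.
export CLEARML_CONFIG_FILE="MyOtherClearML.conf"
echo "ClearML will read: $CLEARML_CONFIG_FILE"
```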
- For more information about running experiments inside Docker containers, see [ClearML Agent Execution](../clearml_agent.md#execution)
+ For more information about running experiments inside Docker containers, see [ClearML Agent Deployment](../clearml_agent.md#deployment)
and [ClearML Agent Reference](../clearml_agent/clearml_agent_ref.md).
</div>

View File

@ -32,7 +32,7 @@ Deploying the server requires a minimum of 4 GB of memory, 8 GB is recommended.
1. Increase the memory allocation in Docker Desktop to `4GB`.
- 1. In the Windows notification area (system tray), right click the Docker icon.
+ 1. In the Windows notification area (system tray), right-click the Docker icon.
1. Click **Settings** **>** **Advanced**, and then set the memory to at least `4096`.

View File

@ -321,7 +321,7 @@ task = Task.init(project_name, task_name, Task.TaskTypes.testing)
**Sometimes I see experiments as running when in fact they are not. What's going on?** <a id="experiment-running-but-stopped"></a>
- ClearML monitors your Python process. When the process exits properly, ClearML closes the experiment. When the process crashes and terminates abnormally, it sometimes misses the stop signal. In this case, you can safely right click the experiment in the Web-App and abort it.
+ ClearML monitors your Python process. When the process exits properly, ClearML closes the experiment. When the process crashes and terminates abnormally, it sometimes misses the stop signal. In this case, you can safely right-click the experiment in the Web-App and abort it.
<br/>

View File

@ -12,8 +12,8 @@ Solutions combined with the clearml-server control plane.
![clearml architecture](../img/clearml_architecture.png)
- ## Youtube Playlist
+ ## YouTube Playlist
- The first video in our Youtube Getting Started playlist covers these modules in more detail, feel free to check out the video below.
+ The first video in our YouTube Getting Started playlist covers these modules in more detail, feel free to check out the video below.
[![Watch the video](https://img.youtube.com/vi/s3k9ntmQmD4/hqdefault.jpg)](https://www.youtube.com/watch?v=s3k9ntmQmD4&list=PLMdIlCuMqSTnoC45ME5_JnsJX0zWqDdlO&index=1)

View File

@ -47,17 +47,17 @@ However, there's also docker mode. In this case the agent will run every incom
Now that our configuration is ready, we can start our agent in docker mode by running the command `clearml-agent daemon docker`
- After running the command, we can see it pop up in our workers table. Now the agent will start listening for tasks in the `default` queue and it's ready to go!
+ After running the command, we can see it pop up in our workers table. Now the agent will start listening for tasks in the `default` queue, and it's ready to go!
- Let's give our workers something to do. Say you have a task that you already ran on your local machine and you tracked it using the 2 magic lines that we saw before. Just like in the last video, we can right click it and clone it, so it's now in draft mode. We can easily change some of the hyperparameters on-the-fly and *enqueue* the task.
+ Let's give our workers something to do. Say you have a task that you already ran on your local machine, and you tracked it using the 2 magic lines that we saw before. Just like in the last video, we can right-click it and clone it, so it's now in draft mode. We can easily change some of the hyperparameters on-the-fly and *enqueue* the task.
The agent will immediately detect that we enqueued a task and start working on it. Like we saw before, it will spin up a docker container, install the required packages and dependencies and run the code.
The task itself is reported to the experiment manager just like any other task, and you can browse its outputs like normal, albeit with the changed parameters we edited earlier during draft mode.
- On the left we can see a button labeled “Workers and Queues”. Under the workers tab we can see that our worker is indeed busy with our task and we can see its resource utilization as well. If we click on the current experiment, we end up in our experiment view again. Now, imagine we see in the scalar output that our model isn't training the way we want it to, we can abort the task here and the agent will start working on the next task in the queue.
+ On the left we can see a button labeled “Workers and Queues”. Under the workers tab we can see that our worker is indeed busy with our task, and we can see its resource utilization as well. If we click on the current experiment, we end up in our experiment view again. Now, imagine we see in the scalar output that our model isn't training the way we want it to, we can abort the task here and the agent will start working on the next task in the queue.
- Back to our workers overview. Over in the Queues tab, we get some extra information about which experiments are currently in the queue and we can even change their order by dragging them in the correct position like so. Finally, we have graphs of the overall waiting time and overall amount of enqueued tasks over time.
+ Back to our workers overview. Over in the Queues tab, we get some extra information about which experiments are currently in the queue, and we can even change their order by dragging them in the correct position like so. Finally, we have graphs of the overall waiting time and overall amount of enqueued tasks over time.
Talking of which, let's say your wait times are very long because all data scientists have collectively decided that now is a perfect time to train their models and your on-premise servers are at capacity. We have built-in autoscalers for AWS and GCP (in the works) which will automatically spin up new `clearml-agent` VMs when the queue wait time becomes too long. If you go for the premium tiers of ClearML, you'll even get a really nice dashboard to go along with it.

View File

@ -30,7 +30,7 @@ After running `pip install clearml` we can add 2 simple lines of python code to
The pip package also includes `clearml-data`. It can help you keep track of your ever-changing datasets and provides an easy way to store, track and version control your data. It's also an easy way to share your dataset with colleagues over multiple machines while keeping track of who has which version. ClearML Data can even keep track of your data's ancestry, making sure you can always figure out where specific parts of your data came from.
- Both the 2 magic lines and the data tool will send all of their information to a ClearML server. This server then keeps an overview of your experiment runs and data sets over time, so you can always go back to a previous experiment, see how it was created and even recreate it exactly. Keep track of your best models by creating leaderboards based on your own metrics and you can even directly compare multiple experiment runs, helping you to figure out the best way forward for your models.
+ Both the 2 magic lines and the data tool will send all of their information to a ClearML server. This server then keeps an overview of your experiment runs and data sets over time, so you can always go back to a previous experiment, see how it was created and even recreate it exactly. Keep track of your best models by creating leaderboards based on your own metrics, and you can even directly compare multiple experiment runs, helping you to figure out the best way forward for your models.
To get started with a server right away, you can make use of the free tier. And when your needs grow, we've got you covered too! Just check out our website to find a tier that fits your organisation best. But, because we're open source, you can also host your own completely for free. We have AWS images, Google Cloud images, you can run it on docker-compose locally or even, if you really hate yourself, run it on a self-hosted kubernetes cluster using our helm charts.
@ -40,7 +40,7 @@ The `clearml-agent` is a daemon that you can run on 1 or multiple machines and t
Now that we have this remote execution capability, the possibilities are near endless.
- For example, It's easy to set up an agent on a either a CPU or a GPU machine, so you can easily run all of your experiments on any compute resource you have available. And if you spin up your agents in the cloud, they'll even support auto scaling out of the box.
+ For example, it's easy to set up an agent on either a CPU or a GPU machine, so you can easily run all of your experiments on any compute resource you have available. And if you spin up your agents in the cloud, they'll even support auto scaling out of the box.
You can set up multiple machines as agents to support large teams with their complex projects and easily configure a queuing system to get the most out of your available hardware.
@ -48,7 +48,7 @@ Talking about using multiple machines, say you have an experiment and want to op
You can even use a Google Colab instance as a ClearML Agent to get free GPU power, just sayin!
- As a final example of how you could use the agent's functionality, ClearML provides a `PipelineController`, which allows you to chain together tasks by plugging the output of one task as the input of another. Each of the tasks are of course run on your army of agents for full automation.
+ As a final example of how you could use the agent's functionality, ClearML provides a `PipelineController`, which allows you to chain together tasks by plugging the output of one task as the input of another. Each of the tasks is of course run on your army of agents for full automation.
As you can see ClearML is a large toolbox, stuffed with the most useful components for both data scientists and MLOps engineers. We're diving deeper into each component in the following videos if you need more details, but feel free to get started now at clear.ml.

View File

@ -60,7 +60,7 @@ Next to automatic logging, it is super easy to manually add anything you want to
Just take a look at our documentation for more info.
- If you want to show colleagues or friends how well your models are performing, you can easily share a task by right clicking it and choosing share to make it accessible with a link. Anyone visiting that link will get the detail view in fullscreen mode and the task itself will get a tag showing that it's now shared.
+ If you want to show colleagues or friends how well your models are performing, you can easily share a task by right-clicking it and choosing share to make it accessible with a link. Anyone visiting that link will get the detail view in fullscreen mode and the task itself will get a tag showing that it's now shared.
In many cases, we also want to compare multiple versions of our experiments directly, this is easily done by selecting the tasks youre interested in and clicking on compare in the bottom ribbon.

View File

@ -48,7 +48,7 @@ I've collapsed a lot of the functions here so that it's a lot easier to take a l
I'm going through these files is the `Task.init` command and essentially this is what ClearML uses to keep track of every
time you run this specific script. So you'll see it in `get_data.py`, you'll see it in `preprocessing.py`, and you'll
see it in `training.py` as well. And so this line is all you need to get started. It will already start capturing
- everything that you'll need and that the program produces like plots or hyper parameters, you name it.
+ everything that you'll need and that the program produces like plots or hyperparameters, you name it.
So let's take a look in depth first at what `get_data.py` does for me. So getting data is very simple, but what I used
to do is I would get the data from like a remote location, you download a zip file or whatever, and then you extract it
@ -69,8 +69,8 @@ don't change the name, you overwrite it. So that's all a thing of the past. No
it to you later in the UI, we have a nice and clear overview of all of the different versions.
I'll add some dataset statistics that's also something you can do and ClearML is just add some, for example, class
- distribution or other kind of plots that could be interesting and then I'm actually building the ClearML dataset here.
- Also, an an extra thing that is really really useful if you use ClearML datasets is you can actually share it as well.
+ distribution or other kind of plots that could be interesting, and then I'm actually building the ClearML dataset here.
+ Also, an extra thing that is really, really useful if you use ClearML datasets is you can actually share it as well.
So not only with colleagues and friends, for example. You can share the data with them, and they can add to the data, and
you will always have the latest version, and you will always know what happened before that.
@ -199,7 +199,7 @@ learning something, it's doing something so that actually is very interesting.
And then you have debug samples as well, which you can use to show actually whatever kind of media you need. So these
are for example, the images that I generated that are the mel spectrograms so that the preprocessing outputs uh, and you
can just show them here with the name of what the label was and what the prediction was. So I can just have a very quick
- overview of how this is working and then I can actually even do it with audio samples as well. So I can for example here
+ overview of how this is working, and then I can actually even do it with audio samples as well. So I can for example here
say this is labeled "dog", and it is predicted as "children playing". So then I can listen to it and get an idea on, is
this correct? Is it not correct? In this case, obviously it's not correct, but then I can go further into the iterations
and then hopefully it will get better and better over time. But this is a quick way that I can just validate that what
@ -253,7 +253,7 @@ also use these differences to then go back to the original code.
Of course, hyperparameters. There weren't any differences. We didn't actually change any of the hyperparameters here,
but if we did, that would also be highlighted in red in this section. So if we're going to look at the scalars, this is
- where it gets really interesting because now the plots are overlaid on top of each other and you can change the color
+ where it gets really interesting because now the plots are overlaid on top of each other, and you can change the color
if you don't like the color. I think green is a bit ugly. So let's take red for example. We can just
change that here. And then we have a quick overview of two different compared experiments and then how their scalars did
over time. And because they have the same X-axis the iterations, we can actually compare them immediately to each other,
@ -311,7 +311,7 @@ us the full range of experiments that we trained this way on the full dataset, a
it got the most or the highest F1 score on the subset, we don't actually have the highest score on the full dataset yet.
However, even though it is not the best model, it might be interesting to get a colleague or a friend to take a look at
it and see what we could do better or just show off the new model that you made. So the last thing I want to show you is
- that you can now easily click it, right click, and then go to share, and you can share it publicly. If you create a
+ that you can now easily click it, right-click, and then go to share, and you can share it publicly. If you create a
link, you can send this link to your friend, colleague, whatever, and they will be able to see the complete details of
the whole experiment, of everything you did. They can see the graphs, they can see the hyperparameters, and they can help
you find the best ways forward for your own models.

View File

@ -22,9 +22,9 @@ keywords: [mlops, components, hyperparameter optimization, hyperparameter]
<div className="cml-expansion-panel-content">
Hello and welcome to ClearML. In this video we'll take a look at one cool way of using the agent other than rerunning a task remotely: hyperparameter optimization.
- By now, we know that ClearML can easily capture our hyperparameters and scalars as part of the experiment tracking. We also know we can clone any task and change its hyperparameters so they'll be injected into the original code at runtime. In the last video, we learnt how to make a remote machine execute this task automatically by using the agent.
+ By now, we know that ClearML can easily capture our hyperparameters and scalars as part of the experiment tracking. We also know we can clone any task and change its hyperparameters, so they'll be injected into the original code at runtime. In the last video, we learnt how to make a remote machine execute this task automatically by using the agent.
- Soooo… Can we just clone a task like a 100 times, inject different hyperparameters in every clone, run the clones on 10 agents and then sort the results based on a specific scalar?
+ Soooo… Can we just clone a task like 100 times, inject different hyperparameters in every clone, run the clones on 10 agents and then sort the results based on a specific scalar?
Yeah, yeah we can, it's called hyperparameter optimization. And we can do all of this automatically too! No way you were going to clone and edit those 100 tasks yourself, right?

View File

@ -47,7 +47,7 @@ The structure of your pipeline will be derived from looking at this `parents` ar
Now we do the same for the final step. However, remember the empty hyperparameters we saw before? We still have to overwrite these. We can use the `parameter_override` argument to do just that.
- For example we can tell the first step to use the global pipeline parameter raw data url like so. But we can also reference output artifacts from a previous step by using its name and we can of course also just overwrite a parameter with a normal value. Finally, we can even pass along the unique task ID of a previous step, so you can get the task object based on that ID and access anything and everything within that task.
+ For example, we can tell the first step to use the global pipeline parameter raw data url like so. But we can also reference output artifacts from a previous step by using its name and we can of course also just overwrite a parameter with a normal value. Finally, we can even pass along the unique task ID of a previous step, so you can get the task object based on that ID and access anything and everything within that task.
And thats it! We now have our first pipeline!

View File

@ -57,7 +57,7 @@ After filling in all these settings, let's launch the autoscaler now, so we ca
We immediately start in the autoscaler dashboard, and we can see the amount of machines that are running, the amount that are doing nothing, how many machines we have available per queue and all the autoscaler logs. Right now we have no machines running at all because our queues are empty.
- So if we go to one of our projects, clone these tasks here, and then enqueue them in the CPU queue and clone this task here as well. We can edit the parameters like we saw before and even change which container it should be run in. We then enqueue it in the GPU queue and we should now see the autoscaler kicking into action.
+ So if we go to one of our projects, clone these tasks here, and then enqueue them in the CPU queue and clone this task here as well. We can edit the parameters like we saw before and even change which container it should be run in. We then enqueue it in the GPU queue, and we should now see the autoscaler kicking into action.
The autoscaler has detected the tasks in the queue and has started booting up remote machines to process them. We can follow along with the process in our autoscaler dashboard.

View File

@ -31,7 +31,7 @@ By double-clicking a thumbnail, you can view a spectrogram plot in the image v
## Hyperparameters
ClearML automatically logs TensorFlow Definitions. A parameter dictionary is logged by connecting it to the Task using
- a call to the [Task.connect](../../../../../references/sdk/task.md#connect) method.
+ a call to the [`Task.connect`](../../../../../references/sdk/task.md#connect) method.
configuration_dict = {'number_of_epochs': 10, 'batch_size': 4, 'dropout': 0.25, 'base_lr': 0.001}
configuration_dict = task.connect(configuration_dict) # enabling configuration override by clearml

View File

@ -17,7 +17,7 @@ ClearML automatically logs the audio samples which the example reports by callin
### Audio Samples
- You can play the audio samples by double clicking the audio thumbnail.
+ You can play the audio samples by double-clicking the audio thumbnail.
![image](../../../../../img/examples_audio_preprocessing_example_03.png)

View File

@ -98,7 +98,7 @@ Since the arguments dictionary is connected to the Task, after the code runs onc
to optimize a different experiment.
```python
- # experiment template to optimize in the hyper-parameter optimization
+ # experiment template to optimize in the hyperparameter optimization
args = {
'template_task_id': None,
'run_as_service': False,

View File

@ -18,7 +18,7 @@ Configure ClearML for uploading artifacts to any of the supported types of stora
S3 buckets, Google Cloud Storage, and Azure Storage ([debug sample storage](../../references/sdk/logger.md#set_default_upload_destination)
is different). Configure ClearML in any of the following ways:
- * In the configuration file, set [default_output_uri](../../configs/clearml_conf.md#sdkdevelopment).
+ * In the configuration file, set [default_output_uri](../../configs/clearml_conf.md#config_default_output_uri).
* In code, when [initializing a Task](../../references/sdk/task.md#taskinit), use the `output_uri` parameter.
* In the **ClearML Web UI**, when [modifying an experiment](../../webapp/webapp_exp_tuning.md#output-destination).
@ -96,7 +96,7 @@ task.upload_artifact(
### Dictionaries
```python
- # add and upload dictionary stored as JSON)
+ # add and upload dictionary stored as JSON
task.upload_artifact('dictionary', df.to_dict())
```
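Since plain dictionaries are stored as JSON, every value needs to survive a JSON round-trip. A quick local sanity check (pure Python, no ClearML calls; the payload is a made-up example):

```python
import json

# Dictionary artifacts are stored as JSON, so all values must be
# JSON-serializable. This check uses no ClearML APIs.
payload = {"accuracy": 0.93, "labels": ["cat", "dog"], "epochs": 10}
assert json.loads(json.dumps(payload)) == payload
```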

View File

@ -24,7 +24,7 @@ Clone the experiment to create an editable copy for tuning.
1. In the **ClearML Web-App (UI)**, on the Projects page, click the `examples` project card.
- 1. In the experiments table, right click the experiment `pytorch mnist train`.
+ 1. In the experiments table, right-click the experiment `pytorch mnist train`.
1. In the context menu, click **Clone** **>** **CLONE**. The newly cloned experiment appears and its info panel slides open.
@ -82,7 +82,7 @@ Run the worker daemon on the local development machine.
Enqueue the tuned experiment.
- 1. In the **ClearML Web-App (UI)**, experiments table, right click the experiment `Clone Of pytorch mnist train`.
+ 1. In the **ClearML Web-App (UI)**, experiments table, right-click the experiment `Clone Of pytorch mnist train`.
1. In the context menu, click **Enqueue**.

View File

@ -28,7 +28,7 @@ In tree view, parent versions that do not match the query where a child version
### Version Actions
- Access dataset version actions, by right clicking a version, or through the menu button <img src="/docs/latest/icons/ico-dots-v-menu.svg" alt="Dot menu" className="icon size-md space-sm" /> (available on hover).
+ Access dataset version actions by right-clicking a version, or through the menu button <img src="/docs/latest/icons/ico-dots-v-menu.svg" alt="Dot menu" className="icon size-md space-sm" /> (available on hover).
* **Rename** - Change the version's name
* **Create New Version** - Creates a child version of a *Published* dataset version. The new version is created in a *draft*

View File

@ -12,7 +12,7 @@ View the Dataviews table in table view <img src="/docs/latest/icons/ico-table-vi
or in details view <img src="/docs/latest/icons/ico-split-view.svg" alt="Details view" className="icon size-md space-sm" />,
using the buttons on the top left of the page. Use the table view for a comparative view of your Dataviews according to
columns of interest. Use the details view to access a selected Dataview's details, while keeping the Dataview list in view.
- Details view can also be accessed by double clicking a specific Dataview in the table view to open its details view.
+ Details view can also be accessed by double-clicking a specific Dataview in the table view.
![Dataviews table](../../img/hyperdatasets/webapp_dataviews_table.png)
@ -38,7 +38,7 @@ Save customized settings in a browser bookmark, and share the URL with teammates
Customize the table using any of the following:
* Dynamic column order - Drag a column title to a different position.
- * Resize columns - Drag the column separator to change the width of that column. Double click the column separator for automatic fit.
+ * Resize columns - Drag the column separator to change the width of that column. Double-click the column separator for automatic fit.
* Filter by user and / or status - When a filter is applied to a column, its filter icon will appear with a highlighted
dot on its top right (<img src="/docs/latest/icons/ico-filter-on.svg" alt="Filter on" className="icon size-md" /> ). To
clear all active filters, click <img src="/docs/latest/icons/ico-filter-reset.svg" alt="Clear filters" className="icon size-md" />
@ -46,7 +46,7 @@ Customize the table using any of the following:
* Sort columns - By experiment name and / or elapsed time since creation.
:::note
- The following Dataviews-table customizations are saved on a **per project** basis:
+ The following Dataviews-table customizations are saved on a **per-project** basis:
* Column order
* Column width
* Active sort order
@ -62,7 +62,7 @@ all the Dataviews in the project. The customizations of these two views are save
The following table describes the actions that can be performed from the Dataviews table.
Access these actions with the context menu in any of the following ways:
- * In the Dataviews table, right click a Dataview, or hover over a Dataview and click <img src="/docs/latest/icons/ico-dots-v-menu.svg" alt="Dot menu" className="icon size-md space-sm" />
+ * In the Dataviews table, right-click a Dataview, or hover over a Dataview and click <img src="/docs/latest/icons/ico-dots-v-menu.svg" alt="Dot menu" className="icon size-md space-sm" />
* In a Dataview info panel, click the menu button <img src="/docs/latest/icons/ico-bars-menu.svg" alt="Bar menu" className="icon size-md space-sm" />
| ClearML Action | Description |

View File

@ -56,7 +56,7 @@ Do not reuse an experiment with artifacts.
Minor bug fixes and improvements
* Add resource monitoring.
* Fix Web UI compare plots ([Github Issue #55](https://github.com/allegroai/clearml/issues/55)).
* Fix Web UI compare plots ([GitHub Issue #55](https://github.com/allegroai/clearml/issues/55)).
* Improve server upgrade checks/messages.
### Trains Agent

View File

@ -31,7 +31,7 @@ This release is not backwards compatible - see notes below on upgrading
**Features**
- Add `Task.force_store_standalone_script()` to force storing standalone script instead of a Git repository reference [ClearML Github issue #340](https://github.com/allegroai/clearml/issues/340)
- Add `Task.force_store_standalone_script()` to force storing standalone script instead of a Git repository reference [ClearML GitHub issue #340](https://github.com/allegroai/clearml/issues/340)
- Add `Logger.set_default_debug_sample_history()` and `Logger.get_default_debug_sample_history()` to allow controlling
maximum debug samples programmatically
- Add populate now stores function arg types as part of the hyperparameters
@ -40,8 +40,8 @@ This release is not backwards compatible - see notes below on upgrading
**Bug Fixes**
- Fix and upgrade the SlackMonitor [ClearML Github issue #533](https://github.com/allegroai/clearml/issues/533)
- Fix network issues causing Task to stop on status change when no status change has occurred [ClearML Github issue #535](https://github.com/allegroai/clearml/issues/535)
- Fix and upgrade the SlackMonitor [ClearML GitHub issue #533](https://github.com/allegroai/clearml/issues/533)
- Fix network issues causing Task to stop on status change when no status change has occurred [ClearML GitHub issue #535](https://github.com/allegroai/clearml/issues/535)
- Fix Pipeline controller function support for dict as input argument
- Fix uploading the same metric/variant from multiple processes in threading mode so that a unique file is created per process (since the global counter is not passed between the subprocesses)
- Fix resource monitoring so that it only runs in the main process when using threaded logging mode

View File

@ -48,7 +48,7 @@ for user/password when cloning/fetching repositories)
### ClearML SDK 1.5.0
**New Features and Improvements**
* Add support for single value metric reporting ClearML GitHub issue [ClearML Github issue #400](https://github.com/allegroai/clearml/issues/400)
* Add support for single value metric reporting [ClearML GitHub issue #400](https://github.com/allegroai/clearml/issues/400)
* Add support for specifying parameter sections in `PipelineDecorator` [ClearML GitHub issue #629](https://github.com/allegroai/clearml/issues/629)
* Add support for parallel uploads and downloads (upload / download and zip / unzip of artifacts)
* Add support for specifying execution details (repository, branch, commit, packages, image) in `PipelineDecorator`

View File

@ -47,7 +47,7 @@ The prefilled configuration wizard can be edited before launching the new app in
:::
## App Instance Actions
Access app instance actions, by right clicking an instance, or through the menu button <img src="/docs/latest/icons/ico-dots-v-menu.svg" alt="Dot menu" className="icon size-md space-sm" /> (available on hover).
Access app instance actions by right-clicking an instance, or through the menu button <img src="/docs/latest/icons/ico-dots-v-menu.svg" alt="Dot menu" className="icon size-md space-sm" /> (available on hover).
![App context menu](../../img/app_context_menu.png)

View File

@ -10,7 +10,7 @@ View the runs table in table view <img src="/docs/latest/icons/ico-table-view.sv
or in details view <img src="/docs/latest/icons/ico-split-view.svg" alt="Details view" className="icon size-md space-sm" />,
using the buttons on the top left of the page. Use the table view for a comparative view of your runs according to
columns of interest. Use the details view to access a selected run's details, while keeping the pipeline runs list in view.
Details view can also be accessed by double clicking a specific pipeline run in the table view to open its details view.
Details view can also be accessed by double-clicking a specific pipeline run in the table view.
![Pipeline runs table](../../img/webapp_pipeline_runs_table.png)
@ -39,7 +39,7 @@ Customize the table using any of the following:
to view and select columns to show. Click **Metric** and **Hyper Parameter** to add the respective custom columns
* [Filter columns](#filtering-columns)
* Sort columns
* Resize columns - Drag the column separator to change the width of that column. Double click the column separator for
* Resize columns - Drag the column separator to change the width of that column. Double-click the column separator for
automatic fit.
Changes are persistent (cached in the browser) and represented in the URL, so customized settings can be saved in a
@ -70,7 +70,7 @@ To clear all active filters, click <img src="/docs/latest/icons/ico-filter-reset
in the top right corner of the table.
:::note
The following table customizations are saved on a per pipeline basis:
The following table customizations are saved on a per-pipeline basis:
* Columns order
* Column width
* Active sort order
@ -95,7 +95,7 @@ The following table describes the actions that can be done from the run table, i
that allow each operation.
Access these actions with the context menu in any of the following ways:
* In the pipeline runs table, right click a run, or hover over a pipeline and click <img src="/docs/latest/icons/ico-dots-v-menu.svg" alt="Dot menu" className="icon size-md space-sm" />
* In the pipeline runs table, right-click a run, or hover over a pipeline and click <img src="/docs/latest/icons/ico-dots-v-menu.svg" alt="Dot menu" className="icon size-md space-sm" />
* In a pipeline info panel, click the menu button <img src="/docs/latest/icons/ico-bars-menu.svg" alt="Bar menu" className="icon size-md space-sm" />
| Action | Description | States Valid for the Action | State Transition |

View File

@ -16,7 +16,7 @@ When archiving an experiment:
* Archive an experiment or model from either the:
* Experiments or models table - Right click the experiment or model **>** **Archive**.
* Experiments or models table - Right-click the experiment or model **>** **Archive**.
* Info panel or full screen details view - Click <img src="/docs/latest/icons/ico-bars-menu.svg" alt="Bars menu" className="icon size-sm space-sm" /> (menu) **>** **Archive**.
* Archive multiple experiments or models from the:

View File

@ -18,7 +18,7 @@ Experiments can also be modified and then executed remotely, see [Tuning Experim
* On the Dashboard, click a recent experiment, project card, or **VIEW ALL** and then click a project card.
* On the Projects page, click a project card, or the **All projects** card.
1. Reproduce the experiment. In the experiments table, right click and then either:
1. Reproduce the experiment. In the experiments table, right-click and then either:
* Clone (make an exact copy)

View File

@ -22,7 +22,7 @@ Share experiments from the experiments table, the info panel menu, and/or the fu
1. Click **Share** in one of these ways:
* The experiment table - Right click the experiment **>** **Share**
* The experiment table - Right-click the experiment **>** **Share**
* The info panel or full screen details view - Click the experiment **>** <img src="/docs/latest/icons/ico-bars-menu.svg" alt="Menu" className="icon size-md space-sm" />
(menu) **>** **Share**.

View File

@ -10,7 +10,7 @@ View the experiments table in table view <img src="/docs/latest/icons/ico-table-
or in details view <img src="/docs/latest/icons/ico-split-view.svg" alt="Details view" className="icon size-md space-sm" />,
using the buttons on the top left of the page. Use the table view for a comparative view of your experiments according
to columns of interest. Use the details view to access a selected experiment's details, while keeping the experiment list
in view. Details view can also be accessed by double clicking a specific experiment in the table view to open its details view.
in view. Details view can also be accessed by double-clicking a specific experiment in the table view.
:::info
To assist in focusing on active experimentation, experiments and models can be archived, so they will not appear
@ -45,7 +45,7 @@ The experiments table default and customizable columns are described in the foll
Customize the table using any of the following:
* Dynamic column order - Drag a column title to a different position.
* Resize columns - Drag the column separator to change the width of that column. Double click the column separator for
* Resize columns - Drag the column separator to change the width of that column. Double-click the column separator for
automatic fit.
* Changing table columns
* Show / hide columns - Click <img src="/docs/latest/icons/ico-settings.svg" alt="Setting Gear" className="icon size-md" />
@ -68,7 +68,7 @@ Changes are persistent (cached in the browser), and represented in the URL so cu
bookmark and shared with other ClearML users to collaborate.
:::note
The following experiments-table customizations are saved on a **per project** basis:
The following experiments-table customizations are saved on a **per-project** basis:
* Columns order
* Column width
* Active sort order
@ -132,7 +132,7 @@ The following table describes the actions that can be done from the experiments
that allow each operation.
Access these actions with the context menu in any of the following ways:
* In the experiments table,right click an experiment or hover over an experiment and click <img src="/docs/latest/icons/ico-dots-v-menu.svg" alt="Dot menu" className="icon size-md space-sm" />
* In the experiments table, right-click an experiment or hover over an experiment and click <img src="/docs/latest/icons/ico-dots-v-menu.svg" alt="Dot menu" className="icon size-md space-sm" />
* In an experiment info panel, click the menu button <img src="/docs/latest/icons/ico-bars-menu.svg" alt="Bar menu" className="icon size-md space-sm" />
| Action | Description | States Valid for the Action | State Transition |

View File

@ -9,7 +9,7 @@ View the models table in table view <img src="/docs/latest/icons/ico-table-view.
or in details view <img src="/docs/latest/icons/ico-split-view.svg" alt="Details view" className="icon size-md space-sm" />,
using the buttons on the top left of the page. Use the table view for a comparative view of your models according to
columns of interest. Use the details view to access a selected model's details, while keeping the model list in view.
Details view can also be accessed by double clicking a specific model in the table view to open its details view.
Details view can also be accessed by double-clicking a specific model in the table view.
![Models table](../img/webapp_models_01.png)
@ -39,7 +39,7 @@ can be saved in a browser bookmark and shared with other ClearML users to collab
Customize the table using any of the following:
* Dynamic column order - Drag a column title to a different position.
* Resize columns - Drag the column separator to change the width of that column. Double click the column separator for
* Resize columns - Drag the column separator to change the width of that column. Double-click the column separator for
automatic fit.
* Changing table columns
* Show / hide columns - Click <img src="/docs/latest/icons/ico-settings.svg" alt="Setting Gear" className="icon size-md" />
@ -51,7 +51,7 @@ Customize the table using any of the following:
* Sort columns - By metadata, ML framework, description, and last update elapsed time.
:::note
The following models-table customizations are saved on a **per project** basis:
The following models-table customizations are saved on a **per-project** basis:
* Columns order
* Column width
* Active sort order
@ -68,12 +68,12 @@ The following table describes the actions that can be done from the models table
allow each feature. Model states are *Draft* (editable) and *Published* (read-only).
Access these actions with the context menu in any of the following ways:
* In the models table, right click a model, or hover over a model and click <img src="/docs/latest/icons/ico-dots-v-menu.svg" alt="Dot menu" className="icon size-md space-sm" />
* In the models table, right-click a model, or hover over a model and click <img src="/docs/latest/icons/ico-dots-v-menu.svg" alt="Dot menu" className="icon size-md space-sm" />
* In a model's info panel, click the menu button <img src="/docs/latest/icons/ico-bars-menu.svg" alt="Bar menu" className="icon size-md space-sm" />
| ClearML Action | Description | States Valid for the Action |
|---|---|--|
| Details | View model details, which include general information, the model configuration, and label enumeration. Can also be accessed by double clicking a model in the models table | Any state |
| Details | View model details, which include general information, the model configuration, and label enumeration. Can also be accessed by double-clicking a model in the models table | Any state |
| Publish | Publish a model to prevent changes to it. *Published* models are read-only. If a model is Published, its experiment also becomes Published (read-only). | *Draft* |
| Archive | To more easily work with active models, move a model to the archive. See [Archiving](webapp_archiving.md). | Any state |
| Restore | Action available in the archive. Restore a model to the active model table. | Any state |

View File

@ -2,7 +2,7 @@
title: Model Details
---
In the models table, double click on a model to view and / or modify the following:
In the models table, double-click on a model to view and / or modify the following:
* General model information
* Model configuration
* Model label enumeration

View File

@ -144,7 +144,7 @@ file entries will be overridden by the vault values.
Fill in values using any of ClearML's supported configuration formats: HOCON / JSON / YAML.
**To edit vault contents:**
1. Click **EDIT** or double click the vault box
1. Click **EDIT** or double-click the vault box
1. Insert / edit the configurations in the vault
1. Press **OK**