Small edits (#775)

pollfly 2024-02-13 09:57:46 +02:00 committed by GitHub
parent 619069771e
commit 65c3a777c3
8 changed files with 27 additions and 29 deletions

View File

@ -217,12 +217,12 @@ task = Task.create(
)
```
See full `Task.create` reference [here](../references/sdk/task.md#taskcreate).
For more information, see [`Task.create()`](../references/sdk/task.md#taskcreate).
## Tracking Task Progress
Track a task's progress by setting the task progress property using the [`Task.set_progress`](../references/sdk/task.md#set_progress) method.
Set a task's progress to a numeric value between 0 - 100. Access the task's current progress, using the
[`Task.get_progress`](../references/sdk/task.md#get_progress) method.
Track a task's progress by setting the task progress property using [`Task.set_progress()`](../references/sdk/task.md#set_progress).
Set a task's progress to a numeric value between 0 and 100. Access the task's current progress using
[`Task.get_progress()`](../references/sdk/task.md#get_progress).
```python
task = Task.init(project_name="examples", task_name="Track experiment progress")
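# A sketch of reporting and reading back progress (the values below are illustrative):
task.set_progress(0)
# ... train for a while ...
task.set_progress(50)
print(task.get_progress())  # prints the task's current progress value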
@ -273,7 +273,7 @@ The task's outputs, such as artifacts and models, can also be retrieved.
## Querying / Searching Tasks
Search and filter tasks programmatically. Input search parameters into the [`Task.get_tasks`](../references/sdk/task.md#taskget_tasks)
method, which returns a list of task objects that match the search. Pass `allow_archived=False` to filter out archived
class method, which returns a list of task objects that match the search. Pass `allow_archived=False` to filter out archived
tasks.
@ -290,7 +290,7 @@ task_list = Task.get_tasks(
)
```
It's possible to also filter tasks by passing filtering rules to `task_filter`.
You can also filter tasks by passing filtering rules to `task_filter`.
For example:
```python
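# A hedged sketch of task_filter rules; the filter keys below are assumptions based on the
# server's task query fields -- adjust them to the fields you actually need:
task_list = Task.get_tasks(
    project_name="examples",
    task_filter={
        "status": ["completed"],       # only tasks that finished successfully
        "order_by": ["-last_update"],  # most recently updated first
    },
)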
@ -560,7 +560,7 @@ session folder, which can later be uploaded to the [ClearML Server](../deploying
You can enable offline mode in one of the following ways:
* Before initializing a task, use the [`Task.set_offline`](../references/sdk/task.md#taskset_offline) class method and set
the `offline_mode` argument to `True`
the `offline_mode` argument to `True`:
```python
from clearml import Task
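# Enable offline mode before creating the task, as described in the bullet above
# (the project and task names here are placeholders)
Task.set_offline(offline_mode=True)
task = Task.init(project_name="examples", task_name="offline experiment")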
@ -744,7 +744,7 @@ It's possible to modify the following parameters:
* Iteration number
* Model tags
Models can also be manually updated independently, without any task. See [OutputModel.update_weights](../references/sdk/model_outputmodel.md#update_weights).
Models can also be manually updated independently, without any task. See [`OutputModel.update_weights`](../references/sdk/model_outputmodel.md#update_weights).
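For instance, a minimal sketch of registering a new weights file via `update_weights()` might look like this (the project, model name, and weights path are placeholders):

```python
from clearml import OutputModel, Task

# Placeholders throughout -- adjust to your own project, model name, and weights file
task = Task.init(project_name="examples", task_name="manual model update")
output_model = OutputModel(task=task, name="my_model")

# Register a locally saved weights file with the output model
output_model.update_weights(weights_filename="models/checkpoint.pt")
```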
### Using Models
@ -793,13 +793,13 @@ using [clearml-agent](../clearml_agent.md) to execute code.
#### Setting Parameters
To define parameters manually use the [`Task.set_parameters`](../references/sdk/task.md#set_parameters) method to specify
To define parameters manually, use [`Task.set_parameters()`](../references/sdk/task.md#set_parameters) to specify
name-value pairs in a parameter dictionary.
Parameters can be organized into sections: specify a parameter's section by prefixing its name, delimited with a slash
(i.e. `section_name/parameter_name:value`). `General` is the default section.
Call the [`set_parameter`](../references/sdk/task.md#set_parameter) method to set a single parameter.
Call [`Task.set_parameter()`](../references/sdk/task.md#set_parameter) to set a single parameter.
```python
task = Task.init(project_name='examples', task_name='parameters')
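# Set several parameters at once; the "Args/" prefix designates a section as described above
task.set_parameters({"Args/epochs": 10, "Args/lr": 0.01})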
@ -812,12 +812,12 @@ task.set_parameter(name='decay',value=0.001)
```
:::caution Overwriting Parameters
The `set_parameters` method replaces any existing hyperparameters in the task.
`Task.set_parameters()` replaces any existing hyperparameters in the task.
:::
#### Adding Parameters
To update the parameters in a task, use the [`Task.set_parameters_as_dict`](../references/sdk/task.md#set_parameters_as_dict)
method. Arguments and values are input as a dictionary. Like in `set_parameters` above, the parameter's section can
To update the parameters in a task, use [`Task.set_parameters_as_dict()`](../references/sdk/task.md#set_parameters_as_dict).
Parameter names and values are passed as a dictionary. As with `set_parameters` above, the parameter's section can
be specified.
```python
@ -829,7 +829,7 @@ task.set_parameters_as_dict({'my_args/lr':0.3, 'epochs':10})
### Accessing Parameters
To access all task parameters, use the [`Task.get_parameters`](../references/sdk/task.md#get_parameters) method. This
To access all task parameters, use [`Task.get_parameters()`](../references/sdk/task.md#get_parameters). This
method returns a flattened dictionary of the `'section/parameter': 'value'` pairs.
```python
@ -853,7 +853,7 @@ The parameters and their section names are case-sensitive
### Tracking Python Objects
ClearML can track Python objects (such as dictionaries and custom classes) as they evolve in your code, and log them to
your task's configuration using the [`Task.connect`](../references/sdk/task.md#connect) method. Once objects are connected
your task's configuration using [`Task.connect()`](../references/sdk/task.md#connect). Once objects are connected
to a task, ClearML automatically logs all object elements (e.g. class members, dictionary key-value pairs).
```python
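# A short sketch of connecting a configuration dict (names and values are illustrative);
# once connected, ClearML logs the dict's key/value pairs and tracks later changes to them
task = Task.init(project_name="examples", task_name="track python objects")
config = {"batch_size": 64, "dropout": 0.25}
task.connect(config)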

View File

@ -44,7 +44,7 @@ You can also run the agent in conda mode or poetry mode, which essentially do th
However, there's also docker mode. In this case the agent will run every incoming task in its own docker container instead of just a virtual environment. This makes things much easier if your tasks have system package dependencies for example, or when not every task uses the same python version. For our example, we'll be using docker mode.
Now that our configuration is ready, we can start our agent in docker mode by running the command `clearml-agent daemon docker`
Now that our configuration is ready, we can start our agent in docker mode by running the command `clearml-agent daemon docker`.
After running the command, we can see it pop up in our workers table. Now the agent will start listening for tasks in the `default` queue, and it's ready to go!

View File

@ -26,7 +26,7 @@ This is the experiment manager's UI, and every row you can see here, is a single
We're currently in our project folder. As you can see, we have our very basic toy example here that we want to keep track of by using ClearML's experiment manager.
The first thing to do is to install the `clearml` python package in our virtual environment. Installing the package itself, will add 3 commands for you. We'll cover the `clearml-data` and `clearml-task` commands later. For now the one we need is `clearml-init`
The first thing to do is to install the `clearml` python package in our virtual environment. Installing the package itself will add 3 commands for you. We'll cover the `clearml-data` and `clearml-task` commands later. For now the one we need is `clearml-init`.
If you paid attention in the first video of this series, you'd remember that we need to connect to a ClearML Server to save all our tracked data. The server is where we saw the list of experiments earlier. This connection is what `clearml-init` will set up for us. When running the command, it'll ask for your server API credentials.

View File

@ -100,10 +100,9 @@ In `slack_alerts.py`, the class `SlackMonitor` inherits from the `Monitor` class
* `process_task` - Get the information for a Task, post a Slack message, and output to console.
* Allows skipping failed Tasks if a Task ran for only a few iterations. Calls [`Task.get_last_iteration`](../../references/sdk/task.md#get_last_iteration)
to get the number of iterations.
* Builds the Slack message which includes the most recent output to the console (retrieved by calling [`Task.get_reported_console_output`](../../references/sdk/task.md#get_reported_console_output)),
and the URL of the Task's output log in the ClearML Web UI (retrieved by calling [`Task.get_output_log_web_page`](../../references/sdk/task.md#get_output_log_web_page)).
* Builds the Slack message which includes the most recent output to the console (retrieved by calling [`Task.get_reported_console_output()`](../../references/sdk/task.md#get_reported_console_output)),
and the URL of the Task's output log in the ClearML Web UI (retrieved by calling [`Task.get_output_log_web_page()`](../../references/sdk/task.md#get_output_log_web_page)).
You can run the example remotely by calling the [`Task.execute_remotely`](../../references/sdk/task.md#execute_remotely)
method.
You can run the example remotely by calling [`Task.execute_remotely()`](../../references/sdk/task.md#execute_remotely).
To interface with Slack, the example uses `slack_sdk.WebClient` and `slack_sdk.errors.SlackApiError`.
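Roughly, the pieces described above fit together along these lines (the task ID, Slack token, and channel are placeholders, and the message formatting is only a sketch of what the example script builds):

```python
from clearml import Task
from slack_sdk import WebClient
from slack_sdk.errors import SlackApiError

# Placeholders: point these at a real task, bot token, and channel
task = Task.get_task(task_id="<task-id>")
console_output = task.get_reported_console_output(number_of_reports=1)
web_page = task.get_output_log_web_page()

client = WebClient(token="<slack-bot-token>")
try:
    client.chat_postMessage(
        channel="#alerts",
        text="Task '{}' stopped after {} iterations\n{}\n{}".format(
            task.name, task.get_last_iteration(), web_page, "".join(console_output)
        ),
    )
except SlackApiError as error:
    print("Failed posting to Slack: {}".format(error))
```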

View File

@ -57,8 +57,8 @@ For more information, see [Custom Metadata](custom_metadata.md).
Frames' `context_id` property facilitates grouping SingleFrames and FrameGroups. When a `context_id` is not explicitly
defined, the frame's source URI is used instead.
When you query the server for frames (e.g. with the [`DataView.get_iterator`](../references/hyperdataset/dataview.md#get_iterator)
method), the returned frames are grouped together according to their `context_id`, and within their context group are
When you query the server for frames (e.g. with [`DataView.get_iterator()`](../references/hyperdataset/dataview.md#get_iterator)),
the returned frames are grouped together according to their `context_id`, and within their context group are
ordered according to their `timestamp`.
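As a rough sketch (assuming the `allegroai` package and placeholder dataset/version names), querying frames could look like this:

```python
from allegroai import DataView

# Dataset and version names below are placeholders
dataview = DataView()
dataview.add_query(dataset_name="MyDataset", version_name="training")

for frame in dataview.get_iterator():
    # Frames arrive grouped by their context_id and ordered by timestamp within each group
    print(frame)
```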
Use the WebApp's dataset version frame browser "Group by URL" option to display a single preview for all frames with the
@ -291,8 +291,7 @@ To access a SingleFrame, the following must be specified:
To update a SingleFrame:
* Access the SingleFrame by calling [`DatasetVersion.get_single_frame()`](../references/hyperdataset/hyperdatasetversion.md#datasetversionget_single_frame)
* Make changes to the frame
* Update the frame in a DatasetVersion using the [`DatasetVersion.update_frames`](../references/hyperdataset/hyperdatasetversion.md#update_frames)
method.
* Update the frame in a DatasetVersion using [`DatasetVersion.update_frames()`](../references/hyperdataset/hyperdatasetversion.md#update_frames)
```python
frames = []
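# A hedged continuation following the three steps above (IDs and names are placeholders;
# assumes DatasetVersion is imported from the allegroai package)
frame = DatasetVersion.get_single_frame(
    frame_id="<frame-id>",
    dataset_name="<dataset-name>",
    version_name="<version-name>",
)
frame.metadata["reviewed"] = True  # make changes to the frame

frames.append(frame)
my_version = DatasetVersion.get_version(dataset_name="<dataset-name>", version_name="<version-name>")
my_version.update_frames(frames)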

View File

@ -13,7 +13,7 @@ To select models to compare:
The comparison page opens in the **DETAILS** tab, with the models compared [side by side](#side-by-side-textual-comparison).
## Modifying Model Selection
### Modifying Model Selection
Click the `MODELS` button to view your currently compared models. Click `X` on a listed model to remove
it from the comparison.

View File

@ -2,7 +2,7 @@
title: Orchestration Dashboard
---
:::note Enterprise Feature
:::important Enterprise Feature
This feature is available under the ClearML Enterprise plan
:::

View File

@ -238,7 +238,7 @@ headings. The number of `#` signs correspond to the heading level (i.e. `#` for
| MarkDown | Rendered Output |
|---|---|
| <code># H1<br/>## H2<br/>### H3<br/>#### H4<br/>##### H5<br/>###### H6</code>|![Report headings](../img/reports_headings.png)|
| <code># H1</code><br/><code>## H2</code><br/><code>### H3</code><br/><code>#### H4</code><br/><code>##### H5</code><br/><code>###### H6</code>|![Report headings](../img/reports_headings.png)|
### Text Emphasis
@ -416,7 +416,7 @@ link:
```
The rendered output should look like this:
![Logo with title](https://raw.githubusercontent.com/allegroai/clearml/master/docs/clearml-logo.svg "ClearML logo")
<img src="https://raw.githubusercontent.com/allegroai/clearml/master/docs/clearml-logo.svg" alt="Logo with Title" title="ClearML logo"/>
Hover over the image to see its title.