Small edits (#906)

This commit is contained in:
pollfly 2024-08-25 13:50:12 +03:00 committed by GitHub
parent 1ed353020f
commit a943bbd39a
17 changed files with 36 additions and 37 deletions


@ -259,7 +259,7 @@ dataset.get_logger().report_histogram(
## Uploading Files
To upload the dataset files to network storage, use [`Dataset.upload()`](../references/sdk/dataset.md#upload).
Use the `output_url` parameter to specify the storage target, such as S3 / GS / Azure. For example:
* A shared folder: `/mnt/share/folder`
@ -319,7 +319,7 @@ Dataset.delete(
```
## Renaming Datasets
Rename a dataset using the [`Dataset.rename()`](../references/sdk/dataset.md#datasetrename) class method. All the datasets
with the given `dataset_project` and `dataset_name` will be renamed.
```python
@ -331,7 +331,7 @@ Dataset.rename(
```
## Moving Datasets to Another Project
Move a dataset to another project using the [`Dataset.move_to_project()`](../references/sdk/dataset.md#datasetmove_to_projetc)
class method. All the datasets with the given `dataset_project` and `dataset_name` will be moved to the new dataset
project.


@ -196,7 +196,7 @@ Pass one of the following in the `continue_last_task` parameter:
iteration after the last reported one. Pass `0` to disable the automatic last iteration offset. To also specify a
task ID, use the `reuse_last_task_id` parameter.
You can also continue a task previously executed in offline mode, using `Task.import_offline_session()`.
See [Offline Mode](#offline-mode).
### Empty Task Creation
@ -263,7 +263,7 @@ A task can be identified by its project and name, and by a unique identifier (UU
a task can be changed after an experiment has been executed, but its ID can't be changed.
Programmatically, task objects can be retrieved by querying the system based on either the task ID or a project and name
combination using the [`Task.get_task()`](../references/sdk/task.md#taskget_task) class method. If a project / name
combination is used, and multiple tasks have the exact same name, the function will return the *last modified task*.
For example:
@ -283,7 +283,7 @@ The task's outputs, such as artifacts and models, can also be retrieved.
## Querying / Searching Tasks
Search and filter tasks programmatically. Input search parameters into the [`Task.get_tasks()`](../references/sdk/task.md#taskget_tasks)
class method, which returns a list of task objects that match the search. Pass `allow_archived=False` to filter out archived
tasks.
@ -570,7 +570,7 @@ You can work with tasks in Offline Mode, in which all the data and logs that the
session folder, which can later be uploaded to the [ClearML Server](../deploying_clearml/clearml_server.md).
You can enable offline mode in one of the following ways:
* Before initializing a task, use the [`Task.set_offline()`](../references/sdk/task.md#taskset_offline) class method and set
the `offline_mode` argument to `True`:
```python
@ -607,7 +607,7 @@ Upload the execution data that the Task captured offline to the ClearML Server u
```
Pass the path to the zip folder containing the captured information with the `--import-offline-session` parameter
* [`Task.import_offline_session()`](../references/sdk/task.md#taskimport_offline_session) class method
```python
from clearml import Task
@ -903,7 +903,7 @@ This method saves configuration objects as blobs (i.e. ClearML is not aware of t
```python
# connect a configuration dictionary
model_config_dict = {
'value': 13.37, 'dict': {'sub_value': 'string'}, 'list_of_ints': [1, 2, 3, 4],
}
model_config_dict = task.connect_configuration(
name='dictionary', configuration=model_config_dict


@ -455,7 +455,7 @@ You cannot undo the deletion of a ClearML object.
#### Can I change the random seed my experiment uses?
Yes! By default, ClearML initializes Tasks with an initial seed of `1337` to ensure reproducibility. To set a different
value for your task, use the [`Task.set_random_seed()`](references/sdk/task.md#taskset_random_seed) class method and
provide the new seed value, **before initializing the task**.
You can disable the deterministic behavior entirely by passing `Task.set_random_seed(None)`.
@ -557,7 +557,7 @@ Yes! You can use ClearML's Offline Mode, in which all the data and logs that a t
local folder.
You can enable offline mode in one of the following ways:
* Before initializing a task, use the [`Task.set_offline()`](references/sdk/task.md#taskset_offline) class method and set
the `offline_mode` argument to `True`
* Before running a task, set `CLEARML_OFFLINE_MODE=1`
@ -578,7 +578,7 @@ ClearML Task: Offline session stored in /home/user/.clearml/cache/offline/b78684
To upload the execution data that the task captured offline to the ClearML Server, do one of the
following:
* Use the `import-offline-session <session_path>` option of the [clearml-task](apps/clearml_task.md) CLI
* Use the [`Task.import_offline_session()`](references/sdk/task.md#taskimport_offline_session) method.
See [Storing Task Data Offline](guides/set_offline.md).
@ -627,7 +627,7 @@ tutorial.
#### How can I report more than one scatter 2D series on the same plot? <a id="multiple-scatter2D"></a>
The [`Logger.report_scatter2d()`](references/sdk/logger.md#report_scatter2d)
method reports all series with the same `title` and `iteration` parameter values on the same plot.
For example, the following two scatter2D series are reported on the same plot, because both have a `title` of `example_scatter` and an `iteration` of `1`:


@ -46,7 +46,7 @@ Projects can also be created using the [`projects.create`](../references/api/pro
### View All Projects in System
To view all projects in the system, use the [`Task.get_projects()`](../references/sdk/task.md#taskgetprojects) class method:
```python
project_list = Task.get_projects()


@ -63,7 +63,7 @@ pip install clearml
page, click **Create new credentials**.
The **LOCAL PYTHON** tab shows the data required by the setup wizard (a copy to clipboard action is available on
hover).
1. At the command prompt `Paste copied configuration here:`, copy and paste the ClearML credentials.
The setup wizard confirms the credentials.


@ -3,7 +3,7 @@ title: Remote Execution
---
The [execute_remotely_example](https://github.com/allegroai/clearml/blob/master/examples/advanced/execute_remotely_example.py)
script demonstrates the use of the [`Task.execute_remotely()`](../../references/sdk/task.md#execute_remotely) method.
:::note
Make sure to have at least one [ClearML Agent](../../clearml_agent.md) running and assigned to listen to the `default` queue:


@ -45,7 +45,7 @@ optimizer = HyperParameterOptimizer(
# Configuring optimization parameters
execution_queue='dan_queue', # queue to schedule the experiments for execution
max_number_of_concurrent_tasks=2, # number of concurrent experiments
optimization_time_limit=60, # set the time limit for the optimization process
compute_time_limit=120, # set the compute time limit (sum of execution time on all machines)
total_max_jobs=20, # set the maximum number of experiments for the optimization.
# Converted to total number of iteration for OptimizerBOHB


@ -3,8 +3,7 @@ title: HTML Reporting
---
The [html_reporting.py](https://github.com/allegroai/clearml/blob/master/examples/reporting/html_reporting.py) example
demonstrates reporting local HTML files and HTML by URL using [`Logger.report_media()`](../../references/sdk/logger.md#report_media).
ClearML reports these HTML debug samples in the **ClearML Web UI** **>** experiment details **>**
**DEBUG SAMPLES** tab.
@ -31,7 +30,7 @@ Logger.current_logger().report_media(
## Reporting HTML Local Files
Report the following using `Logger.report_media()`'s `local_path` parameter:
* [Interactive HTML](#interactive-html)
* [Bokeh GroupBy HTML](#bokeh-groupby-html)
* [Bokeh Graph HTML](#bokeh-graph-html)


@ -54,8 +54,8 @@ TensorFlow Definitions appear in **HYPERPARAMETERS** **>** **TF_DEFINE**.
## Parameter Dictionaries
Connect a parameter dictionary to a Task by calling [`Task.connect()`](../../references/sdk/task.md#connect),
and ClearML logs the parameters. ClearML also tracks changes to the parameters.
```python
parameters = {


@ -5,7 +5,7 @@ title: Plotly Reporting
The [plotly_reporting.py](https://github.com/allegroai/clearml/blob/master/examples/reporting/plotly_reporting.py) example
demonstrates ClearML's Plotly integration and reporting.
Report Plotly plots in ClearML by calling the [`Logger.report_plotly()`](../../references/sdk/logger.md#report_plotly) method, and passing a complex
Plotly figure, using the `figure` parameter.
In this example, the Plotly figure is created using `plotly.express.scatter` (see the [Plotly documentation](https://plotly.com/python/line-and-scatter/)):


@ -27,7 +27,7 @@ Artifact details (location and size) can be viewed in ClearML's **web UI > exper
## Task 2: Accessing an Artifact
After the second task is initialized, the script uses the [`Task.get_task()`](../../references/sdk/task.md#taskget_task)
class method to get the first task and access its artifacts, specifically the `data file` artifact. The `get_local_copy`
method downloads the files and returns a path.


@ -152,7 +152,7 @@ Make sure a `clearml-agent` is assigned to that queue.
### Configuration
The values configured through the wizard are stored in the task's hyperparameters and configuration objects by using the
[`Task.connect()`](../../references/sdk/task.md#connect) and [`Task.set_configuration_object()`](../../references/sdk/task.md#set_configuration_object)
methods respectively. They can be viewed in the WebApp, in the task's **CONFIGURATION** page under **HYPERPARAMETERS** and **CONFIGURATION OBJECTS > General**.
ClearML automatically logs command line arguments defined with argparse. View them in the experiment's **CONFIGURATION**


@ -52,7 +52,7 @@ an `APIClient` object that establishes a session with the ClearML Server, and ac
* [`Task.delete`](../../references/sdk/task.md#delete) - Delete a Task.
## Configuration
The experiment's hyperparameters are explicitly logged to ClearML using the [`Task.connect()`](../../references/sdk/task.md#connect)
method. View them in the WebApp, in the experiment's **CONFIGURATION** page under **HYPERPARAMETERS > General**.
The task can be reused. Clone the task, edit its parameters, and enqueue the task to run in ClearML Agent [services mode](../../clearml_agent/clearml_agent_services_mode.md).


@ -16,7 +16,7 @@ class. The storage examples include:
## Working with Files
### Downloading a File
To download a ZIP file from storage to the `global` cache context, use the [`StorageManager.get_local_copy()`](../../references/sdk/storage.md#storagemanagerget_local_copy)
class method, and specify the destination location as the `remote_url` argument:
```python
@ -42,7 +42,7 @@ StorageManager.get_local_copy(remote_url="s3://MyBucket/MyFolder/file.ext", extr
```
By default, the `StorageManager` reports its download progress to the console every 5MB. You can change this using the
[`StorageManager.set_report_download_chunk_size()`](../../references/sdk/storage.md#storagemanagerset_report_download_chunk_size)
class method, and specifying the chunk size in MB (not supported for Azure and GCP storage).
```python
@ -51,7 +51,7 @@ StorageManager.set_report_download_chunk_size(chunk_size_mb=10)
### Uploading a File
To upload a file to storage, use the [`StorageManager.upload_file()`](../../references/sdk/storage.md#storagemanagerupload_file)
class method. Specify the full path of the local file as the `local_file` argument, and the remote URL as the `remote_url`
argument.
@ -64,7 +64,7 @@ StorageManager.upload_file(
Use the `retries` parameter to set the number of times file upload should be retried in case of failure.
By default, the `StorageManager` reports its upload progress to the console every 5MB. You can change this using the
[`StorageManager.set_report_upload_chunk_size()`](../../references/sdk/storage.md#storagemanagerset_report_upload_chunk_size)
class method, and specifying the chunk size in MB (not supported for Azure and GCP storage).
```python
@ -73,7 +73,7 @@ StorageManager.set_report_upload_chunk_size(chunk_size_mb=10)
## Working with Folders
### Downloading a Folder
Download a folder to a local machine using the [`StorageManager.download_folder()`](../../references/sdk/storage.md#storagemanagerdownload_folder)
class method. Specify the remote storage location as the `remote_url` argument and the target local location as the
`local_folder` argument.
@ -90,7 +90,7 @@ For example: if you have a remote file `s3://bucket/sub/file.ext`, then
You can pass `match_wildcard` so that only files matching the wildcard are downloaded.
### Uploading a Folder
Upload a local folder to remote storage using the [`StorageManager.upload_folder()`](../../references/sdk/storage.md#storagemanagerupload_folder)
class method. Specify the local folder to upload as the `local_folder` argument and the target remote location as the
`remote_url` argument.
@ -112,7 +112,7 @@ You can input `match_wildcard` so only files matching the wildcard are uploaded.
## Setting Cache Limits
To set a limit on the number of files cached, use the [`StorageManager.set_cache_file_limit()`](../../references/sdk/storage.md#storagemanagerset_cache_file_limit)
class method and specify the `cache_file_limit` argument as the maximum number of files. This does not limit the cache size,
only the number of files.


@ -50,7 +50,7 @@ To add FrameGroups to a Dataset Version:
1. Append the FrameGroup object to a list of frames
1. Add that list to a DatasetVersion using the [`DatasetVersion.add_frames()`](../references/hyperdataset/hyperdatasetversion.md#add_frames)
method. Use the `upload_retries` parameter to set the number of times the upload of a frame should be retried in case of
failure, before marking the frame as failed and continuing to upload the next frames. If a single frame in
the FrameGroup fails to upload, the entire group will not be registered. The method returns a list of frames that were
@ -116,7 +116,7 @@ myVersion.update_frames(frames)
### Deleting Frames
To delete a FrameGroup, use the [`DatasetVersion.delete_frames()`](../references/hyperdataset/hyperdatasetversion.md#delete_frames)
method, just like when deleting a SingleFrame, except that a FrameGroup is being referenced.
```python


@ -46,7 +46,7 @@ For example:
```python
auto_connect_frameworks={
'fastai': False, 'catboost': True, 'tensorflow': False, 'tensorboard': False, 'pytorch': True,
'xgboost': False, 'scikit': True, 'lightgbm': False,
'hydra': True, 'detect_repository': True, 'tfdefines': True, 'joblib': True,
'megengine': True
}


@ -598,7 +598,7 @@ Administrators specify the total number of resources available in each pool. The
workload assignment up to the available number of resources.
Administrators control the execution priority within a pool across the resource profiles making use of it (e.g. if jobs
of profile A and jobs of profile B currently need to run in a pool, allocate resources for profile A jobs first or vice
versa).
The resource pool cards are displayed on the top of the Resource Configuration settings page. Each card displays the