Small edits (#670)

pollfly 2023-09-13 10:58:54 +03:00 committed by GitHub
parent a7954101b4
commit 6a9f3adb4d
9 changed files with 21 additions and 26 deletions


@ -46,7 +46,7 @@ clearml-agent build [-h] --id TASK_ID [--target TARGET]
|`--install-globally`| Install the required Python packages before creating the virtual environment. Use `agent.package_manager.system_site_packages` to control the installation of the system packages. When `--docker` is used, `--install-globally` is always true.|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
|`--log-level`| SDK log level. The values are:<ul><li>`DEBUG`</li><li>`INFO`</li><li>`WARN`</li><li>`WARNING`</li><li>`ERROR`</li><li>`CRITICAL`</li></ul>|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
|`--python-version`| Virtual environment Python version to use.|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
|`-O`| Compile optimized pyc code (see python documentation). Repeat for more optimization.|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
|`-O`| Compile optimized pyc code (see [python documentation](https://docs.python.org/3/using/cmdline.html#cmdoption-O)). Repeat for more optimization.|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
|`--target`| The target folder for the virtual environment and source code that will be used at launch.|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
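The `-O` option forwards Python's own `-O` interpreter flag. A quick generic demonstration of what that flag does (plain Python behavior, not specific to `clearml-agent`; assumes `python3` is on the path):

```shell
# -O strips assert statements and sets __debug__ to False;
# repeating it (-OO) also strips docstrings (sys.flags.optimize == 2).
python3 -c 'print(__debug__)'
python3 -O -c 'print(__debug__)'
python3 -OO -c 'import sys; print(sys.flags.optimize)'
```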
## config
@ -96,7 +96,7 @@ clearml-agent daemon [-h] [--foreground] [--queue QUEUES [QUEUES ...]] [--order-
|`--gpus`| If running in Docker mode (see the `--docker` option), specify the active GPUs for the Docker containers to use. These are the same GPUs set in the `NVIDIA_VISIBLE_DEVICES` environment variable. For example: <ul><li>`--gpus 0`</li><li>`--gpus 0,1,2`</li><li>`--gpus all`</li></ul>|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
|`-h`, `--help`| Get help for this command.|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
|`--log-level`| SDK log level. The values are:<ul><li>`DEBUG`</li><li>`INFO`</li><li>`WARN`</li><li>`WARNING`</li><li>`ERROR`</li><li>`CRITICAL`</li></ul>|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
|`-O`| Compile optimized pyc code (see python documentation). Repeat for more optimization.|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
|`-O`| Compile optimized pyc code (see [python documentation](https://docs.python.org/3/using/cmdline.html#cmdoption-O)). Repeat for more optimization.|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
|`--order-fairness`| Pull from each queue in a round-robin order, instead of priority order.|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
|`--queue`| Specify the queues which the worker is listening to. The values can be any combination of:<ul><li>One or more queue IDs</li><li>One or more queue names</li><li>`default` indicating the default queue</li></ul>|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
|`--services-mode`| Launch multiple long-term docker services. Spin multiple, simultaneous Tasks, each in its own Docker container, on the same machine. Each Task will be registered as a new node in the system, providing tracking and transparency capabilities. Startup and shutdown of each Docker container are verified. Use in CPU mode (`--cpu-only`) only. <br/> To limit the number of simultaneous tasks run in services mode, pass the maximum number immediately after the `--services-mode` option (e.g. `--services-mode 5`)|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
@ -137,7 +137,7 @@ clearml-agent execute [-h] --id TASK_ID [--log-file LOG_FILE] [--disable-monitor
|`-h`, `--help`| Get help for this command.|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
|`--log-file`| The text file to which the Task execution output (stdout / stderr) is written.|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
|`--log-level`| SDK log level. The values are:<ul><li>`DEBUG`</li><li>`INFO`</li><li>`WARN`</li><li>`WARNING`</li><li>`ERROR`</li><li>`CRITICAL`</li></ul>|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
|`-O`| Compile optimized pyc code (see python documentation). Repeat for more optimization.|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
|`-O`| Compile optimized pyc code (see [python documentation](https://docs.python.org/3/using/cmdline.html#cmdoption-O)). Repeat for more optimization.|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
|`--require-queue`| If the specified task is not queued, the execution will fail (used for 3rd party scheduler integration, e.g. K8s, SLURM, etc.)|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
|`--standalone-mode`| Do not use any network connection; assume everything is pre-installed.|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|


@ -59,7 +59,7 @@ No upload of the image file is required. Links to image files stored in Google S
1. Click **Create** to import the image. The process can take several minutes depending on the size of the boot disk image.
For more information see [Import the image to your custom images list](https://cloud.google.com/compute/docs/import/import-existing-image#import_image) in the [Compute Engine Documentation](https://cloud.google.com/compute/docs).
For more information see the [Compute Engine Documentation](https://cloud.google.com/compute/docs/import/import-existing-image#import_image).
## Launching


@ -22,7 +22,7 @@ and delete all cookies under the ClearML Server URL.
For Linux users only:
* Linux distribution must support Docker. For more information, see this [explanation](https://docs.docker.com/engine/install/) in the Docker documentation.
* Linux distribution must support Docker. For more information, see the [Docker documentation](https://docs.docker.com/engine/install/).
* Be logged in as a user with `sudo` privileges.
* Use `bash` for all command-line instructions in this installation.
* The ports `8080`, `8081`, and `8008` must be available for the ClearML Server services.
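Before installing, you can sanity-check that those ports are actually free. A minimal sketch, assuming `lsof` is available (common on most Linux distributions):

```shell
# Report whether anything is already listening on the ClearML Server ports.
check_port() {
  if command -v lsof >/dev/null 2>&1 && lsof -i ":$1" >/dev/null 2>&1; then
    echo "in use"
  else
    echo "free"
  fi
}
for port in 8080 8081 8008; do
  echo "port $port: $(check_port "$port")"
done
```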


@ -1,4 +1,4 @@
---
title: FAQ
---
@ -236,8 +236,7 @@ To replace the URL of each model, execute the following commands:
sudo docker exec -it clearml-mongo /bin/bash
```
1. Create the following script inside the Docker shell:
as well as the URL protocol if you aren't using `s3`.
1. Create the following script inside the Docker shell (update the URL protocol as well if you aren't using `s3`):
```bash
cat <<EOT >> script.js
db.model.find({uri:{$regex:/^s3/}}).forEach(function(e,i) {
@ -266,7 +265,7 @@ To fix this, the registered URL of each model needs to be replaced with its curr
sudo docker exec -it clearml-mongo /bin/bash
```
1. Create the following script inside the Docker shell.
1. Create the following script inside the Docker shell:
```bash
cat <<EOT >> script.js
db.model.find({uri:{$regex:/^s3/}}).forEach(function(e,i) {

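The core operation of the Mongo shell script above is a prefix rewrite on each model's registered URI. The same substitution can be sketched in Python (bucket names below are illustrative, not from the original docs):

```python
import re

def replace_uri_prefix(uri, new_prefix):
    """Rewrite the scheme://bucket/ prefix of a model URI, mirroring the
    regex replacement the Mongo shell script performs on each document."""
    return re.sub(r"^[a-z0-9+.-]+://[^/]+/", new_prefix.rstrip("/") + "/", uri)

print(replace_uri_prefix("s3://old-bucket/models/model.pkl", "s3://new-bucket"))
```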

@ -42,7 +42,7 @@ Remember ClearML also stores your code environment, making it reproducible. So w
Back to the overview. One of the output types you can add to your task is what's called an artifact.
An artifact can be a lot of things; mostly they're files, like model weights or pandas dataframes containing preprocessed features, for example. Our documentation lists all supported data types.
An artifact can be a lot of things; mostly they're files, like model weights or Pandas DataFrames containing preprocessed features, for example. Our documentation lists all supported data types.
You can download the artifacts your code produced from the web UI to your local computer if you want to, but artifacts can also be retrieved programmatically.
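As a rough sketch, this is how uploading and retrieving an artifact looks with the SDK. The ClearML calls are commented out because they need an installed and configured ClearML setup, and the names (`"features"`, the project/task names) are illustrative; the local pickle round-trip just mimics the serialization idea:

```python
import os
import pickle
import tempfile

def save_artifact_locally(obj, name):
    # Local stand-in for what upload_artifact does for you:
    # serialize the object so it can be fetched again later.
    path = os.path.join(tempfile.mkdtemp(), name + ".pkl")
    with open(path, "wb") as f:
        pickle.dump(obj, f)
    return path

# With ClearML (requires clearml installed and configured):
# from clearml import Task
# task = Task.init(project_name="examples", task_name="artifact demo")
# task.upload_artifact(name="features", artifact_object={"f1": [0.1, 0.2]})
# features = Task.get_task(task_id=task.id).artifacts["features"].get()

path = save_artifact_locally({"f1": [0.1, 0.2]}, "features")
```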


@ -148,7 +148,7 @@ status, it isn't completed this should not happen but. If it is completed, we ar
functions that I won't go deeper into. Basically, they format the dictionary of the state of the task scalars into
markdown that we can actually use. Let me just go through this one quick time. So we can basically do `Task.get_last_scalar_metrics()`,
and this function is built into ClearML, which basically gives you a dictionary with all the metrics on your task.
We'll just get that formatted into a table, make it into a pandas DataFrame, and then tabulate it with this cool package
We'll just get that formatted into a table, make it into a Pandas DataFrame, and then tabulate it with this cool package
that turns it into Markdown. So now that we have the table in Markdown, we then want to return the results table. You can
view the full task. This is basically the content we want in the comment that will later end up in the PR.
If something else went wrong, we want to log it here. It will also end up in a comment, by the way, so then we know that

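The formatting step described above can be sketched without ClearML: take a metrics dictionary (shaped loosely like what `task.get_last_scalar_metrics()` returns, simplified and with made-up values here) and render it as a Markdown table. This hand-rolls the table instead of using the `tabulate` package, just to stay dependency-free:

```python
def metrics_to_markdown(metrics):
    """Render {metric: {"last": ..., "max": ...}} as a Markdown table."""
    lines = ["| metric | last | max |", "| --- | --- | --- |"]
    for name, values in metrics.items():
        lines.append(f"| {name} | {values['last']} | {values['max']} |")
    return "\n".join(lines)

scalars = {"accuracy": {"last": 0.91, "max": 0.93},
           "loss": {"last": 0.21, "max": 0.35}}
table = metrics_to_markdown(scalars)
print(table)
```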

@ -4,15 +4,15 @@ title: Tables Reporting (Pandas and CSV Files)
The [pandas_reporting.py](https://github.com/allegroai/clearml/blob/master/examples/reporting/pandas_reporting.py) example demonstrates reporting tabular data from Pandas DataFrames and CSV files as tables.
ClearML reports these tables in the **ClearML Web UI** **>** experiment details **>** **PLOTS**
ClearML reports these tables, and displays them in the **ClearML Web UI** **>** experiment details **>** **PLOTS**
tab.
When the script runs, it creates an experiment named `table reporting` in the `examples` project.
## Reporting Pandas DataFrames as Tables
Report Pandas DataFrames by calling the [Logger.report_table](../../references/sdk/logger.md#report_table)
method, and providing the DataFrame in the `table_plot` parameter.
Report Pandas DataFrames by calling [`Logger.report_table()`](../../references/sdk/logger.md#report_table),
and providing the DataFrame in the `table_plot` parameter.
```python
# Report table - DataFrame with index

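Fleshing the snippet out into a runnable sketch: the `Task.init` call is commented out because it needs a configured ClearML setup, and the DataFrame contents and the title/series strings are illustrative:

```python
import pandas as pd

# Build a small DataFrame with an index
df = pd.DataFrame(
    {"num_legs": [2, 4, 8, 0], "num_wings": [2, 0, 0, 0]},
    index=["falcon", "dog", "spider", "fish"],
)

# from clearml import Task
# task = Task.init(project_name="examples", task_name="table reporting")
# task.get_logger().report_table(
#     title="table pd", series="PD with index", iteration=0, table_plot=df
# )
print(df.to_string())
```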

@ -8,8 +8,7 @@ demonstrates ClearML's Plotly integration and reporting.
Report Plotly plots in ClearML by calling the [`Logger.report_plotly`](../../references/sdk/logger.md#report_plotly) method, and passing a complex
Plotly figure, using the `figure` parameter.
In this example, the Plotly figure is created using `plotly.express.scatter` (see [Scatter Plots in Python](https://plotly.com/python/line-and-scatter/)
in the Plotly documentation):
In this example, the Plotly figure is created using `plotly.express.scatter` (see the [Plotly documentation](https://plotly.com/python/line-and-scatter/)):
```python
# Iris dataset
@ -33,7 +32,7 @@ task.get_logger().report_plotly(
When the script runs, it creates an experiment named `plotly reporting` in the examples project.
ClearML reports Plotly plots in the **ClearML Web UI** **>** experiment details **>** **PLOTS**
ClearML reports Plotly figures, and displays them in the **ClearML Web UI** **>** experiment details **>** **PLOTS**
tab.
![image](../../img/examples_reporting_13.png)
![Web UI experiment plots](../../img/examples_reporting_13.png)


@ -35,7 +35,7 @@ For more information, see [Annotations](annotations.md).
### Masks
A `SingleFrame` can include a URI link to masks file if applicable. Masks correspond to raw data where the objects to be
A `SingleFrame` can include a URI link to a mask file if applicable. Masks correspond to raw data where the objects to be
detected are marked with colors or different opacity levels in the masks.
For more information, see [Masks](masks.md).
@ -238,7 +238,7 @@ For more information, see the [`SingleFrame`](../references/hyperdataset/singlef
### Adding SingleFrames to a Dataset Version
Use the [`DatasetVersion.add_frames`](../references/hyperdataset/hyperdatasetversion.md#add_frames) method to add
Use [`DatasetVersion.add_frames()`](../references/hyperdataset/hyperdatasetversion.md#add_frames) to add
SingleFrames to a [Dataset version](dataset.md#dataset-versioning) (see [Creating snapshots](dataset.md#creating-snapshots)
or [Creating child versions](dataset.md#creating-child-versions)). Frames that are already a part of the dataset version
will only be updated.
@ -270,8 +270,7 @@ myDatasetversion.add_frames(frames)
### Accessing SingleFrames
To access a SingleFrame, use the [`DatasetVersion.get_single_frame`](../references/hyperdataset/hyperdatasetversion.md#datasetversionget_single_frame)
method.
To access a SingleFrame, use [`DatasetVersion.get_single_frame()`](../references/hyperdataset/hyperdatasetversion.md#datasetversionget_single_frame).
```python
from allegroai import DatasetVersion
@ -290,8 +289,7 @@ To access a SingleFrame, the following must be specified:
### Updating SingleFrames
To update a SingleFrame:
* Access the SingleFrame by calling the [`DatasetVersion.get_single_frame`](../references/hyperdataset/hyperdatasetversion.md#datasetversionget_single_frame)
method
* Access the SingleFrame by calling [`DatasetVersion.get_single_frame()`](../references/hyperdataset/hyperdatasetversion.md#datasetversionget_single_frame)
* Make changes to the frame
* Update the frame in a DatasetVersion using the [`DatasetVersion.update_frames`](../references/hyperdataset/hyperdatasetversion.md#update_frames)
method.
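A sketch of that three-step read-modify-write flow. The `allegroai` SDK ships with ClearML Enterprise and needs a configured HyperDataset, so its calls are commented out and the IDs are illustrative; the dictionary below is a purely local stand-in for a frame:

```python
# With the allegroai SDK (commented sketch; IDs/names are illustrative):
# frame = myDatasetVersion.get_single_frame(
#     frame_id="f0001", dataset_name="my dataset", version_name="v1"
# )
# frame.metadata["reviewed"] = True        # step 2: make changes to the frame
# myDatasetVersion.update_frames([frame])  # step 3: push the update back

# Local stand-in illustrating the same read-modify-write pattern:
frame = {"id": "f0001", "metadata": {}}
frame["metadata"]["reviewed"] = True
```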
@ -327,8 +325,7 @@ myDatasetVersion.update_frames(frames)
### Deleting Frames
To delete a SingleFrame, use the [`DatasetVersion.delete_frames`](../references/hyperdataset/hyperdatasetversion.md#delete_frames)
method.
To delete a SingleFrame, use [`DatasetVersion.delete_frames()`](../references/hyperdataset/hyperdatasetversion.md#delete_frames).
```python
frames = []