From 6a9f3adb4d6bdb5b4f87a81284a755e49ba55027 Mon Sep 17 00:00:00 2001
From: pollfly <75068813+pollfly@users.noreply.github.com>
Date: Wed, 13 Sep 2023 10:58:54 +0300
Subject: [PATCH] Small edits (#670)
---
docs/clearml_agent/clearml_agent_ref.md | 6 +++---
docs/deploying_clearml/clearml_server_gcp.md | 2 +-
docs/deploying_clearml/clearml_server_linux_mac.md | 2 +-
docs/faq.md | 7 +++----
.../experiment_management_best_practices.md | 2 +-
.../ml_ci_cd_using_github_actions_and_clearml.md | 2 +-
docs/guides/reporting/pandas_reporting.md | 6 +++---
docs/guides/reporting/plotly_reporting.md | 7 +++----
docs/hyperdatasets/single_frames.md | 13 +++++--------
9 files changed, 21 insertions(+), 26 deletions(-)
diff --git a/docs/clearml_agent/clearml_agent_ref.md b/docs/clearml_agent/clearml_agent_ref.md
index 2b498b1e..46960c71 100644
--- a/docs/clearml_agent/clearml_agent_ref.md
+++ b/docs/clearml_agent/clearml_agent_ref.md
@@ -46,7 +46,7 @@ clearml-agent build [-h] --id TASK_ID [--target TARGET]
|`--install-globally`| Install the required Python packages before creating the virtual environment. Use `agent.package_manager.system_site_packages` to control the installation of the system packages. When `--docker` is used, `--install-globally` is always true.|
|`--log-level`| SDK log level. The values are:<br/>`DEBUG`<br/>`INFO`<br/>`WARN`<br/>`WARNING`<br/>`ERROR`<br/>`CRITICAL`|
|`--python-version`| Virtual environment Python version to use.|
-|`-O`| Compile optimized pyc code (see python documentation). Repeat for more optimization.|
+|`-O`| Compile optimized pyc code (see [Python documentation](https://docs.python.org/3/using/cmdline.html#cmdoption-O)). Repeat for more optimization.|
|`--target`| The target folder for the virtual environment and source code that will be used at launch.|
## config
@@ -96,7 +96,7 @@ clearml-agent daemon [-h] [--foreground] [--queue QUEUES [QUEUES ...]] [--order-
|`--gpus`| If running in Docker mode (see the `--docker` option), specify the active GPUs for the Docker containers to use. These are the same GPUs set in the `NVIDIA_VISIBLE_DEVICES` environment variable. For example:<br/>`--gpus 0`<br/>`--gpu 0,1,2`<br/>`--gpus all`|
|`-h`, `--help`| Get help for this command.|
|`--log-level`| SDK log level. The values are:<br/>`DEBUG`<br/>`INFO`<br/>`WARN`<br/>`WARNING`<br/>`ERROR`<br/>`CRITICAL`|
-|`-O`| Compile optimized pyc code (see python documentation). Repeat for more optimization.|
+|`-O`| Compile optimized pyc code (see [Python documentation](https://docs.python.org/3/using/cmdline.html#cmdoption-O)). Repeat for more optimization.|
|`--order-fairness`| Pull from each queue in a round-robin order, instead of priority order.|
|`--queue`| Specify the queues which the worker is listening to. The values can be any combination of:<br/>One or more queue IDs<br/>One or more queue names<br/>`default` indicating the default queue|
|`--services-mode`| Launch multiple long-term docker services. Spin multiple, simultaneous Tasks, each in its own Docker container, on the same machine. Each Task will be registered as a new node in the system, providing tracking and transparency capabilities. Start up and shutdown of each Docker is verified. Use in CPU mode (`--cpu-only`) only.<br/>To limit the number of simultaneous tasks run in services mode, pass the maximum number immediately after the `--services-mode` option (e.g. `--services-mode 5`)|
@@ -137,7 +137,7 @@ clearml-agent execute [-h] --id TASK_ID [--log-file LOG_FILE] [--disable-monitor
|`-h`, `--help`| Get help for this command.|
|`--log-file`| The log file to which Task execution output (stdout / stderr) is written.|
|`--log-level`| SDK log level. The values are:<br/>`DEBUG`<br/>`INFO`<br/>`WARN`<br/>`WARNING`<br/>`ERROR`<br/>`CRITICAL`|
-|`-O`| Compile optimized pyc code (see python documentation). Repeat for more optimization.|
+|`-O`| Compile optimized pyc code (see [Python documentation](https://docs.python.org/3/using/cmdline.html#cmdoption-O)). Repeat for more optimization.|
|`--require-queue`| If the specified task is not queued, the execution will fail (used for 3rd party scheduler integration, e.g. K8s, SLURM, etc.)|
|`--standalone-mode`| Do not use any network connections; assume everything is pre-installed.|
diff --git a/docs/deploying_clearml/clearml_server_gcp.md b/docs/deploying_clearml/clearml_server_gcp.md
index ecab6218..e2867443 100644
--- a/docs/deploying_clearml/clearml_server_gcp.md
+++ b/docs/deploying_clearml/clearml_server_gcp.md
@@ -59,7 +59,7 @@ No upload of the image file is required. Links to image files stored in Google S
1. Click **Create** to import the image. The process can take several minutes depending on the size of the boot disk image.
-For more information see [Import the image to your custom images list](https://cloud.google.com/compute/docs/import/import-existing-image#import_image) in the [Compute Engine Documentation](https://cloud.google.com/compute/docs).
+For more information see the [Compute Engine Documentation](https://cloud.google.com/compute/docs/import/import-existing-image#import_image).
## Launching
diff --git a/docs/deploying_clearml/clearml_server_linux_mac.md b/docs/deploying_clearml/clearml_server_linux_mac.md
index 764c2286..f4937e24 100644
--- a/docs/deploying_clearml/clearml_server_linux_mac.md
+++ b/docs/deploying_clearml/clearml_server_linux_mac.md
@@ -22,7 +22,7 @@ and delete all cookies under the ClearML Server URL.
For Linux users only:
-* Linux distribution must support Docker. For more information, see this [explanation](https://docs.docker.com/engine/install/) in the Docker documentation.
+* Linux distribution must support Docker. For more information, see the [Docker documentation](https://docs.docker.com/engine/install/).
* Be logged in as a user with `sudo` privileges.
* Use `bash` for all command-line instructions in this installation.
* The ports `8080`, `8081`, and `8008` must be available for the ClearML Server services.
diff --git a/docs/faq.md b/docs/faq.md
index 2e4db4cc..f4d5877b 100644
--- a/docs/faq.md
+++ b/docs/faq.md
@@ -1,4 +1,4 @@
----
+---
title: FAQ
---
@@ -236,8 +236,7 @@ To replace the URL of each model, execute the following commands:
sudo docker exec -it clearml-mongo /bin/bash
```
-1. Create the following script inside the Docker shell:
- as well as the URL protocol if you aren't using `s3`.
+1. Create the following script inside the Docker shell (replace the bucket names, as well as the URL protocol if you aren't using `s3`):
```bash
cat <<EOT >> script.js
db.model.find({uri:{$regex:/^s3/}}).forEach(function(e,i) {
@@ -266,7 +265,7 @@ To fix this, the registered URL of each model needs to be replaced with its curr
sudo docker exec -it clearml-mongo /bin/bash
```
-1. Create the following script inside the Docker shell.
+1. Create the following script inside the Docker shell:
```bash
cat <<EOT >> script.js
db.model.find({uri:{$regex:/^s3/}}).forEach(function(e,i) {
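For readers more comfortable in Python than in the mongo shell, the same URL rewrite can be sketched with `pymongo`. This is an alternative sketch, not the FAQ's mongo-shell script: the connection URL, the `backend` database name, and the bucket prefixes below are assumptions/placeholders to adjust for your deployment.

```python
from pymongo import MongoClient

OLD_PREFIX = "s3://old-bucket-name/"  # placeholder: your old bucket prefix
NEW_PREFIX = "s3://new-bucket-name/"  # placeholder: your new bucket prefix

# Assumes the ClearML Server MongoDB is reachable locally and uses the "backend" database
client = MongoClient("mongodb://localhost:27017")
models = client["backend"]["model"]

# Rewrite the registered URI of every model whose URI starts with "s3"
for doc in models.find({"uri": {"$regex": "^s3"}}):
    models.update_one(
        {"_id": doc["_id"]},
        {"$set": {"uri": doc["uri"].replace(OLD_PREFIX, NEW_PREFIX)}},
    )
```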
diff --git a/docs/getting_started/video_tutorials/experiment_management_best_practices.md b/docs/getting_started/video_tutorials/experiment_management_best_practices.md
index fcbbdae8..64767963 100644
--- a/docs/getting_started/video_tutorials/experiment_management_best_practices.md
+++ b/docs/getting_started/video_tutorials/experiment_management_best_practices.md
@@ -42,7 +42,7 @@ Remember ClearML also stores your code environment, making it reproducible. So w
Back to the overview. One of the output types you can add to your task is what’s called an artifact.
-An artifact can be a lot of things, mostly they’re files like model weights or pandas dataframes containing preprocessed features for example. Our documentation lists all supported data types.
+An artifact can be a lot of things, mostly they’re files like model weights or Pandas DataFrames containing preprocessed features for example. Our documentation lists all supported data types.
You can download the artifacts your code produced from the web UI to your local computer if you want to, but artifacts can also be retrieved programmatically.
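A minimal sketch of that programmatic round trip, assuming illustrative project/task names and an illustrative artifact object:

```python
from clearml import Task

# Upload an artifact from the experiment that produces it
task = Task.init(project_name="examples", task_name="artifact demo")  # illustrative names
task.upload_artifact(name="features", artifact_object={"mean": 0.5, "std": 0.1})

# Retrieve it programmatically later, e.g. from another script
prev_task = Task.get_task(project_name="examples", task_name="artifact demo")
features = prev_task.artifacts["features"].get()                # deserialized object
# file_path = prev_task.artifacts["features"].get_local_copy()  # or a local file path
```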
diff --git a/docs/getting_started/video_tutorials/hands-on_mlops_tutorials/ml_ci_cd_using_github_actions_and_clearml.md b/docs/getting_started/video_tutorials/hands-on_mlops_tutorials/ml_ci_cd_using_github_actions_and_clearml.md
index 450f1aa2..db93143e 100644
--- a/docs/getting_started/video_tutorials/hands-on_mlops_tutorials/ml_ci_cd_using_github_actions_and_clearml.md
+++ b/docs/getting_started/video_tutorials/hands-on_mlops_tutorials/ml_ci_cd_using_github_actions_and_clearml.md
@@ -148,7 +148,7 @@ status, it isn't completed this should not happen but. If it is completed, we ar
functions that I won't go deeper into. Basically, they format the dictionary of the state of the task scalars into
markdown that we can actually use. Let me just go into this though one quick time. So we can basically do `Task.get_last_scalar_metrics()`,
and this function is built into ClearML, which basically gives you a dictionary with all the metrics on your task.
-We'll just get that formatted into a table, make it into a pandas DataFrame, and then tabulate it with this cool package
+We'll just get that formatted into a table, make it into a Pandas DataFrame, and then tabulate it with this cool package
that turns it into Markdown. So now that we have Markdown in the table, we then want to return the results table. You can
view the full task. This is basically the comment content we want to be in the comment that will later end up in the PR.
If something else went wrong, we want to log it here. It will also end up in a comment, by the way, so then we know that
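A rough sketch of the kind of helper described above; this is not the tutorial's actual code, and the task ID, the flattening logic, and the `github` table format are assumptions:

```python
from clearml import Task
import pandas as pd
from tabulate import tabulate

task = Task.get_task(task_id="<task_id>")   # placeholder task ID
metrics = task.get_last_scalar_metrics()    # {title: {series: {"last": ..., "min": ..., "max": ...}}}

# Flatten the nested dict into rows, then render a Markdown table for the PR comment
rows = [
    {"title": title, "series": series, **values}
    for title, series_dict in metrics.items()
    for series, values in series_dict.items()
]
print(tabulate(pd.DataFrame(rows), headers="keys", tablefmt="github", showindex=False))
```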
diff --git a/docs/guides/reporting/pandas_reporting.md b/docs/guides/reporting/pandas_reporting.md
index 49358a26..f7fa3787 100644
--- a/docs/guides/reporting/pandas_reporting.md
+++ b/docs/guides/reporting/pandas_reporting.md
@@ -4,15 +4,15 @@ title: Tables Reporting (Pandas and CSV Files)
The [pandas_reporting.py](https://github.com/allegroai/clearml/blob/master/examples/reporting/pandas_reporting.py) example demonstrates reporting tabular data from Pandas DataFrames and CSV files as tables.
-ClearML reports these tables in the **ClearML Web UI** **>** experiment details **>** **PLOTS**
+ClearML reports these tables, and displays them in the **ClearML Web UI** **>** experiment details **>** **PLOTS**
tab.
When the script runs, it creates an experiment named `table reporting` in the `examples` project.
## Reporting Pandas DataFrames as Tables
-Report Pandas DataFrames by calling the [Logger.report_table](../../references/sdk/logger.md#report_table)
-method, and providing the DataFrame in the `table_plot` parameter.
+Report Pandas DataFrames by calling [`Logger.report_table()`](../../references/sdk/logger.md#report_table),
+and providing the DataFrame in the `table_plot` parameter.
```python
# Report table - DataFrame with index
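# (A hedged sketch of how this snippet might continue; the DataFrame contents and the
#  title/series names below are illustrative, not the example file's exact values.)
import pandas as pd
from clearml import Task

task = Task.init(project_name="examples", task_name="table reporting")
df = pd.DataFrame(
    {"num_legs": [2, 4, 8, 0], "num_wings": [2, 0, 0, 0]},
    index=["falcon", "dog", "spider", "fish"],
)
task.get_logger().report_table(
    title="table pd", series="PD with index", iteration=0, table_plot=df
)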
diff --git a/docs/guides/reporting/plotly_reporting.md b/docs/guides/reporting/plotly_reporting.md
index ab7b26d9..6870df6c 100644
--- a/docs/guides/reporting/plotly_reporting.md
+++ b/docs/guides/reporting/plotly_reporting.md
@@ -8,8 +8,7 @@ demonstrates ClearML's Plotly integration and reporting.
Report Plotly plots in ClearML by calling the [`Logger.report_plotly`](../../references/sdk/logger.md#report_plotly) method, and passing a complex
Plotly figure, using the `figure` parameter.
-In this example, the Plotly figure is created using `plotly.express.scatter` (see [Scatter Plots in Python](https://plotly.com/python/line-and-scatter/)
-in the Plotly documentation):
+In this example, the Plotly figure is created using `plotly.express.scatter` (see the [Plotly documentation](https://plotly.com/python/line-and-scatter/)):
```python
# Iris dataset
@@ -33,7 +32,7 @@ task.get_logger().report_plotly(
When the script runs, it creates an experiment named `plotly reporting` in the examples project.
-ClearML reports Plotly plots in the **ClearML Web UI** **>** experiment details **>** **PLOTS**
+ClearML reports Plotly figures, and displays them in the **ClearML Web UI** **>** experiment details **>** **PLOTS**
tab.
-
\ No newline at end of file
+
\ No newline at end of file
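A minimal sketch of the pattern this example describes: build a scatter figure with `plotly.express` and pass it to `report_plotly()`. The project, task, title, and series names are illustrative, not the example file's exact values.

```python
import plotly.express as px
from clearml import Task

task = Task.init(project_name="examples", task_name="plotly reporting")  # illustrative names

# Build a Plotly figure with plotly.express and report it to the experiment's PLOTS tab
df = px.data.iris()
fig = px.scatter(df, x="sepal_width", y="sepal_length", color="species")
task.get_logger().report_plotly(title="iris", series="sepal", iteration=0, figure=fig)
```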
diff --git a/docs/hyperdatasets/single_frames.md b/docs/hyperdatasets/single_frames.md
index ccb91fee..86ef1c42 100644
--- a/docs/hyperdatasets/single_frames.md
+++ b/docs/hyperdatasets/single_frames.md
@@ -35,7 +35,7 @@ For more information, see [Annotations](annotations.md).
### Masks
-A `SingleFrame` can include a URI link to masks file if applicable. Masks correspond to raw data where the objects to be
+A `SingleFrame` can include a URI link to a mask file if applicable. Masks correspond to raw data where the objects to be
detected are marked with colors or different opacity levels in the masks.
For more information, see [Masks](masks.md).
@@ -238,7 +238,7 @@ For more information, see the [`SingleFrame`](../references/hyperdataset/singlef
### Adding SingleFrames to a Dataset Version
-Use the [`DatasetVersion.add_frames`](../references/hyperdataset/hyperdatasetversion.md#add_frames) method to add
+Use [`DatasetVersion.add_frames()`](../references/hyperdataset/hyperdatasetversion.md#add_frames) to add
SingleFrames to a [Dataset version](dataset.md#dataset-versioning) (see [Creating snapshots](dataset.md#creating-snapshots)
or [Creating child versions](dataset.md#creating-child-versions)). Frames that are already a part of the dataset version
will only be updated.
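A minimal sketch of adding a frame to a version; the dataset/version names and the source URI are placeholders:

```python
from allegroai import DatasetVersion, SingleFrame

# Illustrative dataset / version names and a placeholder source URI
myDatasetVersion = DatasetVersion.get_version(
    dataset_name="MyDataset", version_name="SingleFrame Example"
)
frame = SingleFrame(
    source="https://storage.example.com/images/000123.jpg",
    preview_uri="https://storage.example.com/images/000123.jpg",
)
myDatasetVersion.add_frames([frame])
```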
@@ -270,8 +270,7 @@ myDatasetversion.add_frames(frames)
### Accessing SingleFrames
-To access a SingleFrame, use the [`DatasetVersion.get_single_frame`](../references/hyperdataset/hyperdatasetversion.md#datasetversionget_single_frame)
-method.
+To access a SingleFrame, use [`DatasetVersion.get_single_frame()`](../references/hyperdataset/hyperdatasetversion.md#datasetversionget_single_frame).
```python
from allegroai import DatasetVersion
@@ -290,8 +289,7 @@ To access a SingleFrame, the following must be specified:
### Updating SingleFrames
To update a SingleFrame:
-* Access the SingleFrame by calling the [`DatasetVersion.get_single_frame`](../references/hyperdataset/hyperdatasetversion.md#datasetversionget_single_frame)
- method
+* Access the SingleFrame by calling [`DatasetVersion.get_single_frame()`](../references/hyperdataset/hyperdatasetversion.md#datasetversionget_single_frame)
* Make changes to the frame
* Update the frame in a DatasetVersion using the [`DatasetVersion.update_frames`](../references/hyperdataset/hyperdatasetversion.md#update_frames)
method.
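A hedged sketch of that get-modify-update flow; the frame ID and the dataset/version names are placeholders, and the metadata edit is just one example of "making a change":

```python
from allegroai import DatasetVersion

# Access the SingleFrame (placeholder ID; illustrative dataset / version names)
frame = DatasetVersion.get_single_frame(
    frame_id="<frame_id>",
    dataset_name="MyDataset",
    version_name="SingleFrame Example",
)

# Make a change to the frame (here: add a metadata key)
frame.metadata["reviewed"] = True

# Update the frame in the DatasetVersion
myDatasetVersion = DatasetVersion.get_version(
    dataset_name="MyDataset", version_name="SingleFrame Example"
)
myDatasetVersion.update_frames([frame])
```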
@@ -327,8 +325,7 @@ myDatasetVersion.update_frames(frames)
### Deleting Frames
-To delete a SingleFrame, use the [`DatasetVersion.delete_frames`](../references/hyperdataset/hyperdatasetversion.md#delete_frames)
-method.
+To delete a SingleFrame, use [`DatasetVersion.delete_frames()`](../references/hyperdataset/hyperdatasetversion.md#delete_frames).
```python
frames = []
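# (A hedged sketch of how the delete flow might continue; the frame ID and the
#  dataset / version names are placeholders, and myDatasetVersion is assumed to be
#  the DatasetVersion object used earlier on this page.)
frame = DatasetVersion.get_single_frame(
    frame_id="<frame_id>",
    dataset_name="MyDataset",
    version_name="SingleFrame Example",
)
frames.append(frame)
myDatasetVersion.delete_frames(frames)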