From 6bd34ba260cc14ad9ce090ce6fc2eb0b9bdd5da4 Mon Sep 17 00:00:00 2001
From: pollfly <75068813+pollfly@users.noreply.github.com>
Date: Sun, 27 Aug 2023 10:23:06 +0300
Subject: [PATCH] Small edits (#658)

---
 docs/configs/clearml_conf.md                         | 2 +-
 docs/deploying_clearml/clearml_server_aws_ec2_ami.md | 2 +-
 docs/integrations/monai.md                           | 2 +-
 docs/pipelines/pipelines.md                          | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/configs/clearml_conf.md b/docs/configs/clearml_conf.md
index 3442244e..f7758e85 100644
--- a/docs/configs/clearml_conf.md
+++ b/docs/configs/clearml_conf.md
@@ -151,7 +151,7 @@ Compatible with Docker versions 0.6.5 and above

 **`agent.docker_install_opencv_libs`** (*bool*)

-* Install the required packages for opencv libraries (libsm6 libxext6 libxrender-dev libglib2.0-0), for backwards
+* Install the required packages for opencv libraries (`libsm6 libxext6 libxrender-dev libglib2.0-0`), for backwards
   compatibility reasons. Change to `false` to skip installation and decrease docker spin-up time.

 ---
diff --git a/docs/deploying_clearml/clearml_server_aws_ec2_ami.md b/docs/deploying_clearml/clearml_server_aws_ec2_ami.md
index 6f7e2ca7..4969618e 100644
--- a/docs/deploying_clearml/clearml_server_aws_ec2_ami.md
+++ b/docs/deploying_clearml/clearml_server_aws_ec2_ami.md
@@ -24,7 +24,7 @@ By default, ClearML Server deploys as an open network. To restrict ClearML Serve
 in the [Security](clearml_server_security.md) page.
 :::

-The minimum recommended amount of RAM is 8 GB. For example, a t3.large or t3a.large EC2 instance type would accommodate the recommended RAM size.
+The minimum recommended amount of RAM is 8 GB. For example, a `t3.large` or `t3a.large` EC2 instance type would accommodate the recommended RAM size.

 **To launch a ClearML Server AWS community AMI**, use one of the [ClearML Server AWS community AMIs](#clearml-server-aws-community-amis) and see:

diff --git a/docs/integrations/monai.md b/docs/integrations/monai.md
index 96b18d33..cd3009c6 100644
--- a/docs/integrations/monai.md
+++ b/docs/integrations/monai.md
@@ -14,7 +14,7 @@ and [`ModelCheckpoint`](#modelcheckpoint).
 ## ClearMLImageHandler and ClearMLStatsHandler

 Use the `ClearMLImageHandler` and the `ClearMLStatsHandler` to log images and metrics respectively to ClearML.
-`ClearMLImageHandler` extends all functionality from [`TensorBoardImageHandler`](https://docs.monai.io/en/latest/handlers.html#monai.handlers.TensorBoardImageHandler,
+`ClearMLImageHandler` extends all functionality from [`TensorBoardImageHandler`](https://docs.monai.io/en/latest/handlers.html#monai.handlers.TensorBoardImageHandler),
 used for visualizing images, labels, and outputs. `ClearMLStatsHandler` extends all functionality from [`TensorBoardStatsHandler`](https://docs.monai.io/en/latest/handlers.html#monai.handlers.TensorBoardStatsHandler),
 which is used to define a set of Ignite Event handlers for TensorBoard logic. ClearML automatically captures all TensorBoard outputs.

diff --git a/docs/pipelines/pipelines.md b/docs/pipelines/pipelines.md
index 8a8e2171..200cc6de 100644
--- a/docs/pipelines/pipelines.md
+++ b/docs/pipelines/pipelines.md
@@ -35,7 +35,7 @@ example of a pipeline with concurrent steps.

 ClearML supports multiple modes for pipeline execution:
 * **Remote Mode** (default) - In this mode, the pipeline controller logic is executed through a designated queue, and
   all the pipeline steps are launched remotely through their respective queues.
   Since each task is executed independently,
-  it can have control over its git repository (if needed), required python packages and specific container to be used.
+  it can have control over its git repository (if needed), required python packages, and the specific container to use.
 * **Local Mode** - In this mode, the pipeline is executed locally, and the steps are executed as sub-processes. Each
   subprocess uses the exact same Python environment as the main pipeline logic.
 * **Debugging Mode** (for PipelineDecorator) - In this mode, the entire pipeline is executed locally, with the pipeline
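
For reference on the three execution modes described in the pipelines.md hunk above, here is a minimal sketch of how a mode is typically selected with ClearML's `PipelineDecorator`. The step body and names (`prepare_data`, the `examples` project, the `default` queue) are illustrative assumptions, not part of this patch:

```python
# Sketch: choosing a pipeline execution mode with PipelineDecorator.
from clearml import PipelineDecorator


@PipelineDecorator.component(return_values=["total"], cache=True)
def prepare_data(limit):
    # Remote Mode: runs as its own task on an agent; Local Mode: runs as a
    # sub-process; Debugging Mode: called as a plain Python function.
    return sum(range(limit))


@PipelineDecorator.pipeline(name="demo pipeline", project="examples", version="0.1")
def pipeline_logic(limit=10):
    total = prepare_data(limit)
    print(f"total: {total}")


if __name__ == "__main__":
    # Remote Mode (default): controller and steps go through agent queues, e.g.:
    # PipelineDecorator.set_default_execution_queue("default")

    # Local Mode: controller runs in this process, steps run as sub-processes:
    PipelineDecorator.run_locally()

    # Debugging Mode: everything runs here as regular function calls:
    # PipelineDecorator.debug_pipeline()

    pipeline_logic(limit=10)
```

Switching modes only changes the call made before invoking `pipeline_logic()`; the pipeline and component definitions stay the same.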
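Similarly, for the handlers touched in the monai.md hunk, a sketch of attaching `ClearMLStatsHandler` to an Ignite engine. The project/task names are hypothetical, and the assumption that TensorBoard-handler keyword arguments (`tag_name`, `output_transform`) pass through unchanged is based on the handler extending `TensorBoardStatsHandler`; consult the MONAI handlers reference linked in the hunk for exact signatures:

```python
# Sketch: logging Ignite training metrics to ClearML through MONAI's handler.
from ignite.engine import Engine
from monai.handlers import ClearMLStatsHandler


def train_step(engine, batch):
    # Placeholder step; a real one would run the forward/backward passes.
    return {"loss": 0.0}


trainer = Engine(train_step)

# ClearMLStatsHandler extends TensorBoardStatsHandler, so the TensorBoard
# scalars it writes are captured by ClearML automatically. The keyword
# arguments below are assumptions for illustration.
ClearMLStatsHandler(
    project_name="examples",                   # hypothetical ClearML project
    task_name="monai-demo",                    # hypothetical task name
    tag_name="train_loss",                     # scalar tag, as in TensorBoardStatsHandler
    output_transform=lambda out: out["loss"],  # pull the loss from the step output
).attach(trainer)

trainer.run([[0]], max_epochs=1)  # tiny dummy data iterable to drive one epoch
```

`ClearMLImageHandler` is attached the same way and additionally takes the image-logging arguments of `TensorBoardImageHandler`.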