diff --git a/docs/clearml_agent.md b/docs/clearml_agent.md
index 6619cc7a..55af6e61 100644
--- a/docs/clearml_agent.md
+++ b/docs/clearml_agent.md
@@ -2,12 +2,12 @@
title: ClearML Agent
---
-**ClearML Agent** is a virtual environment and execution manager for DL / ML solutions on GPU machines. It integrates with the **ClearML Python Package** and **ClearML Server** to provide a full AI cluster solution.
+**ClearML Agent** is a virtual environment and execution manager for DL / ML solutions on GPU machines. It integrates with the **ClearML Python Package** and ClearML Server to provide a full AI cluster solution.
Its main focus is around:
- Reproducing experiments, including their complete environments.
- Scaling workflows on multiple target machines.
-**ClearML Agent** executes an experiment or other workflow by reproducing the state of the code from the original machine
+ClearML Agent executes an experiment or other workflow by reproducing the state of the code from the original machine
to a remote machine.

diff --git a/docs/deploying_clearml/clearml_server.md b/docs/deploying_clearml/clearml_server.md
index 419c33c6..79aff9e2 100644
--- a/docs/deploying_clearml/clearml_server.md
+++ b/docs/deploying_clearml/clearml_server.md
@@ -3,7 +3,7 @@ title: ClearML Server
---
## What is ClearML Server?
-The **ClearML Server** is the backend service infrastructure for ClearML. It allows multiple users to collaborate and
+The ClearML Server is the backend service infrastructure for ClearML. It allows multiple users to collaborate and
manage their experiments by working seamlessly with the ClearML Python package and [ClearML Agent](../clearml_agent.md).
ClearML Server is composed of the following:
diff --git a/docs/deploying_clearml/clearml_server_aws_ec2_ami.md b/docs/deploying_clearml/clearml_server_aws_ec2_ami.md
index 449fbb13..baf7eb93 100644
--- a/docs/deploying_clearml/clearml_server_aws_ec2_ami.md
+++ b/docs/deploying_clearml/clearml_server_aws_ec2_ami.md
@@ -2,7 +2,7 @@
title: AWS EC2 AMIs
---
-Deployment of **ClearML Server** on AWS is easily performed using AWS AMIs, which are available in the AWS community AMI catalog.
+Deployment of ClearML Server on AWS is easily performed using AWS AMIs, which are available in the AWS community AMI catalog.
The [ClearML Server community AMIs](#clearml-server-aws-community-amis) are configured by default without authentication
to allow quick access and onboarding.
@@ -12,7 +12,7 @@ best matches the workflow.
For information about upgrading a ClearML Server in an AWS instance, see [here](upgrade_server_aws_ec2_ami.md).
:::important
-If ClearML Server is being reinstalled, we recommend clearing browser cookies for ClearML Server. For example,
+If ClearML Server is being reinstalled, clearing browser cookies for ClearML Server is recommended. For example,
for Firefox, go to Developer Tools > Storage > Cookies, and for Chrome, go to Developer Tools > Application > Cookies,
and delete all cookies under the ClearML Server URL.
:::
@@ -20,7 +20,7 @@ and delete all cookies under the ClearML Server URL.
## Launching
:::warning
-By default, **ClearML Server** deploys as an open network. To restrict **ClearML Server** access, follow the instructions
+By default, ClearML Server deploys with unrestricted access. To restrict ClearML Server access, follow the instructions
in the [Security](clearml_server_security.md) page.
:::
@@ -34,7 +34,7 @@ and see:
## Accessing ClearML Server
-Once deployed, **ClearML Server** exposes the following services:
+Once deployed, ClearML Server exposes the following services:
* Web server on `TCP port 8080`
* API server on `TCP port 8008`
diff --git a/docs/deploying_clearml/clearml_server_config.md b/docs/deploying_clearml/clearml_server_config.md
index ee57408a..ba43b76a 100644
--- a/docs/deploying_clearml/clearml_server_config.md
+++ b/docs/deploying_clearml/clearml_server_config.md
@@ -6,7 +6,7 @@ title: Configuring ClearML Server
This documentation page applies to deploying your own open source ClearML Server. It does not apply to ClearML Hosted Service users.
:::
-This page describes the **ClearML Server** [deployment](#clearml-server-deployment-configuration) and [feature](#clearml-server-feature-configurations) configurations. Namely, it contains instructions on how to configure **ClearML Server** for:
+This page describes the ClearML Server [deployment](#clearml-server-deployment-configuration) and [feature](#clearml-server-feature-configurations) configurations. Namely, it contains instructions on how to configure ClearML Server for:
* [Sub-domains and load balancers](#sub-domains-and-load-balancers) - An AWS load balancing example
* [Opening Elasticsearch, MongoDB, and Redis for External Access](#opening-elasticsearch-mongodb-and-redis-for-external-access)
@@ -18,12 +18,12 @@ This page describes the **ClearML Server** [deployment](#clearml-server-deployme
For all configuration options, see the [ClearML Configuration Reference](../configs/clearml_conf.md) page.
:::important
-We recommend using the latest version of **ClearML Server**.
+Using the latest version of ClearML Server is recommended.
:::
## ClearML Server Deployment Configuration
-**ClearML Server** supports two deployment configurations: single IP (domain) and sub-domains.
+ClearML Server supports two deployment configurations: single IP (domain) and sub-domains.
### Single IP (Domain) Configuration
@@ -41,8 +41,8 @@ Sub-domain configuration with default http/s ports (`80` or `443`):
* API service on sub-domain: `api.*.*`
* File storage service on sub-domain: `files.*.*`
-When [configuring sub-domains](#sub-domains-and-load-balancers) for **ClearML Server**, they will map to the **ClearML Server**'s
-internally configured ports for the Dockers. As a result, **ClearML Server** Dockers remain accessible if, for example,
+When [configuring sub-domains](#sub-domains-and-load-balancers) for ClearML Server, they will map to the ClearML Server's
+internally configured ports for the Dockers. As a result, ClearML Server Dockers remain accessible if, for example,
some type of port forwarding is implemented.
:::important
@@ -59,11 +59,11 @@ Accessing the **ClearML Web UI** with `app.clearml.mydomain.com` will automatica
## ClearML Server Feature Configurations
-**ClearML Server** features can be configured using either configuration files or environment variables.
+ClearML Server features can be configured using either configuration files or environment variables.
### Configuration Files
-The **ClearML Server** uses the following configuration files:
+The ClearML Server uses the following configuration files:
* `apiserver.conf`
* `hosts.conf`
@@ -71,7 +71,7 @@ The **ClearML Server** uses the following configuration files:
* `secure.conf`
* `services.conf`
-When starting up, the **ClearML Server** will look for these configuration files, in the `/opt/clearml/config` directory
+When starting up, the ClearML Server will look for these configuration files in the `/opt/clearml/config` directory
(this path can be modified using the `CLEARML_CONFIG_DIR` environment variable).
The default configuration files are in the [clearml-server](https://github.com/allegroai/clearml-server/tree/master/apiserver/config/default) repository.
@@ -91,7 +91,7 @@ tasks {
### Environment Variables
-The **ClearML Server** supports several fixed environment variables that affect its behavior,
+The ClearML Server supports several fixed environment variables that affect its behavior,
as well as dynamic environment variables that can be used to override any configuration file setting.
#### Fixed Environment Variables
@@ -151,9 +151,9 @@ the default secret for the system's apiserver component can be overridden by set
### Sub-domains and Load Balancers
-To illustrate this configuration, we provide the following example based on AWS load balancing:
+The following example, which is based on AWS load balancing, demonstrates the configuration:
-1. In the **ClearML Server** `/opt/clearml/config/apiserver.conf` file, add the following `auth.cookies` section:
+1. In the ClearML Server `/opt/clearml/config/apiserver.conf` file, add the following `auth.cookies` section:
auth {
cookies {
@@ -186,13 +186,13 @@ To illustrate this configuration, we provide the following example based on AWS
* Instances: make sure the load balancers are able to access the instances, using the relevant ports (Security
groups definitions).
-1. Restart **ClearML Server**.
+1. Restart ClearML Server.
### Opening Elasticsearch, MongoDB, and Redis for External Access
-For improved security, the ports for **ClearML Server** Elasticsearch, MongoDB, and Redis servers are not exposed by default;
+For improved security, the ports for ClearML Server Elasticsearch, MongoDB, and Redis servers are not exposed by default;
they are only open internally in the docker network. If external access is needed, open these ports (but make sure to
understand the security risks involved with doing so).
@@ -204,7 +204,7 @@ opening ports for external access.
To open external access to the Elasticsearch, MongoDB, and Redis ports:
-1. Shutdown **ClearML Server**. Execute the following command (which assumes the configuration file is in the environment path).
+1. Shut down ClearML Server. Execute the following command (which assumes the configuration file is in the environment path).
docker-compose down
@@ -225,7 +225,7 @@ To open external access to the Elasticsearch, MongoDB, and Redis ports:
ports:
- "6379:6379"
-1. Startup **ClearML Server**.
+1. Start up ClearML Server.
docker-compose -f docker-compose.yml pull
docker-compose -f docker-compose.yml up -d
@@ -234,14 +234,14 @@ To open external access to the Elasticsearch, MongoDB, and Redis ports:
### Web Login Authentication
-Web login authentication can be configured in the **ClearML Server** in order to permit only users provided
+Web login authentication can be configured in the ClearML Server in order to permit only users provided
with credentials to access the ClearML system. Those credentials are a username and password.
-Without web login authentication, **ClearML Server** does not restrict access (by default).
+Without web login authentication, ClearML Server does not restrict access (by default).
**To add web login authentication to the ClearML Server:**
-1. In **ClearML Server** `/opt/clearml/config/apiserver.conf`, add the `auth.fixed_users` section and specify the users.
+1. In ClearML Server `/opt/clearml/config/apiserver.conf`, add the `auth.fixed_users` section and specify the users.
For example:
@@ -266,7 +266,7 @@ Without web login authentication, **ClearML Server** does not restrict access (b
}
}
-1. Restart **ClearML Server**.
+1. Restart ClearML Server.
### Using Hashed Passwords
You can also use hashed passwords instead of plain-text passwords. To do that:
@@ -307,7 +307,7 @@ Modify the following settings for the watchdog:
**To configure the non-responsive watchdog for the ClearML Server:**
-1. In the **ClearML Server** `/opt/clearml/config/services.conf` file, add or edit the `tasks.non_responsive_tasks_watchdog`
+1. In the ClearML Server `/opt/clearml/config/services.conf` file, add or edit the `tasks.non_responsive_tasks_watchdog` section
and specify the watchdog settings.
For example:
@@ -324,7 +324,7 @@ Modify the following settings for the watchdog:
}
}
-1. Restart **ClearML Server**.
+1. Restart ClearML Server.
### Custom UI Context Menu Actions
diff --git a/docs/deploying_clearml/clearml_server_es7_migration.md b/docs/deploying_clearml/clearml_server_es7_migration.md
index 6ff6208f..08d18db4 100644
--- a/docs/deploying_clearml/clearml_server_es7_migration.md
+++ b/docs/deploying_clearml/clearml_server_es7_migration.md
@@ -11,7 +11,7 @@ In v0.16, the Elasticsearch subsystem of **Trains Server** was upgraded from ver
the migration of the database contents to accommodate the change in index structure across the different versions.
This page provides the instructions to carry out the migration process. Follow this process if using **Trains Server**
-version 0.15 or older and are upgrading to **ClearML Server**.
+version 0.15 or older and upgrading to ClearML Server.
The migration process makes use of a script that automatically performs the following:
@@ -24,7 +24,7 @@ The migration process makes use of a script that automatically performs the foll
:::warning
Once the migration process completes successfully, the data is no longer accessible to the older version of Trains Server,
-and **ClearML Server** needs to be installed.
+and ClearML Server needs to be installed.
:::
### Prerequisites
diff --git a/docs/deploying_clearml/clearml_server_gcp.md b/docs/deploying_clearml/clearml_server_gcp.md
index 872d7072..e2447e9b 100644
--- a/docs/deploying_clearml/clearml_server_gcp.md
+++ b/docs/deploying_clearml/clearml_server_gcp.md
@@ -2,23 +2,23 @@
title: Google Cloud Platform
---
-Deploy **ClearML Server** on the Google Cloud Platform (GCP) using one of the pre-built GCP Custom Images. ClearML
-provides custom images for each released version of **ClearML Server**. For a list of the pre-built custom images, see
+Deploy ClearML Server on the Google Cloud Platform (GCP) using one of the pre-built GCP Custom Images. ClearML
+provides custom images for each released version of ClearML Server. For a list of the pre-built custom images, see
[ClearML Server GCP Custom Image](#clearml-server-gcp-custom-image).
-After deploying **ClearML Server**, configure the **ClearML Python Package** for it, see [Configuring ClearML for ClearML Server](clearml_config_for_clearml_server.md).
+After deploying ClearML Server, configure the **ClearML Python Package** for it; see [Configuring ClearML for ClearML Server](clearml_config_for_clearml_server.md).
For information about upgrading ClearML server on GCP, see [here](upgrade_server_gcp.md).
:::important
-If **ClearML Server** is being reinstalled, we recommend clearing browser cookies for **ClearML Server**. For example,
+If ClearML Server is being reinstalled, clearing browser cookies for ClearML Server is recommended. For example,
for Firefox, go to Developer Tools > Storage > Cookies, and for Chrome, go to Developer Tools > Application > Cookies,
-and delete all cookies under the **ClearML Server** URL.
+and delete all cookies under the ClearML Server URL.
:::
## Default ClearML Server Service Ports
-After deploying **ClearML Server**, the services expose the following node ports:
+After deploying ClearML Server, the services expose the following node ports:
* Web server on `8080`
* API server on `8008`
@@ -34,11 +34,11 @@ The persistent storage configuration:
## Importing the Custom Image to your GCP account
-Before launching an instance using a **ClearML Server** GCP Custom Image, import the image to the custom images list.
+Before launching an instance using a ClearML Server GCP Custom Image, import the image to the custom images list.
:::note
-No upload of the image file is required. We provide links to image files stored in Google Storage.
+No upload of the image file is required. Links to image files stored in Google Storage are provided.
:::
@@ -49,7 +49,7 @@ No upload of the image file is required. We provide links to image files stored
1. In **Name**, specify a unique name for the image.
1. Optionally, specify an image family for the new image, or configure specific encryption settings for the image.
1. In the **Source** menu, select **Cloud Storage file**.
-1. Enter the **ClearML Server** image bucket path (see [ClearML Server GCP Custom Image](#clearml-server-gcp-custom-image)),
+1. Enter the ClearML Server image bucket path (see [ClearML Server GCP Custom Image](#clearml-server-gcp-custom-image)),
for example: `allegro-files/clearml-server/clearml-server.tar.gz`.
1. Click **Create** to import the image. The process can take several minutes depending on the size of the boot disk image.
@@ -60,13 +60,13 @@ For more information see [Import the image to your custom images list](https://c
:::warning
-By default, **ClearML Server** launches with unrestricted access. To restrict **ClearML Server** access, follow the
+By default, ClearML Server launches with unrestricted access. To restrict ClearML Server access, follow the
instructions in the [Security](clearml_server_security.md) page.
:::
-To launch **ClearML Server** using a GCP Custom Image, see the [Manually importing virtual disks](https://cloud.google.com/compute/docs/import/import-existing-image#overview) in the "Google Cloud Storage" documentation, [Compute Engine documentation](https://cloud.google.com/compute/docs). For more information on Custom Images, see [Custom Images](https://cloud.google.com/compute/docs/images#custom_images) in the "Compute Engine documentation".
+To launch ClearML Server using a GCP Custom Image, see [Manually importing virtual disks](https://cloud.google.com/compute/docs/import/import-existing-image#overview) in the [Compute Engine documentation](https://cloud.google.com/compute/docs). For more information on Custom Images, see [Custom Images](https://cloud.google.com/compute/docs/images#custom_images) in the Compute Engine documentation.
-The minimum requirements for **ClearML Server** are:
+The minimum requirements for ClearML Server are:
* 2 vCPUs
* 7.5GB RAM
@@ -106,7 +106,7 @@ If the data and the configuration need to be restored:
## ClearML Server GCP Custom Image
-The following section contains a list of Custom Image URLs (exported in different formats) for each released **ClearML Server** version.
+This section contains a list of Custom Image URLs (exported in different formats) for each released ClearML Server version.
### Latest Version - v1.3.1
diff --git a/docs/deploying_clearml/clearml_server_kubernetes_helm.md b/docs/deploying_clearml/clearml_server_kubernetes_helm.md
index 799bd2fd..bced9e80 100644
--- a/docs/deploying_clearml/clearml_server_kubernetes_helm.md
+++ b/docs/deploying_clearml/clearml_server_kubernetes_helm.md
@@ -5,7 +5,7 @@ title: Kubernetes
To upgrade an existing ClearML Server Kubernetes deployment, see [here](upgrade_server_kubernetes_helm.md).
:::info
-If ClearML Server is being reinstalled, we recommend clearing browser cookies for ClearML Server. For example,
+If ClearML Server is being reinstalled, clearing browser cookies for ClearML Server is recommended. For example,
for Firefox, go to Developer Tools > Storage > Cookies, and for Chrome, go to Developer Tools > Application > Cookies,
and delete all cookies under the ClearML Server URL.
:::
@@ -13,7 +13,7 @@ and delete all cookies under the ClearML Server URL.
## Prerequisites
* Set up a Kubernetes cluster - For setting up Kubernetes on various platforms refer to the Kubernetes [getting started guide](https://kubernetes.io/docs/setup).
-* Set up a single node LOCAL Kubernetes on laptop / desktop - For setting up Kubernetes on your laptop/desktop, we suggest [kind](https://kind.sigs.k8s.io).
+* Set up a single node LOCAL Kubernetes on laptop / desktop - For setting up Kubernetes on your laptop/desktop, [kind](https://kind.sigs.k8s.io) is recommended.
* Install `helm` - Helm is a tool for managing Kubernetes charts. Charts are packages of pre-configured Kubernetes resources.
To install Helm, refer to the [Helm installation guide](https://helm.sh/docs/using_helm.html#installing-helm) in the Helm documentation.
Ensure that the `helm` binary is in the PATH of your shell.
diff --git a/docs/deploying_clearml/clearml_server_linux_mac.md b/docs/deploying_clearml/clearml_server_linux_mac.md
index 08d101c9..c97fe116 100644
--- a/docs/deploying_clearml/clearml_server_linux_mac.md
+++ b/docs/deploying_clearml/clearml_server_linux_mac.md
@@ -2,7 +2,7 @@
title: Linux and macOS
---
-Deploy the **ClearML Server** in Linux or macOS using the pre-built Docker image.
+Deploy the ClearML Server in Linux or macOS using the pre-built Docker image.
For ClearML docker images, including previous versions, see [https://hub.docker.com/r/allegroai/clearml](https://hub.docker.com/r/allegroai/clearml).
However, pulling the ClearML Docker image directly is not required. We provide a docker-compose YAML file that does this.
@@ -11,7 +11,7 @@ The docker-compose file is included in the instructions on this page.
For information about upgrading ClearML Server in Linux or macOS, see [here](upgrade_server_linux_mac.md)
:::important
-If ClearML Server is being reinstalled, we recommend clearing browser cookies for ClearML Server. For example,
+If ClearML Server is being reinstalled, clearing browser cookies for ClearML Server is recommended. For example,
for Firefox, go to Developer Tools > Storage > Cookies, and for Chrome, go to Developer Tools > Application > Cookies,
and delete all cookies under the ClearML Server URL.
:::
diff --git a/docs/deploying_clearml/clearml_server_security.md b/docs/deploying_clearml/clearml_server_security.md
index 261a7697..419a61e8 100644
--- a/docs/deploying_clearml/clearml_server_security.md
+++ b/docs/deploying_clearml/clearml_server_security.md
@@ -6,37 +6,37 @@ title: Securing ClearML Server
This documentation page applies to deploying your own open source ClearML Server. It does not apply to ClearML Hosted Service users.
:::
-To ensure deployment is properly secure, we recommend you follow the following best practices.
+To ensure deployment is properly secure, follow these best practices.
## Network Security
If the deployment is in an open network that allows public access, only allow access to the specific ports used by
-**ClearML Server** (see [ClearML Server configurations](clearml_server_config.md#clearml-server-deployment-configuration)).
+ClearML Server (see [ClearML Server configurations](clearml_server_config.md#clearml-server-deployment-configuration)).
If HTTPS access is configured for the instance, allow access to port `443`.
-For improved security, the ports for **ClearML Server** Elasticsearch, MongoDB, and Redis servers are not exposed by
+For improved security, the ports for ClearML Server Elasticsearch, MongoDB, and Redis servers are not exposed by
default; they are only open internally in the docker network.
## User Access Security
-Configure **ClearML Server** to use Web Login authentication, which requires a username and password for user access
+Configure ClearML Server to use Web Login authentication, which requires a username and password for user access
(see [Web Login Authentication](clearml_server_config.md#web-login-authentication)).
## File Server Security
By default, the File Server is not secured even if [Web Login Authentication](clearml_server_config.md#web-login-authentication)
-has been configured. We recommend using an [object storage solution](../integrations/storage.md) that has built-in security.
+has been configured. Using an [object storage solution](../integrations/storage.md) that has built-in security is recommended.
## Server Credentials and Secrets
-By default, **ClearML Server** comes with default values that are designed to allow to set it up quickly and to start working
+By default, ClearML Server comes with default values that are designed to allow you to set it up quickly and to start working
with the ClearML SDK.
However, this also means that the **server must be secured** by either preventing any external access, or by changing
defaults so that the server's credentials are not publicly known.
-The **ClearML Server** default secrets can be found [here](https://github.com/allegroai/clearml-server/blob/master/apiserver/config/default/secure.conf), and can be changed using the `secure.conf` configuration file or using environment variables
+The ClearML Server default secrets can be found [here](https://github.com/allegroai/clearml-server/blob/master/apiserver/config/default/secure.conf), and can be changed using the `secure.conf` configuration file or using environment variables
(see [ClearML Server Feature Configurations](clearml_server_config.md#clearml-server-feature-configurations)).
Specifically, the relevant settings are:
diff --git a/docs/deploying_clearml/clearml_server_win.md b/docs/deploying_clearml/clearml_server_win.md
index 7b5fb253..67d75b40 100644
--- a/docs/deploying_clearml/clearml_server_win.md
+++ b/docs/deploying_clearml/clearml_server_win.md
@@ -2,21 +2,21 @@
title: Windows 10
---
-For Windows, we recommend launching the pre-built Docker image on a Linux virtual machine (see [Deploying ClearML Server: Linux or macOS](clearml_server_linux_mac.md)).
-However, **ClearML Server** can be launched on Windows 10, using Docker Desktop for Windows (see the Docker [System Requirements](https://docs.docker.com/docker-for-windows/install/#system-requirements)).
+For Windows, launching the pre-built Docker image on a Linux virtual machine is recommended (see [Deploying ClearML Server: Linux or macOS](clearml_server_linux_mac.md)).
+However, ClearML Server can be launched on Windows 10, using Docker Desktop for Windows (see the Docker [System Requirements](https://docs.docker.com/docker-for-windows/install/#system-requirements)).
-For information about upgrading **ClearML Server** on Windows, see [here](upgrade_server_win.md).
+For information about upgrading ClearML Server on Windows, see [here](upgrade_server_win.md).
:::important
-If **ClearML Server** is being reinstalled, we recommend clearing browser cookies for **ClearML Server**. For example,
+If ClearML Server is being reinstalled, clearing browser cookies for ClearML Server is recommended. For example,
for Firefox, go to Developer Tools > Storage > Cookies, and for Chrome, go to Developer Tools > Application > Cookies,
-and delete all cookies under the **ClearML Server** URL.
+and delete all cookies under the ClearML Server URL.
:::
## Deploying
:::warning
-By default, **ClearML Server** launches with unrestricted access. To restrict **ClearML Server** access, follow the instructions in the [Security](clearml_server_security.md) page.
+By default, ClearML Server launches with unrestricted access. To restrict ClearML Server access, follow the instructions in the [Security](clearml_server_security.md) page.
:::
:::info Memory Requirement
@@ -38,7 +38,7 @@ Deploying the server requires a minimum of 4 GB of memory, 8 GB is recommended.
1. Click **Apply**.
-1. Remove any previous installation of **ClearML Server**.
+1. Remove any previous installation of ClearML Server.
**This clears all existing ClearML SDK databases.**
@@ -50,7 +50,7 @@ Deploying the server requires a minimum of 4 GB of memory, 8 GB is recommended.
mkdir c:\opt\clearml\data
mkdir c:\opt\clearml\logs
-1. Save the **ClearML Server** docker-compose YAML file.
+1. Save the ClearML Server docker-compose YAML file.
curl https://raw.githubusercontent.com/allegroai/clearml-server/master/docker/docker-compose-win10.yml -o c:\opt\clearml\docker-compose-win10.yml
@@ -62,7 +62,7 @@ Deploying the server requires a minimum of 4 GB of memory, 8 GB is recommended.
## Port Mapping
-After deploying **ClearML Server**, the services expose the following node ports:
+After deploying ClearML Server, the services expose the following node ports:
* Web server on port `8080`
* API server on port `8008`
diff --git a/docs/deploying_clearml/upgrade_server_aws_ec2_ami.md b/docs/deploying_clearml/upgrade_server_aws_ec2_ami.md
index 3b4ef4da..ccbf6d0b 100644
--- a/docs/deploying_clearml/upgrade_server_aws_ec2_ami.md
+++ b/docs/deploying_clearml/upgrade_server_aws_ec2_ami.md
@@ -6,12 +6,12 @@ title: AWS EC2 AMIs
For upgrade purposes, the terms **Trains Server** and **ClearML Server** are interchangeable.
:::
-The sections below contain the steps to upgrade **ClearML Server** on the [same AWS instance](#upgrading-on-the-same-aws-instance), and
+The sections below contain the steps to upgrade ClearML Server on the [same AWS instance](#upgrading-on-the-same-aws-instance), and
to upgrade and migrate to a [new AWS instance](#upgrading-and-migrating-to-a-new-aws-instance).
### Upgrading on the Same AWS Instance
-This section contains the steps to upgrade **ClearML Server** on the same AWS instance.
+This section contains the steps to upgrade ClearML Server on the same AWS instance.
:::warning
Some legacy **Trains Server** AMIs provided an auto-upgrade on restart capability. This functionality is now deprecated.
@@ -19,7 +19,7 @@ Some legacy **Trains Server** AMIs provided an auto-upgrade on restart capabilit
**To upgrade your ClearML Server AWS AMI:**
-1. Shutdown the **ClearML Server** executing the following command (which assumes the configuration file is in the environment path).
+1. Shut down the ClearML Server by executing the following command (which assumes the configuration file is in the environment path).
docker-compose -f /opt/clearml/docker-compose.yml down
@@ -27,8 +27,8 @@ Some legacy **Trains Server** AMIs provided an auto-upgrade on restart capabilit
docker-compose -f /opt/trains/docker-compose.yml down
-1. We recommend [backing up your data](clearml_server_aws_ec2_ami.md#backing-up-and-restoring-data-and-configuration) and,
- if your configuration folder is not empty, backing up your configuration.
+1. [Backing up your data](clearml_server_aws_ec2_ami.md#backing-up-and-restoring-data-and-configuration) is recommended,
+   as is backing up your configuration if your configuration folder is not empty.
1. If upgrading from ClearML Server version older than 1.2, you need to migrate your data before upgrading your server. See instructions [here](clearml_server_mongo44_migration.md).
If upgrading from Trains Server version 0.15 or older, a data migration is required before continuing this upgrade. See instructions [here](clearml_server_es7_migration.md).
@@ -39,18 +39,18 @@ If upgrading from Trains Server version 0.15 or older, a data migration is requi
sudo curl https://raw.githubusercontent.com/allegroai/clearml-server/master/docker/docker-compose.yml -o /opt/clearml/docker-compose.yml
-1. Startup **ClearML Server**. This automatically pulls the latest **ClearML Server** build.
+1. Start up ClearML Server. This automatically pulls the latest ClearML Server build.
docker-compose -f /opt/clearml/docker-compose.yml pull
docker-compose -f docker-compose.yml up -d
### Upgrading and Migrating to a New AWS Instance
-This section contains the steps to upgrade **ClearML Server** on the new AWS instance.
+This section contains the steps to upgrade ClearML Server on the new AWS instance.
**To migrate and to upgrade your ClearML Server AWS AMI:**
-1. Shutdown **ClearML Server**. Executing the following command (which assumes the configuration file is in the environment path).
+1. Shut down ClearML Server by executing the following command (which assumes the configuration file is in the environment path).
docker-compose down
@@ -63,7 +63,7 @@ This section contains the steps to upgrade **ClearML Server** on the new AWS ins
1. On the new AWS instance, [restore your data](clearml_server_aws_ec2_ami.md#backing-up-and-restoring-data-and-configuration) and, if the configuration folder is not empty, restore the
configuration.
-1. Startup **ClearML Server**. This automatically pulls the latest **ClearML Server** build.
+1. Start up ClearML Server. This automatically pulls the latest ClearML Server build.
docker-compose -f docker-compose.yml pull
docker-compose -f docker-compose.yml up -d
diff --git a/docs/deploying_clearml/upgrade_server_gcp.md b/docs/deploying_clearml/upgrade_server_gcp.md
index e4f4f1ac..7ddeaffe 100644
--- a/docs/deploying_clearml/upgrade_server_gcp.md
+++ b/docs/deploying_clearml/upgrade_server_gcp.md
@@ -18,14 +18,14 @@ title: Google Cloud Platform
sudo mv /opt/trains /opt/clearml
1. If upgrading from ClearML Server version older than 1.2, you need to migrate your data before upgrading your server. See instructions [here](clearml_server_mongo44_migration.md).
-1. We recommend [backing up data](clearml_server_gcp.md#backing-up-and-restoring-data-and-configuration) and, if the configuration folder is
+1. [Backing up data](clearml_server_gcp.md#backing-up-and-restoring-data-and-configuration) is recommended and, if the configuration folder is
not empty, backing up the configuration.
1. Download the latest `docker-compose.yml` file.
curl https://raw.githubusercontent.com/allegroai/clearml-server/master/docker/docker-compose.yml -o /opt/clearml/docker-compose.yml
-1. Startup **ClearML Server**. This automatically pulls the latest **ClearML Server** build.
+1. Start up ClearML Server. This automatically pulls the latest ClearML Server build.
docker-compose -f /opt/clearml/docker-compose.yml pull
docker-compose -f /opt/clearml/docker-compose.yml up -d
diff --git a/docs/deploying_clearml/upgrade_server_kubernetes_helm.md b/docs/deploying_clearml/upgrade_server_kubernetes_helm.md
index b8a36448..cdb140e4 100644
--- a/docs/deploying_clearml/upgrade_server_kubernetes_helm.md
+++ b/docs/deploying_clearml/upgrade_server_kubernetes_helm.md
@@ -20,5 +20,6 @@ See the [clearml-helm-charts repository](https://github.com/allegroai/clearml-he
to view the up-to-date charts.
:::tip
-When changing values, make sure to set the chart version (`--version`) to avoid a chart update. We recommend keeping separate procedures between version and value updates to separate potential concerns.
+When changing values, make sure to set the chart version (`--version`) to avoid a chart update. Keeping version updates
+and value updates as separate procedures is recommended, so that each change addresses a single concern.
:::
diff --git a/docs/deploying_clearml/upgrade_server_linux_mac.md b/docs/deploying_clearml/upgrade_server_linux_mac.md
index b7528647..a208ed58 100644
--- a/docs/deploying_clearml/upgrade_server_linux_mac.md
+++ b/docs/deploying_clearml/upgrade_server_linux_mac.md
@@ -9,7 +9,7 @@ title: Linux or macOS
For Linux only, if upgrading from Trains Server v0.14 or older, configure the ClearML Agent Services.
- * If ``CLEARML_HOST_IP`` is not provided, then **ClearML Agent Services** uses the external public address of the **ClearML Server**.
+ * If ``CLEARML_HOST_IP`` is not provided, then **ClearML Agent Services** uses the external public address of the ClearML Server.
* If ``CLEARML_AGENT_GIT_USER`` / ``CLEARML_AGENT_GIT_PASS`` are not provided, then **ClearML Agent Services** can't access any private repositories for running service tasks.
@@ -37,7 +37,7 @@ For backwards compatibility, the environment variables ``TRAINS_HOST_IP``, ``TRA
1. If upgrading from ClearML Server version older than 1.2, you need to migrate your data before upgrading your server. See instructions [here](clearml_server_mongo44_migration.md).
-1. We recommend [backing up data](clearml_server_linux_mac.md#backing-up-and-restoring-data-and-configuration) and, if the configuration folder is
+1. [Backing up data](clearml_server_linux_mac.md#backing-up-and-restoring-data-and-configuration) is recommended and, if the configuration folder is
not empty, backing up the configuration.
1. If upgrading from **Trains Server** to **ClearML Server**, rename `/opt/trains` and its subdirectories to `/opt/clearml`.
@@ -48,7 +48,7 @@ For backwards compatibility, the environment variables ``TRAINS_HOST_IP``, ``TRA
curl https://raw.githubusercontent.com/allegroai/clearml-server/master/docker/docker-compose.yml -o /opt/clearml/docker-compose.yml
-1. Startup **ClearML Server**. This automatically pulls the latest **ClearML Server** build.
+1. Start up ClearML Server. This automatically pulls the latest ClearML Server build.
docker-compose -f /opt/clearml/docker-compose.yml pull
docker-compose -f /opt/clearml/docker-compose.yml up -d
diff --git a/docs/deploying_clearml/upgrade_server_win.md b/docs/deploying_clearml/upgrade_server_win.md
index 8c1f1152..ae439293 100644
--- a/docs/deploying_clearml/upgrade_server_win.md
+++ b/docs/deploying_clearml/upgrade_server_win.md
@@ -8,7 +8,7 @@ title: Windows
1. Execute one of the following commands, depending upon the version that is being upgraded:
- * Upgrading **ClearML Server** version:
+ * Upgrading ClearML Server version:
docker-compose -f c:\opt\clearml\docker-compose-win10.yml down
@@ -20,7 +20,7 @@ title: Windows
1. If upgrading from ClearML Server version older than 1.2, you need to migrate your data before upgrading your server. See instructions [here](clearml_server_mongo44_migration.md).
-1. We recommend backing up data and, if the configuration folder is not empty, backing up the configuration.
+1. Backing up data is recommended, as is backing up the configuration if the configuration folder is not empty.
:::note
For example, if the configuration is in ``c:\opt\clearml``, then backup ``c:\opt\clearml\config`` and ``c:\opt\clearml\data``.
@@ -33,7 +33,7 @@ title: Windows
curl https://raw.githubusercontent.com/allegroai/clearml-server/master/docker/docker-compose-win10.yml -o c:\opt\clearml\docker-compose-win10.yml
-1. Startup **ClearML Server**. This automatically pulls the latest **ClearML Server** build.
+1. Start up ClearML Server. This automatically pulls the latest ClearML Server build.
docker-compose -f c:\opt\clearml\docker-compose-win10.yml pull
docker-compose -f c:\opt\clearml\docker-compose-win10.yml up -d
diff --git a/docs/faq.md b/docs/faq.md
index 7194fbd2..a398e59a 100644
--- a/docs/faq.md
+++ b/docs/faq.md
@@ -507,10 +507,10 @@ See [Storing Task Data Offline](guides/set_offline.md).
**The first log lines are missing from the experiment console tab. Where did they go?**
-Due to speed/optimization issues, we opted to display only the last several hundred log lines.
+For performance reasons, the console displays only the last several hundred log lines.
You can always download the full log as a file using the ClearML Web UI. In the ClearML Web UI > experiment
-info panel > RESULTS tab > CONSOLE sub-tab, use the *Download full log* feature.
+info panel > CONSOLE tab, use the *Download full log* feature.
@@ -636,7 +636,7 @@ see [ClearML Configuration Reference](configs/clearml_conf.md).
**When using PyCharm to remotely debug a machine, the Git repo is not detected. Do you have a solution?**
-Yes! Since this is such a common occurrence, we created a PyCharm plugin that allows a remote debugger to grab your local
+Yes! ClearML provides a PyCharm plugin that allows a remote debugger to grab your local
repository / commit ID. For detailed information about using the plugin, see the [ClearML PyCharm Plugin](guides/ide/integration_pycharm.md).
@@ -900,7 +900,7 @@ For detailed instructions, see [Modifying non-responsive Task watchdog settings]
**I did a reinstall. Why can't I create credentials in the Web-App (UI)?**
-The issue is likely your browser cookies for ClearML Server. We recommend clearing your browser cookies for ClearML Server.
+The issue is likely your browser cookies for ClearML Server. Clearing them is recommended.
For example:
* For Firefox - go to Developer Tools > Storage > Cookies > delete all cookies under the ClearML Server URL.
* For Chrome - Developer Tools > Application > Cookies > delete all cookies under the ClearML Server URL.
diff --git a/docs/guides/automation/task_piping.md b/docs/guides/automation/task_piping.md
index 78327da4..3da8fe4e 100644
--- a/docs/guides/automation/task_piping.md
+++ b/docs/guides/automation/task_piping.md
@@ -10,7 +10,7 @@ example demonstrates:
This example accomplishes a task pipe by doing the following:
-1. Creating the template Task which is named `Toy Base Task`. It must be stored in **ClearML Server** before instances of
+1. Creating the template Task, which is named `Toy Base Task`. It must be stored in ClearML Server before instances of
it can be created. To create it, run another ClearML example script, [toy_base_task.py](https://github.com/allegroai/clearml/blob/master/examples/automation/toy_base_task.py).
The template Task has a parameter dictionary, which is connected to the Task: `{'Example_Param': 1}`.
1. Back in `programmatic_orchestration.py`, creating a parameter dictionary, which is connected to the Task by calling [Task.connect](../../references/sdk/task.md#connect)
diff --git a/docs/guides/frameworks/autokeras/integration_autokeras.md b/docs/guides/frameworks/autokeras/integration_autokeras.md
index ffdad367..c4d2304d 100644
--- a/docs/guides/frameworks/autokeras/integration_autokeras.md
+++ b/docs/guides/frameworks/autokeras/integration_autokeras.md
@@ -33,7 +33,7 @@ from clearml import Task
task = Task.init(project_name="myProject", task_name="myExperiment")
```
-When the code runs, it initializes a Task in **ClearML Server**. A hyperlink to the experiment's log is output to the console.
+When the code runs, it initializes a Task in ClearML Server. A hyperlink to the experiment's log is output to the console.
CLEARML Task: created new task id=c1f1dc6cf2ee4ec88cd1f6184344ca4e
CLEARML results page: https://app.clear.ml/projects/1c7a45633c554b8294fa6dcc3b1f2d4d/experiments/c1f1dc6cf2ee4ec88cd1f6184344ca4e/output/log
diff --git a/docs/guides/frameworks/pytorch/notebooks/table/tabular_training_pipeline.md b/docs/guides/frameworks/pytorch/notebooks/table/tabular_training_pipeline.md
index 39370124..2c2770fa 100644
--- a/docs/guides/frameworks/pytorch/notebooks/table/tabular_training_pipeline.md
+++ b/docs/guides/frameworks/pytorch/notebooks/table/tabular_training_pipeline.md
@@ -269,7 +269,7 @@ By hovering over a step or path between nodes, you can view information about it
1. Run the pipeline controller one of the following two ways:
* Run the notebook [tabular_ml_pipeline.ipynb](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/table/tabular_ml_pipeline.ipynb).
- * Remotely execute the Task - If the Task `tabular training pipeline` which is associated with the project `Tabular Example` already exists in **ClearML Server**, clone it and enqueue it to execute.
+ * Remotely execute the Task - If the Task `tabular training pipeline`, which is associated with the project `Tabular Example`, already exists in ClearML Server, clone it and enqueue it to execute.
:::note
diff --git a/docs/guides/frameworks/pytorch/pytorch_distributed_example.md b/docs/guides/frameworks/pytorch/pytorch_distributed_example.md
index 5cd6fee9..1d4bdd9f 100644
--- a/docs/guides/frameworks/pytorch/pytorch_distributed_example.md
+++ b/docs/guides/frameworks/pytorch/pytorch_distributed_example.md
@@ -35,7 +35,9 @@ All of these artifacts appear in the main Task, **ARTIFACTS** **>** **OTHER**.
## Scalars
-We report loss to the main Task by calling the [Logger.report_scalar](../../../references/sdk/logger.md#report_scalar) method on `Task.current_task().get_logger`, which is the logger for the main Task. Since we call `Logger.report_scalar` with the same title (`loss`), but a different series name (containing the subprocess' `rank`), all loss scalar series are logged together.
+Report loss to the main Task by calling the [Logger.report_scalar](../../../references/sdk/logger.md#report_scalar) method
+on `Task.current_task().get_logger()`, which is the logger for the main Task. Since `Logger.report_scalar` is called with the
+same title (`loss`), but a different series name (containing the subprocess' `rank`), all loss scalar series are logged together.
Task.current_task().get_logger().report_scalar(
'loss', 'worker {:02d}'.format(dist.get_rank()), value=loss.item(), iteration=i)
diff --git a/docs/guides/ide/google_colab.md b/docs/guides/ide/google_colab.md
index 7be17b45..aa1c503a 100644
--- a/docs/guides/ide/google_colab.md
+++ b/docs/guides/ide/google_colab.md
@@ -7,7 +7,7 @@ compute provided by google.
Users can transform a Google Colab instance into an available resource in ClearML using [ClearML Agent](../../clearml_agent.md).
-In this tutorial, we will go over how to create a ClearML worker node in a Google Colab notebook. Once the worker is up
+This tutorial goes over how to create a ClearML worker node in a Google Colab notebook. Once the worker is up
and running, users can send Tasks to be executed on the Google Colab's HW.
## Prerequisites
diff --git a/docs/guides/optimization/hyper-parameter-optimization/examples_hyperparam_opt.md b/docs/guides/optimization/hyper-parameter-optimization/examples_hyperparam_opt.md
index 7e0d97b1..5df31757 100644
--- a/docs/guides/optimization/hyper-parameter-optimization/examples_hyperparam_opt.md
+++ b/docs/guides/optimization/hyper-parameter-optimization/examples_hyperparam_opt.md
@@ -68,7 +68,7 @@ def job_complete_callback(
## Initialize the Optimization Task
-Initialize the Task, which will be stored in **ClearML Server** when the code runs. After the code runs at least once, it
+Initialize the Task, which will be stored in ClearML Server when the code runs. After the code runs at least once, it
can be [reproduced](../../../webapp/webapp_exp_reproducing.md) and [tuned](../../../webapp/webapp_exp_tuning.md).
We set the Task type to optimizer, and create a new experiment (and Task object) each time the optimizer runs (`reuse_last_task_id=False`).
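
For orientation, a minimal sketch of such an initialization might look like the following (the project and task names here are illustrative, not mandated by the example script):

```python
from clearml import Task

# Create the optimizer's own Task. A new Task (and experiment) is created on
# every run because reuse_last_task_id=False.
task = Task.init(
    project_name="Hyper-Parameter Optimization",        # illustrative name
    task_name="Automatic Hyper-Parameter Optimization",  # illustrative name
    task_type=Task.TaskTypes.optimizer,
    reuse_last_task_id=False,
)
```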
@@ -92,7 +92,7 @@ Create an arguments dictionary that contains the ID of the Task to optimize, and
optimizer will run as a service, see [Running as a service](#running-as-a-service).
In this example, an experiment named **Keras HP optimization base** is being optimized. The experiment must have run at
-least once so that it is stored in **ClearML Server**, and, therefore, can be cloned.
+least once so that it is stored in ClearML Server and can therefore be cloned.
Since the arguments dictionary is connected to the Task, after the code runs once, the `template_task_id` can be changed
to optimize a different experiment.
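
As a hedged sketch of that pattern, continuing from the `task` created above (the exact keys in the example script may differ):

```python
# The arguments dictionary is connected to the Task, so its values can be
# edited in the UI and overridden on later runs (for example, pointing
# template_task_id at a different experiment).
args = {
    "template_task_id": None,  # ID of the experiment (Task) to optimize
    "run_as_service": False,   # keep the optimizer running as a service
}
args = task.connect(args)      # returns the dictionary with any overrides applied
```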
diff --git a/docs/guides/reporting/explicit_reporting.md b/docs/guides/reporting/explicit_reporting.md
index d85bd3a4..aa9bfddc 100644
--- a/docs/guides/reporting/explicit_reporting.md
+++ b/docs/guides/reporting/explicit_reporting.md
@@ -9,7 +9,7 @@ example script from ClearML's GitHub repo:
* Setting an output destination for model checkpoints (snapshots).
* Explicitly logging a scalar, other (non-scalar) data, and logging text.
-* Registering an artifact, which is uploaded to **ClearML Server**, and ClearML logs changes to it.
+* Registering an artifact, which is uploaded to [ClearML Server](../../deploying_clearml/clearml_server.md), and ClearML logs changes to it.
* Uploading an artifact, which is uploaded, but changes to it are not logged.
## Prerequisites
@@ -202,7 +202,7 @@ logger.report_text(
## Step 3: Registering Artifacts
-Registering an artifact uploads it to **ClearML Server**, and if it changes, the change is logged in **ClearML Server**.
+Registering an artifact uploads it to ClearML Server, and if it changes, the change is logged in ClearML Server.
Currently, ClearML supports Pandas DataFrames as registered artifacts.
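
As a rough sketch of the call described here (reusing the `Test_Loss_Correct` name that appears later in this example; the DataFrame contents are made up for illustration):

```python
import pandas as pd
from clearml import Task

df = pd.DataFrame({"epoch": [1, 2, 3], "loss": [0.9, 0.7, 0.6]})  # illustrative data

# Register the DataFrame: it is uploaded to ClearML Server, and subsequent
# changes to it are tracked and logged.
Task.current_task().register_artifact(
    name="Test_Loss_Correct",
    artifact=df,
    metadata={"stage": "example"},  # optional metadata shown in the UI
)
```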
### Register the Artifact
@@ -249,7 +249,7 @@ sample = Task.current_task().get_registered_artifacts()['Test_Loss_Correct'].sam
## Step 4: Uploading Artifacts
-Artifact can be uploaded to the **ClearML Server**, but changes are not logged.
+Artifacts can be uploaded to the ClearML Server, but changes to them are not logged.
Supported artifacts include:
* Pandas DataFrames
diff --git a/docs/webapp/webapp_exp_track_visual.md b/docs/webapp/webapp_exp_track_visual.md
index 011ef5c3..47f96bd5 100644
--- a/docs/webapp/webapp_exp_track_visual.md
+++ b/docs/webapp/webapp_exp_track_visual.md
@@ -145,7 +145,7 @@ The **TF_DEFINE** parameter group shows automatic TensorFlow logging.

-Once an experiment is run and stored in **ClearML Server**, any of these hyperparameters can be [modified](webapp_exp_tuning.md#modifying-experiments).
+Once an experiment is run and stored in ClearML Server, any of these hyperparameters can be [modified](webapp_exp_tuning.md#modifying-experiments).
### User Properties
@@ -167,7 +167,7 @@ parameter in [`Task.connect_configuration`](../references/sdk/task.md#connect_co

:::important
-In older versions of **ClearML Server**, the Task model configuration appeared in the **ARTIFACTS** tab, **MODEL CONFIGURATION** section. Task model configurations now appear in the **Configuration Objects** section, in the **CONFIGURATION** tab.
+In older versions of ClearML Server, the Task model configuration appeared in the **ARTIFACTS** tab, **MODEL CONFIGURATION** section. Task model configurations now appear in the **Configuration Objects** section, in the **CONFIGURATION** tab.
:::
diff --git a/docs/webapp/webapp_exp_tuning.md b/docs/webapp/webapp_exp_tuning.md
index 035af7a2..82597cc4 100644
--- a/docs/webapp/webapp_exp_tuning.md
+++ b/docs/webapp/webapp_exp_tuning.md
@@ -118,7 +118,7 @@ Set a logging level for the experiment (see the standard Python [logging levels]
#### Hyperparameters
:::important
-In older versions of **ClearML Server**, the **CONFIGURATION** tab was named **HYPER PARAMETERS**, and it contained all
+In older versions of ClearML Server, the **CONFIGURATION** tab was named **HYPER PARAMETERS**, and it contained all
parameters. The renamed tab contains a **HYPER PARAMETER** section, and subsections for hyperparameter groups.
:::
@@ -158,7 +158,7 @@ except experiments whose status is *Published* (read-only).
#### Configuration Objects
:::important
-In older versions of **ClearML Server**, the Task model configuration appeared in the **ARTIFACTS** tab **>** **MODEL
+In older versions of ClearML Server, the Task model configuration appeared in the **ARTIFACTS** tab **>** **MODEL
CONFIGURATION** section. Task model configurations now appear in **CONFIGURATION** **>** **Configuration Objects**.
:::