diff --git a/docs/deploying_clearml/enterprise_deploy/sso_active_directory.md b/docs/deploying_clearml/enterprise_deploy/sso_active_directory.md
index e3b4c9c2..03f91b31 100644
--- a/docs/deploying_clearml/enterprise_deploy/sso_active_directory.md
+++ b/docs/deploying_clearml/enterprise_deploy/sso_active_directory.md
@@ -30,12 +30,12 @@ To configure groups that should automatically become admins in ClearML set the f
 CLEARML__services__login__sso__saml_client__microsoft_ad__groups__admins=[, , ...]
 ```

-To change the the default Group Claim set the following environment variable:
+To change the default Group Claim, set the following environment variable:
 ```
 CLEARML__services__login__sso__saml_client__microsoft_ad__groups__claim=...
 ```

-To make group matching case insensitive set the following environment variable:
+To make group matching case-insensitive, set the following environment variable:
 ```
 CLEARML__services__login__sso__saml_client__microsoft_ad__groups__case_sensitive=false
 ```
diff --git a/docs/deploying_clearml/enterprise_deploy/sso_keycloak.md b/docs/deploying_clearml/enterprise_deploy/sso_keycloak.md
index 4462e1af..3674953b 100644
--- a/docs/deploying_clearml/enterprise_deploy/sso_keycloak.md
+++ b/docs/deploying_clearml/enterprise_deploy/sso_keycloak.md
@@ -10,7 +10,7 @@ browser).

 In the following sections, you will be instructed to set up different environment variables for the ClearML Server. If
 using a `docker-compose` deployment, these should be defined in your `docker-compose.override.yaml` file, under the
-`apiserver` service’ environment variables, as follows:
+`apiserver` service’s environment variables, as follows:

 ```
 services:
diff --git a/docs/deploying_clearml/enterprise_deploy/sso_multi_tenant_login.md b/docs/deploying_clearml/enterprise_deploy/sso_multi_tenant_login.md
index b1508c4f..cdd3970c 100644
--- a/docs/deploying_clearml/enterprise_deploy/sso_multi_tenant_login.md
+++ b/docs/deploying_clearml/enterprise_deploy/sso_multi_tenant_login.md
@@ -15,7 +15,7 @@ ClearML tenant can be associated with a particular external tenant
 /login
 /login/
 ```
-3. Make sure the external tenant ID and groups are returned as claims for a each user
+3. Make sure the external tenant ID and groups are returned as claims for each user

 ## Configure ClearML to use Multi-Tenant Mode

diff --git a/docs/getting_started/video_tutorials/hands-on_mlops_tutorials/ml_ci_cd_using_github_actions_and_clearml.md b/docs/getting_started/video_tutorials/hands-on_mlops_tutorials/ml_ci_cd_using_github_actions_and_clearml.md
index 754415c2..3abecade 100644
--- a/docs/getting_started/video_tutorials/hands-on_mlops_tutorials/ml_ci_cd_using_github_actions_and_clearml.md
+++ b/docs/getting_started/video_tutorials/hands-on_mlops_tutorials/ml_ci_cd_using_github_actions_and_clearml.md
@@ -202,7 +202,7 @@ you'll get is the best performance here because our checks already run, so you s
 open the PR, so basically the dummy task here was found to be the best performance, and it has been tagged but that
 means that every single time I open a PR or I update a PR, it will search ClearML, and get this dummy task. It will
 get this one, and then we say if we find the best task, if not we'll just add the best performance anyway because you're the
-first task in the list, you'll always be getting best performance, but if you're not then we'll get the best latest
+first task in the list, you'll always be getting the best performance, but if you're not then we'll get the best latest
 metric.
 For example `get_reported_scalars().get('Performance Metric').get('Series 1').get('y')`, so the `y` value there so this
 could basically be the best or the highest map from a task or the highest F1 score from a task, or any some such. Then
 you have the best metric. We do the same thing for the current task as well, and then it's fairly easy. We
diff --git a/docs/guides/advanced/execute_remotely.md b/docs/guides/advanced/execute_remotely.md
index 171dffb4..0a0b709b 100644
--- a/docs/guides/advanced/execute_remotely.md
+++ b/docs/guides/advanced/execute_remotely.md
@@ -28,7 +28,7 @@ moved to be executed by a stronger machine.

 During the execution of the example script, the code does the following:
 * Uses ClearML's automatic and explicit logging.
-* Creates an task named `Remote_execution PyTorch MNIST train` in the `examples` project.
+* Creates a task named `Remote_execution PyTorch MNIST train` in the `examples` project.

 ## Scalars

diff --git a/docs/guides/frameworks/pytorch/pytorch_abseil.md b/docs/guides/frameworks/pytorch/pytorch_abseil.md
index da1a4cf7..0975cb91 100644
--- a/docs/guides/frameworks/pytorch/pytorch_abseil.md
+++ b/docs/guides/frameworks/pytorch/pytorch_abseil.md
@@ -9,7 +9,7 @@ The example script does the following:
 * Trains a simple deep neural network on the PyTorch built-in [MNIST](https://pytorch.org/vision/stable/datasets.html#mnist)
   dataset
 * Creates a task named `pytorch mnist train with abseil` in the `examples` project
-* ClearML automatically logs the absl.flags, and the models (and their snapshots) created by PyTorch
+* ClearML automatically logs the `absl.flags`, and the models (and their snapshots) created by PyTorch
 * Additional metrics are logged by calling [`Logger.report_scalar()`](../../../references/sdk/logger.md#report_scalar)

 ## Scalars
diff --git a/docs/guides/frameworks/tensorflow/tensorflow_mnist.md b/docs/guides/frameworks/tensorflow/tensorflow_mnist.md
index a4afd8f9..d5abe5f7 100644
--- a/docs/guides/frameworks/tensorflow/tensorflow_mnist.md
+++ b/docs/guides/frameworks/tensorflow/tensorflow_mnist.md
@@ -4,7 +4,7 @@ title: TensorFlow MNIST

 The [tensorflow_mnist.py](https://github.com/clearml/clearml/blob/master/examples/frameworks/tensorflow/tensorflow_mnist.py)
 example demonstrates the integration of ClearML into code that uses TensorFlow and Keras to train a neural network on
-the Keras built-in [MNIST](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/mnist) handwritten digits dataset.
+the Keras built-in [MNIST](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/mnist) handwritten digit dataset.

 When the script runs, it creates a task named `Tensorflow v2 mnist with summaries` in the `examples` project.

diff --git a/docs/hyperdatasets/webapp/webapp_exp_track_visual.md b/docs/hyperdatasets/webapp/webapp_exp_track_visual.md
index 569d1fff..57c7d8f6 100644
--- a/docs/hyperdatasets/webapp/webapp_exp_track_visual.md
+++ b/docs/hyperdatasets/webapp/webapp_exp_track_visual.md
@@ -9,7 +9,7 @@ Dataviews are available under the ClearML Enterprise plan.
 While a task is running, and any time after it finishes, results are tracked and can be visualized in the ClearML
 Enterprise WebApp (UI).

-In addition to all of ClearML's offerings, ClearML Enterprise keeps track of the Dataviews associated with an
+In addition to all of ClearML's offerings, ClearML Enterprise keeps track of the Dataviews associated with a
 task, which can be viewed and [modified](webapp_exp_modifying.md) in the WebApp.

 ## Viewing a Task's Dataviews
diff --git a/docs/pipelines/pipelines_sdk_function_decorators.md b/docs/pipelines/pipelines_sdk_function_decorators.md
index 5345d116..c97216de 100644
--- a/docs/pipelines/pipelines_sdk_function_decorators.md
+++ b/docs/pipelines/pipelines_sdk_function_decorators.md
@@ -167,8 +167,8 @@ Additionally, you can enable automatic logging of a step's metrics / artifacts /
 following arguments:
 * `monitor_metrics` (optional) - Automatically log the step's reported metrics also on the pipeline Task. The expected
   format is one of the following:
-  * List of pairs metric (title, series) to log: [(step_metric_title, step_metric_series), ]. Example: `[('test', 'accuracy'), ]`
-  * List of tuple pairs, to specify a different target metric to use on the pipeline Task: [((step_metric_title, step_metric_series), (target_metric_title, target_metric_series)), ].
+  * List of pairs metric (title, series) to log: `[(step_metric_title, step_metric_series), ]`. Example: `[('test', 'accuracy'), ]`
+  * List of tuple pairs, to specify a different target metric to use on the pipeline Task: `[((step_metric_title, step_metric_series), (target_metric_title, target_metric_series)), ]`.
     Example: `[[('test', 'accuracy'), ('model', 'accuracy')], ]`
 * `monitor_artifacts` (optional) - Automatically log the step's artifacts on the pipeline Task.
   * Provided a list of
diff --git a/docs/pipelines/pipelines_sdk_tasks.md b/docs/pipelines/pipelines_sdk_tasks.md
index f1d03fd8..14068880 100644
--- a/docs/pipelines/pipelines_sdk_tasks.md
+++ b/docs/pipelines/pipelines_sdk_tasks.md
@@ -221,8 +221,8 @@ You can enable automatic logging of a step's metrics /artifacts / models to the

 * `monitor_metrics` (optional) - Automatically log the step's reported metrics also on the pipeline Task. The expected
   format is one of the following:
-  * List of pairs metric (title, series) to log: [(step_metric_title, step_metric_series), ]. Example: `[('test', 'accuracy'), ]`
-  * List of tuple pairs, to specify a different target metric to use on the pipeline Task: [((step_metric_title, step_metric_series), (target_metric_title, target_metric_series)), ].
+  * List of pairs metric (title, series) to log: `[(step_metric_title, step_metric_series), ]`. Example: `[('test', 'accuracy'), ]`
+  * List of tuple pairs, to specify a different target metric to use on the pipeline Task: `[((step_metric_title, step_metric_series), (target_metric_title, target_metric_series)), ]`.
     Example: `[[('test', 'accuracy'), ('model', 'accuracy')], ]`
 * `monitor_artifacts` (optional) - Automatically log the step's artifacts on the pipeline Task.
   * Provided a list of artifact names created by the step function, these artifacts will be logged automatically also
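For the `monitor_metrics` format covered by the two pipeline hunks above, a minimal sketch of how it might be passed to a decorator-based pipeline step; the step name, the reported title/series, and the placeholder accuracy value are illustrative assumptions, not taken from the docs being edited:

```python
from clearml import PipelineDecorator


# Illustrative step: it reports a scalar with title "test" and series "accuracy",
# and asks the controller to mirror that scalar on the pipeline task as well.
@PipelineDecorator.component(
    return_values=["accuracy"],
    monitor_metrics=[("test", "accuracy")],
    # Or map it to a different (title, series) on the pipeline task:
    # monitor_metrics=[(("test", "accuracy"), ("model", "accuracy"))],
)
def evaluate(model_path):
    # Import inside the function so the step stays self-contained when it runs as its own task.
    from clearml import Task

    accuracy = 0.95  # placeholder for a real evaluation result
    Task.current_task().get_logger().report_scalar(
        title="test", series="accuracy", value=accuracy, iteration=0
    )
    return accuracy
```

With a step defined this way, the ("test", "accuracy") series reported inside the step should also show up on the pipeline controller task, which is the behavior the reformatted bullet points describe.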
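The CI/CD transcript hunk also mentions `get_reported_scalars()`; a rough sketch of the comparison it describes could look like the following, where the project name, tag, task ID, and metric title/series are placeholders invented for illustration:

```python
from clearml import Task


def last_scalar(task, title="Performance Metric", series="Series 1"):
    # get_reported_scalars() returns {title: {series: {"x": [...], "y": [...]}}}
    scalars = task.get_reported_scalars()
    ys = scalars.get(title, {}).get(series, {}).get("y") or []
    return ys[-1] if ys else None


# Hypothetical project and tag names, for illustration only.
tagged = Task.get_tasks(project_name="CICD demo", tags=["Best Performance"])
best = last_scalar(tagged[0]) if tagged else None
current = last_scalar(Task.get_task(task_id="<current-task-id>"))  # placeholder ID

if best is None or (current is not None and current >= best):
    print("Current task matches or beats the best performance so far")
```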