small edits

revital 2025-04-06 10:32:37 +03:00
parent af0a433690
commit 49b1b65688
10 changed files with 13 additions and 13 deletions

@@ -30,12 +30,12 @@ To configure groups that should automatically become admins in ClearML set the f
CLEARML__services__login__sso__saml_client__microsoft_ad__groups__admins=[<admin_group_name1>, <admin_group_name2>, ...]
```
To change the the default Group Claim set the following environment variable:
To change the default Group Claim, set the following environment variable:
```
CLEARML__services__login__sso__saml_client__microsoft_ad__groups__claim=...
```
To make group matching case insensitive set the following environment variable:
To make group matching case-insensitive, set the following environment variable:
```
CLEARML__services__login__sso__saml_client__microsoft_ad__groups__case_sensitive=false
```
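For illustration only, these three variables might be combined as follows (the group names and the claim URI below are hypothetical placeholders, not values ClearML requires):
```
CLEARML__services__login__sso__saml_client__microsoft_ad__groups__admins=[clearml-admins, platform-leads]
CLEARML__services__login__sso__saml_client__microsoft_ad__groups__claim=http://schemas.microsoft.com/ws/2008/06/identity/claims/groups
CLEARML__services__login__sso__saml_client__microsoft_ad__groups__case_sensitive=false
```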

@@ -10,7 +10,7 @@ browser).
In the following sections, you will be instructed to set up different environment variables for the ClearML Server. If
using a `docker-compose` deployment, these should be defined in your `docker-compose.override.yaml` file, under the
`apiserver` service environment variables, as follows:
`apiserver` service's environment variables, as follows:
```
services:
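  # Hedged sketch (not the complete file): the variables from this guide go under
  # the `apiserver` service's `environment` section; the variable shown here is
  # only one illustration of the pattern.
  apiserver:
    environment:
      CLEARML__services__login__sso__saml_client__microsoft_ad__groups__case_sensitive: "false"
```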

@@ -15,7 +15,7 @@ ClearML tenant can be associated with a particular external tenant
<clearml_webapp_address>/login
<clearml_webapp_address>/login/<external tenant ID>
```
3. Make sure the external tenant ID and groups are returned as claims for a each user
3. Make sure the external tenant ID and groups are returned as claims for each user
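For illustration, the attributes returned for a user might look something like the following (the claim names here are placeholders; use whatever claim names your identity provider and ClearML deployment agree on):
```
external_tenant_id: <external tenant ID>
groups: [team-a, team-b]
```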
## Configure ClearML to use Multi-Tenant Mode

@@ -202,7 +202,7 @@ you'll get is the best performance here because our checks already run, so you s
open the PR, so basically the dummy task here was found to be the best performance, and it has been tagged but that
means that every single time I open a PR or I update a PR, it will search ClearML, and get this dummy task. It will get
this one, and then we say if we find the best task, if not we'll just add the best performance anyway because you're the
first task in the list, you'll always be getting best performance, but if you're not then we'll get the best latest
first task in the list, you'll always be getting the best performance, but if you're not then we'll get the best latest
metric. For example `get_reported_scalars().get('Performance Metric').get('Series 1').get('y')`, so the `y` value there
so this could basically be the best or the highest map from a task or the highest F1 score from a task, or any some
such. Then you have the best metric. We do the same thing for the current task as well, and then it's fairly easy. We
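The flow described above can be sketched roughly as follows in Python (the project name, tag, and metric/series names are placeholders, and `Task.get_tasks()` filtering by tags is assumed to be available in your SDK version):
```python
from clearml import Task

# The task produced by the current PR run (or fetch it by ID/name if this code
# runs outside the task itself).
current = Task.current_task()

# Look for the task previously tagged as the best performer.
best_tasks = Task.get_tasks(project_name="my project", tags=["Best Performance"])

def latest_metric(task):
    # get_reported_scalars() returns {title: {series: {"x": [...], "y": [...]}}}
    return task.get_reported_scalars()["Performance Metric"]["Series 1"]["y"][-1]

# If no task is tagged yet, the current one wins by default; otherwise compare metrics.
if not best_tasks or latest_metric(current) > latest_metric(best_tasks[0]):
    current.add_tags(["Best Performance"])
```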

@@ -28,7 +28,7 @@ moved to be executed by a stronger machine.
During the execution of the example script, the code does the following:
* Uses ClearML's automatic and explicit logging.
* Creates an task named `Remote_execution PyTorch MNIST train` in the `examples` project.
* Creates a task named `Remote_execution PyTorch MNIST train` in the `examples` project.
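A minimal sketch of the pattern this example is built around (the queue name is a placeholder; the full script continues with the PyTorch training code):
```python
from clearml import Task

# Create the task in the `examples` project; ClearML hooks PyTorch logging automatically.
task = Task.init(project_name="examples", task_name="Remote_execution PyTorch MNIST train")

# Stop the local run and enqueue the task so a ClearML Agent on a stronger machine executes it.
task.execute_remotely(queue_name="default", exit_process=True)

# ... the PyTorch MNIST training loop runs here on the remote machine ...
```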
## Scalars

@@ -9,7 +9,7 @@ The example script does the following:
* Trains a simple deep neural network on the PyTorch built-in [MNIST](https://pytorch.org/vision/stable/datasets.html#mnist)
dataset
* Creates a task named `pytorch mnist train with abseil` in the `examples` project
* ClearML automatically logs the absl.flags, and the models (and their snapshots) created by PyTorch
* ClearML automatically logs the `absl.flags`, and the models (and their snapshots) created by PyTorch
* Additional metrics are logged by calling [`Logger.report_scalar()`](../../../references/sdk/logger.md#report_scalar)
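A hedged sketch of the script's overall shape (the flag name and values are illustrative, not the exact ones used in the example):
```python
from absl import app, flags
from clearml import Logger, Task

FLAGS = flags.FLAGS
flags.DEFINE_integer("epochs", 5, "Number of training epochs")  # illustrative flag

def main(argv):
    # Once the task is created, ClearML captures the absl.flags values automatically.
    task = Task.init(project_name="examples", task_name="pytorch mnist train with abseil")
    for epoch in range(FLAGS.epochs):
        loss = 1.0 / (epoch + 1)  # stand-in for the real training loss
        # Explicitly report an additional scalar, as the example does with report_scalar().
        Logger.current_logger().report_scalar("loss", "train", value=loss, iteration=epoch)

if __name__ == "__main__":
    app.run(main)
```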
## Scalars

@@ -4,7 +4,7 @@ title: TensorFlow MNIST
The [tensorflow_mnist.py](https://github.com/clearml/clearml/blob/master/examples/frameworks/tensorflow/tensorflow_mnist.py)
example demonstrates the integration of ClearML into code that uses TensorFlow and Keras to train a neural network on
the Keras built-in [MNIST](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/mnist) handwritten digits dataset.
the Keras built-in [MNIST](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/mnist) handwritten digit dataset.
When the script runs, it creates a task named `Tensorflow v2 mnist with summaries` in the `examples` project.
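In outline, the relevant part of the script looks something like this (only task creation and dataset loading are shown; the model, `tf.summary` writers, and training loop follow in the full example):
```python
import tensorflow as tf
from clearml import Task

# Creating the task is enough for ClearML to capture TensorFlow/Keras and TensorBoard logging.
task = Task.init(project_name="examples", task_name="Tensorflow v2 mnist with summaries")

# The Keras built-in MNIST handwritten digit dataset used by the example.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
```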

@@ -9,7 +9,7 @@ Dataviews are available under the ClearML Enterprise plan.
While a task is running, and any time after it finishes, results are tracked and can be visualized in the ClearML
Enterprise WebApp (UI).
In addition to all of ClearML's offerings, ClearML Enterprise keeps track of the Dataviews associated with an
In addition to all of ClearML's offerings, ClearML Enterprise keeps track of the Dataviews associated with a
task, which can be viewed and [modified](webapp_exp_modifying.md) in the WebApp.
## Viewing a Task's Dataviews

@@ -167,8 +167,8 @@ Additionally, you can enable automatic logging of a step's metrics / artifacts /
following arguments:
* `monitor_metrics` (optional) - Automatically log the step's reported metrics also on the pipeline Task. The expected
format is one of the following:
* List of pairs metric (title, series) to log: [(step_metric_title, step_metric_series), ]. Example: `[('test', 'accuracy'), ]`
* List of tuple pairs, to specify a different target metric to use on the pipeline Task: [((step_metric_title, step_metric_series), (target_metric_title, target_metric_series)), ].
* List of pairs metric (title, series) to log: `[(step_metric_title, step_metric_series), ]`. Example: `[('test', 'accuracy'), ]`
* List of tuple pairs, to specify a different target metric to use on the pipeline Task: `[((step_metric_title, step_metric_series), (target_metric_title, target_metric_series)), ]`.
Example: `[[('test', 'accuracy'), ('model', 'accuracy')], ]`
* `monitor_artifacts` (optional) - Automatically log the step's artifacts on the pipeline Task.
* Provided a list of
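As a rough illustration of both formats (shown here with `PipelineController.add_function_step()`; step, metric, and series names are placeholders):
```python
from clearml import Logger, PipelineController

def train_step():
    # Placeholder step body: report a ('test', 'accuracy') scalar so there is something to monitor.
    Logger.current_logger().report_scalar("test", "accuracy", value=0.9, iteration=0)

pipe = PipelineController(name="metrics demo", project="examples", version="1.0.0")

pipe.add_function_step(
    name="train",
    function=train_step,
    # Log the step's ('test', 'accuracy') scalar on the pipeline Task under the same title/series:
    monitor_metrics=[("test", "accuracy")],
    # Or remap it to a different title/series on the pipeline Task:
    # monitor_metrics=[[("test", "accuracy"), ("model", "accuracy")]],
)
```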

@@ -221,8 +221,8 @@ You can enable automatic logging of a step's metrics /artifacts / models to the
* `monitor_metrics` (optional) - Automatically log the step's reported metrics also on the pipeline Task. The expected
format is one of the following:
* List of pairs metric (title, series) to log: [(step_metric_title, step_metric_series), ]. Example: `[('test', 'accuracy'), ]`
* List of tuple pairs, to specify a different target metric to use on the pipeline Task: [((step_metric_title, step_metric_series), (target_metric_title, target_metric_series)), ].
* List of pairs metric (title, series) to log: `[(step_metric_title, step_metric_series), ]`. Example: `[('test', 'accuracy'), ]`
* List of tuple pairs, to specify a different target metric to use on the pipeline Task: `[((step_metric_title, step_metric_series), (target_metric_title, target_metric_series)), ]`.
Example: `[[('test', 'accuracy'), ('model', 'accuracy')], ]`
* `monitor_artifacts` (optional) - Automatically log the step's artifacts on the pipeline Task.
* Provided a list of artifact names created by the step function, these artifacts will be logged automatically also
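For example, with `PipelineController.add_step()` and a pre-existing base task (the project, task, metric, and artifact names are placeholders):
```python
from clearml import PipelineController

pipe = PipelineController(name="monitoring demo", project="examples", version="1.0.0")

pipe.add_step(
    name="stage_train",
    base_task_project="examples",
    base_task_name="step 2 train model",
    # Log the step's ('test', 'accuracy') scalar as ('model', 'accuracy') on the pipeline Task:
    monitor_metrics=[[("test", "accuracy"), ("model", "accuracy")]],
    # Also log the step's 'model' artifact on the pipeline Task:
    monitor_artifacts=["model"],
)
```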