Small edits

pollfly 2023-11-05 10:30:37 +02:00 committed by GitHub
parent a38dab6fd0
commit 8c4d299efd
17 changed files with 34 additions and 33 deletions


@@ -28,8 +28,8 @@ ClearML Data supports multiple ways to create datasets programmatically, which p
   will inherit its data
 * [`Dataset.squash()`](#datasetsquash) - Generate a new dataset by squashing together a set of related datasets
-You can add metadata to your datasets using the `Dataset.set_metadata` method, and access the metadata using the
-`Dataset.get_metadata` method. See [`set_metadata`](../references/sdk/dataset.md#set_metadata) and [`get_metadata`](../references/sdk/dataset.md#get_metadata).
+You can add metadata to your datasets using [`Dataset.set_metadata()`](../references/sdk/dataset.md#set_metadata),
+and access the metadata using [`Dataset.get_metadata()`](../references/sdk/dataset.md#get_metadata).
 ### Dataset.create()
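The metadata calls in the hunk above can be sketched as follows. The dataset ID and metadata values are hypothetical, and the ClearML calls are guarded so the sketch also runs where no ClearML installation or server is available:

```python
# A plain dict is one accepted metadata payload (hypothetical values)
metadata = {"source": "sensor-A", "n_rows": 10000, "license": "CC-BY-4.0"}

try:
    from clearml import Dataset

    # "<your_dataset_id>" is a placeholder -- substitute a real dataset ID
    ds = Dataset.get(dataset_id="<your_dataset_id>")
    ds.set_metadata(metadata, metadata_name="provenance")

    # Retrieve it later by the same name, e.g. from another task
    metadata = ds.get_metadata(metadata_name="provenance")
except Exception:
    pass  # no ClearML setup available; keep the local dict for illustration

print(metadata["n_rows"])
```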


@@ -102,7 +102,7 @@ hyperparameters. Passing `alias=<dataset_alias_string>` stores the dataset's ID
 `dataset_alias_string` parameter in the experiment's **CONFIGURATION > HYPERPARAMETERS > Datasets** section. This way
 you can easily track which dataset the task is using.
-The Dataset's [`get_local_copy`](../../references/sdk/dataset.md#get_local_copy) method returns a path to the cached,
+[`Dataset.get_local_copy`](../../references/sdk/dataset.md#get_local_copy) returns a path to the cached,
 downloaded dataset. Then the dataset path is input to PyTorch's `datasets` object.
 The script then trains a neural network to classify images using the dataset created above.
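A hedged sketch of the pattern this hunk describes. The dataset name and project are placeholders, and the call is guarded so the snippet degrades to a local path when no ClearML setup is present:

```python
try:
    from clearml import Dataset

    # Placeholder names -- replace with your own dataset/project
    dataset_path = Dataset.get(
        dataset_name="<dataset_name>", dataset_project="<project>"
    ).get_local_copy()  # path to the cached, downloaded dataset
except Exception:
    dataset_path = "./data"  # fallback so the sketch stays runnable

# dataset_path can then serve as the root of a PyTorch dataset,
# e.g. torchvision.datasets.CIFAR10(root=dataset_path, ...)
print(dataset_path)
```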


@@ -241,8 +241,8 @@ You can also specify per-endpoint log frequency with the `clearml-serving` CLI.
 See examples of ClearML Serving with other supported frameworks:
-* [Scikit-Learn](https://github.com/allegroai/clearml-serving/blob/main/examples/sklearn/readme.md) - random data
-* [Scikit-Learn Model Ensemble](https://github.com/allegroai/clearml-serving/blob/main/examples/ensemble/readme.md) - random data
+* [scikit-learn](https://github.com/allegroai/clearml-serving/blob/main/examples/sklearn/readme.md) - random data
+* [scikit-learn Model Ensemble](https://github.com/allegroai/clearml-serving/blob/main/examples/ensemble/readme.md) - random data
 * [XGBoost](https://github.com/allegroai/clearml-serving/blob/main/examples/xgboost/readme.md) - iris dataset
 * [LightGBM](https://github.com/allegroai/clearml-serving/blob/main/examples/lightgbm/readme.md) - iris dataset
 * [PyTorch](https://github.com/allegroai/clearml-serving/blob/main/examples/pytorch/readme.md) - mnist dataset


@@ -416,7 +416,7 @@ match_rules: [
 image: "nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04"
 arguments: "-e define=value"
 match: {
-script{
+script {
 # Optional: must match all requirements (not partial)
 requirements: {
 # version selection matching PEP-440


@@ -128,7 +128,7 @@ When a new ClearML Server version is available, the notification is:
 #### How do I find out ClearML version information? <a id="versions"></a>
-ClearML server version information is available in the ClearML webapp Settings page. On the bottom right of the page,
+ClearML server version information is available in the ClearML WebApp **Settings** page. On the bottom right of the page,
 it says **Version**, followed by three numbers: the web application version, the API server version, and the API version.
 ![Server version information](img/faq_server_versions.png)


@@ -115,7 +115,7 @@ under the "Input Models" section.
 Check out model snapshots examples for [TensorFlow](https://github.com/allegroai/clearml/blob/master/examples/frameworks/tensorflow/tensorflow_mnist.py),
 [PyTorch](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/pytorch_mnist.py),
 [Keras](https://github.com/allegroai/clearml/blob/master/examples/frameworks/keras/keras_tensorboard.py),
-[Scikit-Learn](https://github.com/allegroai/clearml/blob/master/examples/frameworks/scikit-learn/sklearn_joblib_example.py).
+[scikit-learn](https://github.com/allegroai/clearml/blob/master/examples/frameworks/scikit-learn/sklearn_joblib_example.py).
 #### Loading Models
 Loading a previously trained model is quite similar to loading artifacts.


@@ -1,5 +1,5 @@
 ---
-title: Scikit-Learn with Joblib
+title: scikit-learn with Joblib
 ---
 The [sklearn_joblib_example.py](https://github.com/allegroai/clearml/blob/master/examples/frameworks/scikit-learn/sklearn_joblib_example.py)


@@ -50,7 +50,7 @@ The sections below describe in more detail what happens in the controller task a
 1. Build the pipeline (see [PipelineController.add_step](../../references/sdk/automation_controller_pipelinecontroller.md#add_step)
 method for complete reference):
-The pipeline's [first step](#step-1---downloading-the-datae) uses the pre-existing task
+The pipeline's [first step](#step-1---downloading-the-data) uses the pre-existing task
 `pipeline step 1 dataset artifact` in the `examples` project. The step uploads local data and stores it as an artifact.
 ```python


@@ -27,7 +27,7 @@ Logger.current_logger().report_surface(
     zaxis="title Z",
 )
 ```
-Visualize the reported surface plot in **PLOTS**.
+View the reported surface plot in **PLOTS**.
 ![Surface plot](../../img/examples_reporting_02.png)
@@ -49,5 +49,5 @@ Logger.current_logger().report_scatter3d(
 )
 ```
-Visualize the reported 3D scatter plot in **PLOTS**.
+View the reported 3D scatter plot in **PLOTS**.
 ![3d scatter plot](../../img/examples_reporting_01.png)
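`report_surface()` consumes a plain 2D matrix (nested lists or a NumPy array). A stdlib-only sketch of building such a matrix; the reporting call itself is shown as a comment, since it needs an initialized ClearML task, and the titles/axis labels are the ones used in the example above:

```python
import math

# 10x10 surface z = sin(x/3) * cos(y/3), shaped like the `matrix`
# argument that Logger.report_surface() expects
surface = [
    [math.sin(x / 3.0) * math.cos(y / 3.0) for x in range(10)]
    for y in range(10)
]

# Logger.current_logger().report_surface(
#     "example surface", "series", iteration=0, matrix=surface,
#     xaxis="title X", yaxis="title Y", zaxis="title Z",
# )
print(len(surface), len(surface[0]))
```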


@@ -17,8 +17,8 @@ In the ``clearml`` GitHub repository, this example includes a clickable icon to
 ## Scalars
-To reports scalars, call the [Logger.report_scalar](../../references/sdk/logger.md#report_scalar)
-method. The scalar plots appear in the **web UI** in **SCALARS**.
+To report scalars, call [Logger.report_scalar()](../../references/sdk/logger.md#report_scalar).
+The scalar plots appear in the **web UI** in **SCALARS**.
 ```python
 # report two scalar series on two different graphs
@@ -44,7 +44,7 @@ Plots appear in **PLOTS**.
 ### 2D Plots
-Report 2D scatter plots by calling the [Logger.report_scatter2d](../../references/sdk/logger.md#report_scatter2d) method.
+Report 2D scatter plots by calling [Logger.report_scatter2d()](../../references/sdk/logger.md#report_scatter2d).
 Use the `mode` parameter to plot data points as markers, or both lines and markers.
 ```python
@@ -67,7 +67,7 @@ logger.report_scatter2d(
 ### 3D Plots
-To plot a series as a 3D scatter plot, use the [Logger.report_scatter3d](../../references/sdk/logger.md#report_scatter3d) method.
+To plot a series as a 3D scatter plot, use [Logger.report_scatter3d()](../../references/sdk/logger.md#report_scatter3d).
 ```python
 # report 3d scatter plot
@@ -85,8 +85,7 @@ logger.report_scatter3d(
 ![3d scatter plot](../../img/colab_explicit_reporting_05.png)
-To plot a series as a surface plot, use the [Logger.report_surface](../../references/sdk/logger.md#report_surface)
-method.
+To plot a series as a surface plot, use [Logger.report_surface()](../../references/sdk/logger.md#report_surface).
 ```python
 # report 3d surface


@@ -25,8 +25,7 @@ output_model = OutputModel(task=task)
 ## Label Enumeration
-Set the model's label enumeration using the [`OutputModel.update_labels`](../../references/sdk/model_outputmodel.md#update_labels)
-method.
+Set the model's label enumeration using [`OutputModel.update_labels()`](../../references/sdk/model_outputmodel.md#update_labels).
 ```python
 labels = {"background": 0, "cat": 1, "dog": 2}
@@ -34,8 +33,8 @@ output_model.update_labels(labels)
 ```
 ## Registering Models
-Register a previously trained model using the [`OutputModel.update_weights`](../../references/sdk/model_outputmodel.md#update_weights)
-method. The example code uses a model stored in S3.
+Register a previously trained model using [`OutputModel.update_weights()`](../../references/sdk/model_outputmodel.md#update_weights).
+The example code uses a model stored in S3.
 ```python
 # Manually log a model file, which will have the labels connected above


@@ -51,7 +51,7 @@ The experiments table allows filtering experiments by experiment name, type, and
 * **Aborted** - The experiment ran and was manually or programmatically terminated.
 * **Published** - The experiment is not running, it is preserved as read-only.
-## Step 3: Hide the Defaults Column
+## Step 3: Hide the Default Columns
 Customize the columns on the tracking leaderboard by hiding any of the default columns shown below.


@@ -1,5 +1,5 @@
 ---
-title: Scikit-Learn
+title: scikit-learn
 ---
 :::tip
@@ -7,7 +7,7 @@ If you are not already using ClearML, see [Getting Started](../getting_started/d
 instructions.
 :::
-ClearML integrates seamlessly with [Scikit-Learn](https://scikit-learn.org/stable/), automatically logging models created
+ClearML integrates seamlessly with [scikit-learn](https://scikit-learn.org/stable/), automatically logging models created
 with `joblib`.
 All you have to do is simply add two lines of code to your scikit-learn script:
@@ -73,8 +73,8 @@ See [Explicit Reporting Tutorial](../guides/reporting/explicit_reporting.md).
 Take a look at ClearML's scikit-learn examples. The examples use scikit-learn and ClearML in different configurations with
 additional tools, like Matplotlib:
-* [Scikit-Learn with Joblib](../guides/frameworks/scikit-learn/sklearn_joblib_example.md) - Demonstrates ClearML automatically logging the models created with joblib and a scatter plot created by Matplotlib.
-* [Scikit-Learn with Matplotlib](../guides/frameworks/scikit-learn/sklearn_matplotlib_example.md) - Demonstrates ClearML automatically logging scatter diagrams created with Matplotlib.
+* [scikit-learn with Joblib](../guides/frameworks/scikit-learn/sklearn_joblib_example.md) - Demonstrates ClearML automatically logging the models created with joblib and a scatter plot created by Matplotlib.
+* [scikit-learn with Matplotlib](../guides/frameworks/scikit-learn/sklearn_matplotlib_example.md) - Demonstrates ClearML automatically logging scatter diagrams created with Matplotlib.
 ## Remote Execution


@@ -28,7 +28,7 @@ All you have to do is install and set up ClearML:
 That's it! In every training run from now on, the ClearML experiment
 manager will capture:
 * Source code and uncommitted changes
-* Hyperparameters - PyTorch trainer [parameters](https://huggingface.co/docs/transformers/v4.34.1/en/main_classes/trainer#transformers.TrainingArguments),
+* Hyperparameters - PyTorch trainer [parameters](https://huggingface.co/docs/transformers/v4.34.1/en/main_classes/trainer#transformers.TrainingArguments)
 and TensorFlow definitions
 * Installed packages
 * Model files (make sure the `CLEARML_LOG_MODEL` environment variable is set to `True`)
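The model-files bullet above hinges on an environment variable; a minimal way to set it from Python (it must be in place before the `Trainer` is created so the integration picks it up):

```python
import os

# Enable checkpoint/model upload for the ClearML x Transformers integration;
# set this before instantiating transformers' Trainer
os.environ["CLEARML_LOG_MODEL"] = "True"
```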


@@ -35,7 +35,7 @@ continue to train and test new model versions.
 ClearML supports automatic and manual registration of models to the model catalog.
 ### Automatic Logging
-ClearML automatically logs models created/loaded through popular frameworks like TensorFlow or Scikit-Learn; all you
+ClearML automatically logs models created/loaded through popular frameworks like TensorFlow or scikit-learn; all you
 need to do is [instantiate a ClearML Task](clearml_sdk/task_sdk.md#task-creation) in your code. ClearML stores the
 framework's training results as output models.


@@ -52,7 +52,10 @@ For files, call `connect_configuration()` before reading the configuration file.
 path.
 ```python
-config_file = pipe.connect_configuration(configuration=config_file_path, name="My Configuration", description="configuration for pipeline")
+config_file = pipe.connect_configuration(
+    configuration=config_file_path,
+    name="My Configuration", description="configuration for pipeline"
+)
 my_params = json.load(open(config_file,'rt'))
 ```
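The returned `config_file` is just a local path, so reading it back is plain JSON loading. A runnable sketch that simulates the returned path with a temporary file (the `connect_configuration()` call itself is omitted, since it requires a live pipeline controller):

```python
import json
import os
import tempfile

# Simulate the local path that connect_configuration() returns
params = {"learning_rate": 0.01, "epochs": 5}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(params, f)
    config_file = f.name

# Read it back exactly as the example above does
with open(config_file, "rt") as fh:
    my_params = json.load(fh)

os.unlink(config_file)  # clean up the simulated config file
print(my_params)
```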


@@ -116,7 +116,7 @@ clicking the checkbox in the top left corner of the list.
 Click the checkbox in the top left corner of the list to select all items currently visible.
 An extended bulk selection tool is available through the down arrow next to the checkbox in the top left corner, enabling selecting items beyond the items currently on-screen:
-* All - Select all versions in the dataset
-* None - Clear selection
-* Filtered - Select all versions in the dataset that match the current active filters
+* **All** - Select all versions in the dataset
+* **None** - Clear selection
+* **Filtered** - Select all versions in the dataset that match the current active filters