Small edits (#144)

pollfly 2021-12-27 10:41:43 +02:00 committed by GitHub
parent 6962630aaa
commit 16ffa620b6
11 changed files with 29 additions and 27 deletions


@@ -41,7 +41,7 @@ and [configuration options](configs/clearml_conf.md#agent-section).
 ## Installation
 :::note
-If **ClearML** was previously configured, follow [this](clearml_agent#adding-clearml-agent-to-a-configuration-file) to add
+If **ClearML** was previously configured, follow [this](#adding-clearml-agent-to-a-configuration-file) to add
 ClearML Agent specific configurations
 :::


@@ -50,7 +50,7 @@ The minimum recommended amount of RAM is 8 GB. For example, a t3.large or t3a.la
 1. Open the AWS Marketplace for the [Allegro AI ClearML Server](https://aws.amazon.com/marketplace/pp/B085D8W5NM).
 1. In the heading area, click **Continue to Subscribe**.
-1. **On the Subscribe to software** page, click **Accept Terms**, and then click **Continue to Configuration**.
+1. On the **Subscribe to software** page, click **Accept Terms**, and then click **Continue to Configuration**.
 1. On the **Configure this software** page, complete the following:
    1. In the **Fulfillment Option** list, select **64-bit (x86) Amazon Machine Image (AMI)**.


@@ -155,7 +155,7 @@ def main(pickle_url, mock_parameter='mock'):
     X_train, X_test, y_train, y_test = step_two(data_frame)
     model = step_three(X_train, y_train)
     accuracy = 100 * step_four(model, X_data=X_test, Y_data=y_test)
-    print(fAccuracy={accuracy}%)
+    print(f"Accuracy={accuracy}%")
 ```
 Notice that the driver is the `main` function, calling ("launching") the different steps. Next we add the decorators over
@@ -222,7 +222,7 @@ def main(pickle_url, mock_parameter='mock'):
     X_train, X_test, y_train, y_test = step_two(data_frame)
     model = step_three(X_train, y_train)
     accuracy = 100 * step_four(model, X_data=X_test, Y_data=y_test)
-    print(fAccuracy={accuracy}%)
+    print(f"Accuracy={accuracy}%")
 ```
 We wrap each pipeline component with `@PipelineDecorator.component`, and the main pipeline logic with
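
For context, a minimal sketch of the decorator pattern this page describes, assuming the `clearml` SDK (the step and pipeline names here are illustrative, not taken from the diffed example):

```python
from clearml import PipelineDecorator

# Each decorated function becomes a self-contained pipeline component that
# ClearML can cache and execute on a remote agent.
@PipelineDecorator.component(return_values=['total'], cache=True)
def step_one(limit):
    return sum(range(limit))

# The driver function becomes the pipeline controller.
@PipelineDecorator.pipeline(name='toy pipeline', project='examples', version='0.1')
def main(limit):
    total = step_one(limit)
    print(f"Total={total}")

if __name__ == '__main__':
    # Run all components in the local process for quick debugging.
    PipelineDecorator.run_locally()
    main(limit=10)
```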


@@ -18,23 +18,23 @@ If you are afraid of clutter, use the archive option, and set up your own [clean
 ## Clone Tasks
 In order to define a Task in ClearML we have two options
-- Run the actual code with `task.init` call. This will create and auto-populate the Task in CleaML (including Git Repo/Python Packages/ Command line etc.).
-- Register local/remote code repository with `clearml-task`. See [details](../../apps/clearml_task.md).
+- Run the actual code with a `task.init` call. This will create and auto-populate the Task in ClearML (including Git repo / Python packages / command line etc.).
+- Register a local / remote code repository with `clearml-task`. See [details](../../apps/clearml_task.md).
 Once we have a Task in ClearML, we can clone and edit its definitions in the UI, then launch it on one of our nodes with [ClearML Agent](../../clearml_agent.md).
 ## Advanced Automation
-- Create daily/weekly cron jobs for retraining best performing models on.
+- Create daily / weekly cron jobs for retraining best-performing models.
 - Create data monitoring & scheduling and launch inference jobs to test performance on any new coming dataset.
 - Once there are two or more experiments that run after another, group them together into a [pipeline](../../fundamentals/pipelines.md).
 ## Manage Your Data
 Use [ClearML Data](../../clearml_data/clearml_data.md) to version your data, then link it to running experiments for easy reproduction.
-Make datasets machine agnostic (i.e. store original dataset in a shared storage location, e.g. shared-folder/S3/Gs/Azure).
+Make datasets machine agnostic (i.e. store the original dataset in a shared storage location, e.g. shared folder / S3 / GS / Azure).
 ClearML Data supports efficient Dataset storage and caching, differentiable & compressed.
 ## Scale Your Work
-Use [ClearML Agent](../../clearml_agent.md) to scale work. Install the agent machines (Remote or local) and manage
+Use [ClearML Agent](../../clearml_agent.md) to scale work. Install the agent on machines (remote or local) and manage
 training workload with it.
 Improve team collaboration by transparent resource monitoring, always know what is running where.
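
As a hedged illustration of the two workflows above, a minimal sketch assuming the `clearml` SDK (project, task, and dataset names are placeholders):

```python
from clearml import Task, Dataset

# Running the actual code with a Task.init call auto-populates the Task
# in ClearML (Git repo, Python packages, command line, etc.).
task = Task.init(project_name='examples', task_name='my experiment')

# Versioning data with ClearML Data, per the "Manage Your Data" section:
dataset = Dataset.create(dataset_name='my dataset', dataset_project='examples')
dataset.add_files(path='data/')  # local folder to version
dataset.upload()                 # push the files to the configured storage
dataset.finalize()               # freeze this dataset version
```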


@ -1,5 +1,5 @@
--- ---
title: Fastai title: FastAI
--- ---
The [fastai_with_tensorboard.py](https://github.com/allegroai/clearml/blob/master/examples/frameworks/fastai/fastai_with_tensorboard.py) The [fastai_with_tensorboard.py](https://github.com/allegroai/clearml/blob/master/examples/frameworks/fastai/fastai_with_tensorboard.py)
example demonstrates the integration of **ClearML** into code that uses fastai and TensorBoard. example demonstrates the integration of **ClearML** into code that uses fastai and TensorBoard.


@ -1,5 +1,5 @@
--- ---
title: MegEngine MNIST title: MegEngine
--- ---
The [megengine_mnist.py](https://github.com/allegroai/clearml/blob/master/examples/frameworks/megengine/megengine_mnist.py) The [megengine_mnist.py](https://github.com/allegroai/clearml/blob/master/examples/frameworks/megengine/megengine_mnist.py)


@@ -15,12 +15,14 @@ When the script runs, it creates an experiment named `html samples reporting`, w
 ## Reporting HTML URLs
-Report HTML by URL, using the `Logger.report_media` method `url` parameter.
+Report HTML by URL, using the [Logger.report_media](../../references/sdk/logger.md#report_media) method's `url` parameter.
 See the example script's [report_html_url](https://github.com/allegroai/clearml/blob/master/examples/reporting/html_reporting.py#L16)
 function, which reports the **ClearML** documentation's home page.
-Logger.current_logger().report_media("html", "url_html", iteration=iteration, url="https://allegro.ai/docs/index.html")
+```python
+Logger.current_logger().report_media("html", "url_html", iteration=iteration, url="https://clear.ml/docs")
+```
 ## Reporting HTML Local Files
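
For comparison, a hedged sketch of reporting a local HTML file with the same method's `local_path` parameter (the file name is a placeholder):

```python
from clearml import Logger

# Upload a local HTML file as a debug sample for this iteration.
Logger.current_logger().report_media(
    "html", "local_html", iteration=1, local_path="report.html"
)
```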


@@ -10,10 +10,10 @@ demonstrates reporting (uploading) images in several formats, including:
 * PIL Image objects
 * Local files.
-**ClearML** uploads images to the bucket specified in the **ClearML** configuration file
-or **ClearML** can be configured for image storage, see [Logger.set_default_upload_destination](../../references/sdk/logger.md#set_default_upload_destination)
+ClearML uploads images to the bucket specified in the ClearML [configuration file](../../configs/clearml_conf.md),
+or ClearML can be configured for image storage, see [Logger.set_default_upload_destination](../../references/sdk/logger.md#set_default_upload_destination)
 (storage for [artifacts](../../fundamentals/artifacts.md#setting-upload-destination) is different). Set credentials for
-storage in the **ClearML** configuration file.
+storage in the ClearML configuration file.
 When the script runs, it creates an experiment named `image reporting`, which is associated with the `examples` project.
@@ -48,7 +48,7 @@ Logger.current_logger().report_image(
 )
 ```
-**ClearML** reports these images as debug samples in the **ClearML Web UI** **>** experiment details **>** **RESULTS** tab
+ClearML reports these images as debug samples in the **ClearML Web UI** **>** experiment details **>** **RESULTS** tab
 **>** **DEBUG SAMPLES** sub-tab.
 ![image](../../img/examples_reporting_07.png)
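
A minimal sketch of configuring image storage via the method linked above (the bucket URI is a placeholder; credentials for the destination come from the ClearML configuration file):

```python
from clearml import Logger

# Route uploaded debug images to a specific storage destination.
Logger.current_logger().set_default_upload_destination("s3://my-bucket/debug-samples")
```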


@@ -5,8 +5,8 @@ title: Manual Matplotlib Reporting
 The [matplotlib_manual_reporting.py](https://github.com/allegroai/clearml/blob/master/examples/reporting/matplotlib_manual_reporting.py)
 example demonstrates reporting using Matplotlib and Seaborn with **ClearML**.
-When the script runs, it creates an experiment named "Manual Matplotlib example", which is associated with the
-examples project.
+When the script runs, it creates an experiment named `Manual Matplotlib example`, which is associated with the
+`examples` project.
 The Matplotlib figure reported by calling the [Logger.report_matplotlib_figure](../../references/sdk/logger.md#report_matplotlib_figure)
 method appears in **RESULTS** **>** **PLOTS**.
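
A hedged sketch of the manual reporting call (the figure contents and titles are illustrative, not taken from the example script):

```python
import matplotlib.pyplot as plt
from clearml import Logger, Task

task = Task.init(project_name="examples", task_name="Manual Matplotlib example")

# Build a trivial figure and report it explicitly; it then appears under
# RESULTS > PLOTS in the experiment's page.
plt.plot([1, 2, 3], [4, 1, 9])
Logger.current_logger().report_matplotlib_figure(
    title="Manual Reporting", series="Just a plot", iteration=0, figure=plt.gcf()
)
```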


@@ -7,7 +7,7 @@ through parametrized data access and meta-data version control.
 The basic premise is that a user-formed query is a full representation of the dataset used by the ML/DL process.
-ClearML Enterprise's hyperdatasets supports rapid prototyping, creating new opportunities such as:
+ClearML Enterprise's Hyper-Datasets support rapid prototyping, creating new opportunities such as:
 * Hyperparameter optimization of the data itself
 * QA/QC pipelining
 * CD/CT (continuous training) during deployment
@@ -28,7 +28,7 @@ These components interact in a way that enables revising data and tracking and a
 Frames are the basic units of data in ClearML Enterprise. SingleFrames and FrameGroups make up a Dataset version.
 Dataset versions can be created, modified, and removed. The different versions are recorded and available,
-so experiments and their data are reproducible and traceable.
+so experiments, and their data, are reproducible and traceable.
 Lastly, Dataviews manage views of the dataset with queries, so the input data to an experiment can be defined from a
 subset of a Dataset or combinations of Datasets.
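
A heavily hedged sketch of the Dataview idea, assuming the ClearML Enterprise `allegroai` package (the dataset name, version, and query below are illustrative placeholders):

```python
from allegroai import DataView  # ClearML Enterprise SDK

# A Dataview defines an experiment's input data as a query over Dataset
# versions, so a subset of a Dataset (or several Datasets) can be selected.
dataview = DataView()
dataview.add_query(
    dataset_name="my dataset",
    version_name="Current",
    roi_query="car",
)
```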


@@ -67,11 +67,11 @@ module.exports = {
     {'Docker': ['guides/docker/extra_docker_shell_script']},
     {'Frameworks': [
         {'Autokeras': ['guides/frameworks/autokeras/integration_autokeras', 'guides/frameworks/autokeras/autokeras_imdb_example']},
-        {'FastAI': ['guides/frameworks/fastai/fastai_with_tensorboard']},
+        'guides/frameworks/fastai/fastai_with_tensorboard',
         {'Keras': ['guides/frameworks/keras/jupyter', 'guides/frameworks/keras/keras_tensorboard']},
-        {'LightGBM': ['guides/frameworks/lightgbm/lightgbm_example']},
-        {'Matplotlib': ['guides/frameworks/matplotlib/matplotlib_example']},
-        {'MegEngine':['guides/frameworks/megengine/megengine_mnist']},
+        'guides/frameworks/lightgbm/lightgbm_example',
+        'guides/frameworks/matplotlib/matplotlib_example',
+        'guides/frameworks/megengine/megengine_mnist',
         {'PyTorch':
             ['guides/frameworks/pytorch/pytorch_distributed_example', 'guides/frameworks/pytorch/pytorch_matplotlib',
             'guides/frameworks/pytorch/pytorch_mnist', 'guides/frameworks/pytorch/pytorch_tensorboard', 'guides/frameworks/pytorch/pytorch_tensorboardx',
@@ -85,14 +85,14 @@ module.exports = {
             ]
         },
         {'PyTorch Ignite': ['guides/frameworks/pytorch ignite/integration_pytorch_ignite', 'guides/frameworks/pytorch ignite/pytorch_ignite_mnist']},
-        {'PyTorch Lightning': ['guides/frameworks/pytorch_lightning/pytorch_lightning_example']},
+        'guides/frameworks/pytorch_lightning/pytorch_lightning_example',
         {'Scikit-Learn': ['guides/frameworks/scikit-learn/sklearn_joblib_example', 'guides/frameworks/scikit-learn/sklearn_matplotlib_example']},
         {'TensorBoardX': ['guides/frameworks/tensorboardx/tensorboardx', "guides/frameworks/tensorboardx/video_tensorboardx"]},
         {
             'Tensorflow': ['guides/frameworks/tensorflow/tensorboard_pr_curve', 'guides/frameworks/tensorflow/tensorboard_toy',
             'guides/frameworks/tensorflow/tensorflow_mnist', 'guides/frameworks/tensorflow/integration_keras_tuner']
         },
-        {'XGboost': ['guides/frameworks/xgboost/xgboost_sample']}
+        'guides/frameworks/xgboost/xgboost_sample'
     ]},
     {'IDEs': ['guides/ide/remote_jupyter_tutorial', 'guides/ide/integration_pycharm', 'guides/ide/google_colab']},
     {'Offline Mode':['guides/set_offline']},