Small edits (#144)

pollfly 2021-12-27 10:41:43 +02:00 committed by GitHub
parent 6962630aaa
commit 16ffa620b6
11 changed files with 29 additions and 27 deletions

View File

@@ -41,7 +41,7 @@ and [configuration options](configs/clearml_conf.md#agent-section).
## Installation
:::note
-If **ClearML** was previously configured, follow [this](clearml_agent#adding-clearml-agent-to-a-configuration-file) to add
+If **ClearML** was previously configured, follow [this](#adding-clearml-agent-to-a-configuration-file) to add
ClearML Agent specific configurations
:::
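
For reference, the agent-specific settings this note refers to live under the `agent` section of `clearml.conf`. A minimal sketch (values are placeholders, and only a couple of the documented keys are shown):

```
agent {
    # Git credentials the agent uses to clone experiment repositories
    git_user: "git-username"
    git_pass: "git-password"

    # Package manager the agent uses when rebuilding the experiment environment
    package_manager: {
        type: pip
    }
}
```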

View File

@@ -50,7 +50,7 @@ The minimum recommended amount of RAM is 8 GB. For example, a t3.large or t3a.la
1. Open the AWS Marketplace for the [Allegro AI ClearML Server](https://aws.amazon.com/marketplace/pp/B085D8W5NM).
1. In the heading area, click **Continue to Subscribe**.
-1. **On the Subscribe to software** page, click **Accept Terms**, and then click **Continue to Configuration**.
+1. On the **Subscribe to software** page, click **Accept Terms**, and then click **Continue to Configuration**.
1. On the **Configure this software** page, complete the following:
1. In the **Fulfillment Option** list, select **64-bit (x86) Amazon Machine Image (AMI)**.

View File

@@ -155,7 +155,7 @@ def main(pickle_url, mock_parameter='mock'):
    X_train, X_test, y_train, y_test = step_two(data_frame)
    model = step_three(X_train, y_train)
    accuracy = 100 * step_four(model, X_data=X_test, Y_data=y_test)
-    print(fAccuracy={accuracy}%)
+    print(f"Accuracy={accuracy}%")
```
Notice that the driver is the `main` function, calling ("launching") the different steps. Next we add the decorators over
@@ -222,7 +222,7 @@ def main(pickle_url, mock_parameter='mock'):
    X_train, X_test, y_train, y_test = step_two(data_frame)
    model = step_three(X_train, y_train)
    accuracy = 100 * step_four(model, X_data=X_test, Y_data=y_test)
-    print(fAccuracy={accuracy}%)
+    print(f"Accuracy={accuracy}%")
```
We wrap each pipeline component with `@PipelineDecorator.component`, and the main pipeline logic with
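
For context, a minimal sketch of the decorator pattern being discussed (step bodies, names, and the pickle URL are placeholders, not the example's full code):

```python
from clearml.automation.controller import PipelineDecorator

# Each decorated function becomes a standalone, cacheable pipeline component
@PipelineDecorator.component(return_values=['data_frame'], cache=True)
def step_one(pickle_url):
    import pandas as pd  # imports inside a component run on the executing machine
    return pd.read_pickle(pickle_url)

# The driver becomes the pipeline controller; calling the decorated steps
# inside it is what builds the execution graph
@PipelineDecorator.pipeline(name='pipeline demo', project='examples', version='0.1')
def main(pickle_url, mock_parameter='mock'):
    data_frame = step_one(pickle_url)
    print(f"Loaded frame of type {type(data_frame)}")

if __name__ == '__main__':
    PipelineDecorator.run_locally()  # debug the whole pipeline in a single process
    main(pickle_url='https://example.com/data.pkl')  # placeholder URL
```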

View File

@@ -34,7 +34,7 @@ Make datasets machine agnostic (i.e. store original dataset in a shared storage
ClearML Data supports efficient Dataset storage and caching, differentiable & compressed.
## Scale Your Work
-Use [ClearML Agent](../../clearml_agent.md) to scale work. Install the agent machines (Remote or local) and manage
+Use [ClearML Agent](../../clearml_agent.md) to scale work. Install the agent machines (remote or local) and manage
training workload with it.
Improve team collaboration by transparent resource monitoring, always know what is running where.

View File

@@ -1,5 +1,5 @@
---
-title: Fastai
+title: FastAI
---
The [fastai_with_tensorboard.py](https://github.com/allegroai/clearml/blob/master/examples/frameworks/fastai/fastai_with_tensorboard.py)
example demonstrates the integration of **ClearML** into code that uses fastai and TensorBoard.

View File

@@ -1,5 +1,5 @@
---
-title: MegEngine MNIST
+title: MegEngine
---
The [megengine_mnist.py](https://github.com/allegroai/clearml/blob/master/examples/frameworks/megengine/megengine_mnist.py)

View File

@@ -15,12 +15,14 @@ When the script runs, it creates an experiment named `html samples reporting`, w
## Reporting HTML URLs
-Report HTML by URL, using the `Logger.report_media` method `url` parameter.
+Report HTML by URL, using the [Logger.report_media](../../references/sdk/logger.md#report_media) method's `url` parameter.
See the example script's [report_html_url](https://github.com/allegroai/clearml/blob/master/examples/reporting/html_reporting.py#L16)
function, which reports the **ClearML** documentation's home page.
-Logger.current_logger().report_media("html", "url_html", iteration=iteration, url="https://allegro.ai/docs/index.html")
+```python
+Logger.current_logger().report_media("html", "url_html", iteration=iteration, url="https://clear.ml/docs")
+```
## Reporting HTML Local Files

View File

@@ -10,10 +10,10 @@ demonstrates reporting (uploading) images in several formats, including:
* PIL Image objects
* Local files.
-**ClearML** uploads images to the bucket specified in the **ClearML** configuration file
-or **ClearML** can be configured for image storage, see [Logger.set_default_upload_destination](../../references/sdk/logger.md#set_default_upload_destination)
+ClearML uploads images to the bucket specified in the ClearML [configuration file](../../configs/clearml_conf.md),
+or ClearML can be configured for image storage, see [Logger.set_default_upload_destination](../../references/sdk/logger.md#set_default_upload_destination)
(storage for [artifacts](../../fundamentals/artifacts.md#setting-upload-destination) is different). Set credentials for
-storage in the **ClearML** configuration file.
+storage in the ClearML configuration file.
When the script runs, it creates an experiment named `image reporting`, which is associated with the `examples` project.
@@ -48,7 +48,7 @@ Logger.current_logger().report_image(
)
```
-**ClearML** reports these images as debug samples in the **ClearML Web UI** **>** experiment details **>** **RESULTS** tab
+ClearML reports these images as debug samples in the **ClearML Web UI** **>** experiment details **>** **RESULTS** tab
**>** **DEBUG SAMPLES** sub-tab.
![image](../../img/examples_reporting_07.png)
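
As a companion to the hunks above, a short sketch of routing uploaded images to your own storage via `Logger.set_default_upload_destination` (the bucket URI is a placeholder; its credentials belong in the ClearML configuration file):

```python
from clearml import Task, Logger

task = Task.init(project_name='examples', task_name='image reporting')

# Debug samples / media reported after this call are uploaded to the given URI
Logger.current_logger().set_default_upload_destination('s3://my-bucket/clearml-uploads')
```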

View File

@@ -5,8 +5,8 @@ title: Manual Matplotlib Reporting
The [matplotlib_manual_reporting.py](https://github.com/allegroai/clearml/blob/master/examples/reporting/matplotlib_manual_reporting.py)
example demonstrates reporting using Matplotlib and Seaborn with **ClearML**.
-When the script runs, it creates an experiment named "Manual Matplotlib example", which is associated with the
-examples project.
+When the script runs, it creates an experiment named `Manual Matplotlib example`, which is associated with the
+`examples` project.
The Matplotlib figure reported by calling the [Logger.report_matplotlib_figure](../../references/sdk/logger.md#report_matplotlib_figure)
method appears in **RESULTS** **>** **PLOTS**.
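
A minimal sketch of the manual reporting call discussed here (the plot data is made up for illustration):

```python
import matplotlib.pyplot as plt
from clearml import Task

task = Task.init(project_name='examples', task_name='Manual Matplotlib example')

# Build a figure and report it explicitly instead of relying on automatic capture
fig = plt.figure()
plt.plot([1, 2, 3], [4, 5, 6])
task.get_logger().report_matplotlib_figure(
    title='Manual Reporting', series='Just a plot', figure=fig, iteration=0
)
```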

View File

@@ -7,7 +7,7 @@ through parametrized data access and meta-data version control.
The basic premise is that a user-formed query is a full representation of the dataset used by the ML/DL process.
-ClearML Enterprise's hyperdatasets supports rapid prototyping, creating new opportunities such as:
+ClearML Enterprise's Hyper-Datasets supports rapid prototyping, creating new opportunities such as:
* Hyperparameter optimization of the data itself
* QA/QC pipelining
* CD/CT (continuous training) during deployment
@@ -28,7 +28,7 @@ These components interact in a way that enables revising data and tracking and a
Frames are the basic units of data in ClearML Enterprise. SingleFrames and FrameGroups make up a Dataset version.
Dataset versions can be created, modified, and removed. The different versions are recorded and available,
-so experiments and their data are reproducible and traceable.
+so experiments, and their data are reproducible and traceable.
Lastly, Dataviews manage views of the dataset with queries, so the input data to an experiment can be defined from a
subset of a Dataset or combinations of Datasets.
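
As a rough illustration of the Dataview idea (ClearML Enterprise only; the import path, class, and parameter names below are assumptions based on the `allegroai` SDK and may differ by release):

```python
from allegroai import DataView  # assumed Enterprise-only import

# A Dataview is a query-defined view over one or more Dataset versions;
# the dataset and version names here are placeholders
dataview = DataView()
dataview.add_query(dataset_name='Example Dataset', version_name='Current')

# Iterating the view yields the frames that match the query
for frame in dataview.get_iterator():
    print(frame)
```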

View File

@@ -67,11 +67,11 @@ module.exports = {
{'Docker': ['guides/docker/extra_docker_shell_script']},
{'Frameworks': [
{'Autokeras': ['guides/frameworks/autokeras/integration_autokeras', 'guides/frameworks/autokeras/autokeras_imdb_example']},
-{'FastAI': ['guides/frameworks/fastai/fastai_with_tensorboard']},
+'guides/frameworks/fastai/fastai_with_tensorboard',
{'Keras': ['guides/frameworks/keras/jupyter', 'guides/frameworks/keras/keras_tensorboard']},
-{'LightGBM': ['guides/frameworks/lightgbm/lightgbm_example']},
-{'Matplotlib': ['guides/frameworks/matplotlib/matplotlib_example']},
-{'MegEngine':['guides/frameworks/megengine/megengine_mnist']},
+'guides/frameworks/lightgbm/lightgbm_example',
+'guides/frameworks/matplotlib/matplotlib_example',
+'guides/frameworks/megengine/megengine_mnist',
{'PyTorch':
['guides/frameworks/pytorch/pytorch_distributed_example', 'guides/frameworks/pytorch/pytorch_matplotlib',
'guides/frameworks/pytorch/pytorch_mnist', 'guides/frameworks/pytorch/pytorch_tensorboard', 'guides/frameworks/pytorch/pytorch_tensorboardx',
@@ -85,14 +85,14 @@ module.exports = {
]
},
{'PyTorch Ignite': ['guides/frameworks/pytorch ignite/integration_pytorch_ignite', 'guides/frameworks/pytorch ignite/pytorch_ignite_mnist']},
-{'PyTorch Lightning': ['guides/frameworks/pytorch_lightning/pytorch_lightning_example']},
+'guides/frameworks/pytorch_lightning/pytorch_lightning_example',
{'Scikit-Learn': ['guides/frameworks/scikit-learn/sklearn_joblib_example', 'guides/frameworks/scikit-learn/sklearn_matplotlib_example']},
{'TensorBoardX': ['guides/frameworks/tensorboardx/tensorboardx', "guides/frameworks/tensorboardx/video_tensorboardx"]},
{
'Tensorflow': ['guides/frameworks/tensorflow/tensorboard_pr_curve', 'guides/frameworks/tensorflow/tensorboard_toy',
'guides/frameworks/tensorflow/tensorflow_mnist', 'guides/frameworks/tensorflow/integration_keras_tuner']
},
-{'XGboost': ['guides/frameworks/xgboost/xgboost_sample']}
+'guides/frameworks/xgboost/xgboost_sample'
]},
{'IDEs': ['guides/ide/remote_jupyter_tutorial', 'guides/ide/integration_pycharm', 'guides/ide/google_colab']},
{'Offline Mode':['guides/set_offline']},