Small edits (#476)

This commit is contained in:
pollfly
2023-02-16 12:17:53 +02:00
committed by GitHub
parent 5458f8036b
commit 2cf096f7ec
27 changed files with 64 additions and 64 deletions

View File

@@ -22,19 +22,19 @@ During early stages of model development, while code is still being modified hea
the model and ensure that you choose a model that makes sense, and the training procedure works. Can be used to provide initial models for testing.
The above-mentioned setups might be folded into each other, and that's great! If you have a GPU machine for each researcher, that's awesome!
-The goal of this phase is to get a code, dataset and environment setup, so we can start digging to find the best model!
+The goal of this phase is to get a code, dataset, and environment setup, so you can start digging to find the best model!
- [ClearML SDK](../../clearml_sdk/clearml_sdk.md) should be integrated into your code (check out our [getting started](ds_first_steps.md)).
This helps visualize the results and track progress.
- [ClearML Agent](../../clearml_agent.md) helps moving your work to other machines without the hassle of rebuilding the environment every time,
-while also creating an easy queue interface that easily allows you to just drop your experiments to be executed one by one
+while also creating an easy queue interface that easily lets you just drop your experiments to be executed one by one
(great for ensuring that the GPUs are churning during the weekend).
- [ClearML Session](../../apps/clearml_session.md) helps with developing on remote machines, just like you'd develop on your local laptop!
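The queue interface above can be sketched in plain Python (a toy illustration of the pattern, not the ClearML API): experiments are dropped onto a FIFO queue, and a worker pulls and runs them one at a time.

```python
from collections import deque

# Toy sketch of the agent's queue pattern (not the ClearML API):
# researchers drop experiments on a queue, a worker runs them one by one.
queue = deque()
for experiment in ["exp-a", "exp-b", "exp-c"]:
    queue.append(experiment)   # "enqueue" an experiment

completed = []
while queue:
    task = queue.popleft()     # the worker pulls the next experiment
    completed.append(task)     # ...and runs it to completion
```

Execution order follows insertion order, which is what keeps the GPUs busy through the weekend without manual intervention.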
## Train Remotely
-In this phase, we scale our training efforts, and try to come up with the best code / parameter / data combination that
-yields the best performing model for our task!
+In this phase, you scale your training efforts, and try to come up with the best code / parameter / data combination that
+yields the best performing model for your task!
- The real training (usually) should **not** be executed on your development machine.
- Training sessions should be launched and monitored from a web UI.
@@ -55,8 +55,8 @@ that we need.
## Track EVERYTHING
-We believe that you should track everything! From obscure parameters to weird metrics, it's impossible to know what will end up
-improving our results later on!
+Track everything--from obscure parameters to weird metrics, it's impossible to know what will end up
+improving your results later on!
- Make sure experiments are reproducible! ClearML logs code, parameters, environment in a single, easily searchable place.
- Development is not linear. Configuration / Parameters should not be stored in your git, as

View File

@@ -2,7 +2,7 @@
title: Next Steps
---
-So, we've already [installed ClearML's python package](ds_first_steps.md) and ran our first experiment!
+So, you've already [installed ClearML's python package](ds_first_steps.md) and run your first experiment!
Now, we'll learn how to track Hyperparameters, Artifacts and Metrics!
@@ -19,7 +19,7 @@ or project & name combination. It's also possible to query tasks based on their
prev_task = Task.get_task(task_id='123456deadbeef')
```
-Once we have a Task object we can query the state of the Task, get its Model, scalars, parameters, etc.
+Once you have a Task object you can query the state of the Task, get its model, scalars, parameters, etc.
## Log Hyperparameters
@@ -40,7 +40,7 @@ Check [this](../../fundamentals/hyperparameters.md) out for all Hyperparameter l
## Log Artifacts
-ClearML allows you to easily store the output products of an experiment - Model snapshot / weights file, a preprocessing of your data, feature representation of data and more!
+ClearML lets you easily store the output products of an experiment - Model snapshot / weights file, a preprocessing of your data, feature representation of data and more!
Essentially, artifacts are files (or python objects) uploaded from a script and are stored alongside the Task.
These Artifacts can be easily accessed via the web UI or programmatically.
@@ -56,12 +56,12 @@ Uploading a local file containing the preprocessed results of the data:
task.upload_artifact(name='data', artifact_object='/path/to/preprocess_data.csv')
```
-We can also upload an entire folder with all its content by passing the folder (the folder will be zipped and uploaded as a single zip file).
+You can also upload an entire folder with all its content by passing the folder (the folder will be zipped and uploaded as a single zip file).
```python
task.upload_artifact(name='folder', artifact_object='/path/to/folder/')
```
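The folder-to-single-zip step can be sketched with the standard library (a conceptual sketch, not ClearML's internal upload code):

```python
import shutil
import tempfile
import zipfile
from pathlib import Path

# Build a throwaway folder with a couple of files in it
folder = Path(tempfile.mkdtemp())
(folder / "train.csv").write_text("a,b\n1,2\n")
(folder / "val.csv").write_text("a,b\n3,4\n")

# Pack the whole folder into one zip file, as the upload step would
out_base = Path(tempfile.mkdtemp()) / "folder_artifact"
archive = shutil.make_archive(str(out_base), "zip", root_dir=folder)
archived_names = sorted(zipfile.ZipFile(archive).namelist())
```

A single archive keeps the artifact atomic: one upload, one download, no partial folders.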
-Lastly, we can upload an instance of an object; Numpy/Pandas/PIL Images are supported with npz/csv.gz/jpg formats accordingly.
+Lastly, you can upload an instance of an object; Numpy/Pandas/PIL Images are supported with npz/csv.gz/jpg formats accordingly.
If the object type is unknown, ClearML pickles it and uploads the pickle file.
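The pickle fallback can be sketched with the standard library (a conceptual sketch of the behavior, not ClearML's actual implementation): any object without a dedicated format is serialized with `pickle` and later restored as-is.

```python
import pickle

def serialize_unknown(obj):
    """Fallback for unrecognized object types: pickle the object."""
    return pickle.dumps(obj)

class Experiment:           # an arbitrary user-defined object
    def __init__(self, lr):
        self.lr = lr

blob = serialize_unknown(Experiment(lr=0.01))
restored = pickle.loads(blob)   # round-trips back to the original object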
```python
@@ -128,11 +128,11 @@ local_weights_path = last_snapshot.get_local_copy()
Like before, we have to get the instance of the Task that trained the original weights file, then we can query the task for its output models (a list of snapshots), and get the latest snapshot.
:::note
-Using TensorFlow, the snapshots are stored in a folder, meaning the `local_weights_path` will point to a folder containing our requested snapshot.
+Using TensorFlow, the snapshots are stored in a folder, meaning the `local_weights_path` will point to a folder containing your requested snapshot.
:::
As with Artifacts, all models are cached, meaning the next time we run this code, no model needs to be downloaded.
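The caching idea can be sketched like this (a stdlib sketch of the concept, not ClearML's cache implementation): a download only happens on a cache miss, so repeated runs reuse the local copy.

```python
import tempfile
from pathlib import Path

downloads = []  # record how often we actually "download"

def get_local_copy(name, cache_dir, fetch):
    """Return the cached file; call fetch() only on a cache miss."""
    target = Path(cache_dir) / name
    if not target.exists():
        downloads.append(name)        # cache miss: fetch once
        target.write_bytes(fetch())
    return target

cache = tempfile.mkdtemp()
fetch = lambda: b"model-weights"
first = get_local_copy("model.pt", cache, fetch)
second = get_local_copy("model.pt", cache, fetch)  # served from cache
```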
Once one of the frameworks loads the weights file, the running Task is automatically updated, with “Input Model” pointing directly to the original training Task's model.
-This feature allows you to easily get a full genealogy of every trained and used model by your system!
+This feature lets you easily get a full genealogy of every trained and used model by your system!
## Log Metrics

View File

@@ -17,11 +17,11 @@ If you are afraid of clutter, use the archive option, and set up your own [clean
These metrics can later be part of your own in-house monitoring solution; don't let good data go to waste :)
## Clone Tasks
-In order to define a Task in ClearML we have two options
+Define a ClearML Task with one of the following options:
- Run the actual code with a `Task.init` call. This will create and auto-populate the Task in ClearML (including Git Repo / Python Packages / Command line etc.).
- Register local / remote code repository with `clearml-task`. See [details](../../apps/clearml_task.md).
-Once we have a Task in ClearML, we can clone and edit its definitions in the UI, then launch it on one of our nodes with [ClearML Agent](../../clearml_agent.md).
+Once you have a Task in ClearML, you can clone and edit its definitions in the UI, then launch it on one of your nodes with [ClearML Agent](../../clearml_agent.md).
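The clone-edit-enqueue flow can be sketched conceptually (plain Python with a dict stand-in, not the ClearML API): the template Task stays untouched, while the clone gets edited parameters and is pushed onto a queue for an agent to execute.

```python
import copy

# A "template" Task definition (a conceptual stand-in, not a ClearML object)
template = {"name": "train_model", "params": {"lr": 0.01, "epochs": 10}}

# Clone it and override a hyperparameter, as you would in the UI
clone = copy.deepcopy(template)
clone["params"]["lr"] = 0.001

# Enqueue the clone; an agent would pull it from here and execute it
queue = []
queue.append(clone)
```

The deep copy matters: editing the clone must never mutate the original experiment's record.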
## Advanced Automation
- Create daily / weekly cron jobs for retraining best-performing models.

View File

@@ -164,7 +164,7 @@ and [pipeline](../../pipelines/pipelines.md) solutions.
Logging models into the model repository is the easiest way to integrate the development process directly with production.
Any model stored by a supported framework (Keras / TensorFlow / PyTorch / Joblib etc.) will be automatically logged into ClearML.
-ClearML also offers methods to explicitly log models. Models can be automatically stored on a preferred storage medium
+ClearML also supports methods to explicitly log models. Models can be automatically stored on a preferred storage medium
(S3 bucket, Google Storage, etc.).
#### Log Metrics
@@ -208,7 +208,7 @@ tasks = Task.get_tasks(
Data is probably one of the biggest factors that determine the success of a project. Associating a model's data with
the model's configuration, code, and results (such as accuracy) is key to deducing meaningful insights into model behavior.
-[ClearML Data](../../clearml_data/clearml_data.md) allows you to version your data, so it's never lost, fetch it from every
+[ClearML Data](../../clearml_data/clearml_data.md) lets you version your data, so it's never lost, fetch it from every
machine with minimal code changes, and associate data to experiment results.
Logging data can be done via the command line or programmatically. If any preprocessing code is involved, ClearML logs it

View File

@@ -16,19 +16,19 @@ The sections below describe the following scenarios:
## Building Tasks
### Dataset Creation
-Let's assume we have some code that extracts data from a production database into a local folder.
-Our goal is to create an immutable copy of the data to be used by further steps:
+Let's assume you have some code that extracts data from a production database into a local folder.
+Your goal is to create an immutable copy of the data to be used by further steps:
```bash
clearml-data create --project data --name dataset
clearml-data sync --folder ./from_production
```
-We could also add a tag `latest` to the Dataset, marking it as the latest version.
+You can add a tag `latest` to the Dataset, marking it as the latest version.
### Preprocessing Data
-The second step is to preprocess the data. First we need to access it, then we want to modify it,
-and lastly we want to create a new version of the data.
+The second step is to preprocess the data. First access the data, then modify it,
+and lastly create a new version of the data.
```python
# create a task for the data processing part
@@ -59,10 +59,10 @@ dataset.tags = []
new_dataset.tags = ['latest']
```
-We passed the `parents` argument when we created v2 of the Dataset, which inherits all the parent's version content.
-This not only helps trace back dataset changes with full genealogy, but also makes our storage more efficient,
+The new dataset inherits the contents of the datasets specified in `Dataset.create`'s `parents` argument.
+This not only helps trace back dataset changes with full genealogy, but also makes the storage more efficient,
since it only stores the changed and / or added files from the parent versions.
-When we access the Dataset, it automatically merges the files from all parent versions
+When you access the Dataset, it automatically merges the files from all parent versions
in a fully automatic and transparent process, as if the files were always part of the requested Dataset.
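The merge across parent versions can be sketched as a dict update (a conceptual sketch; real Dataset versions track file hashes and store only deltas): later versions override or add files, and everything else is inherited from the parents.

```python
def merge_versions(*versions):
    """Merge file listings, parents first; later versions win."""
    merged = {}
    for version in versions:
        merged.update(version)
    return merged

v1 = {"a.csv": "hash-1", "b.csv": "hash-2"}   # parent dataset version
v2 = {"b.csv": "hash-3", "c.csv": "hash-4"}   # child: changed b, added c
files = merge_versions(v1, v2)
```

Only `b.csv` and `c.csv` need to be stored for v2; `a.csv` is resolved from the parent at access time.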
### Training