Small edits (#476)

This commit is contained in:
pollfly
2023-02-16 12:17:53 +02:00
committed by GitHub
parent 5458f8036b
commit 2cf096f7ec
27 changed files with 64 additions and 64 deletions


@@ -22,19 +22,19 @@ During early stages of model development, while code is still being modified hea
the model and ensure that you choose a model that makes sense and that the training procedure works. It can be used to provide initial models for testing.
The abovementioned setups might be folded into each other and that's great! If you have a GPU machine for each researcher, that's awesome!
The goal of this phase is to get a code, dataset, and environment setup, so you can start digging to find the best model!
- [ClearML SDK](../../clearml_sdk/clearml_sdk.md) should be integrated into your code (check out our [getting started](ds_first_steps.md)).
This helps you visualize results and track progress.
- [ClearML Agent](../../clearml_agent.md) helps you move your work to other machines without the hassle of rebuilding the environment every time,
while also creating a simple queue interface that lets you drop your experiments to be executed one by one
(great for ensuring that the GPUs are churning during the weekend).
- [ClearML Session](../../apps/clearml_session.md) helps with developing on remote machines, just like you'd develop on your local laptop!
## Train Remotely
In this phase, you scale your training efforts, and try to come up with the best code / parameter / data combination that
yields the best performing model for your task!
- The real training (usually) should **not** be executed on your development machine.
- Training sessions should be launched and monitored from a web UI.
@@ -55,8 +55,8 @@ that we need.
## Track EVERYTHING
Track everything: from obscure parameters to weird metrics, it's impossible to know what will end up
improving your results later on!
- Make sure experiments are reproducible! ClearML logs code, parameters, and environment in a single, easily searchable place.
- Development is not linear. Configuration / Parameters should not be stored in your git, as


@@ -2,7 +2,7 @@
title: Next Steps
---
So, you've already [installed ClearML's python package](ds_first_steps.md) and run your first experiment!
Now, you'll learn how to track hyperparameters, artifacts, and metrics!
@@ -19,7 +19,7 @@ or project & name combination. It's also possible to query tasks based on their
```python
prev_task = Task.get_task(task_id='123456deadbeef')
```
Once you have a Task object you can query the state of the Task, get its model, scalars, parameters, etc.
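For example, a few of the Task object's query methods look like this (a minimal sketch; the task ID is a placeholder and a configured ClearML setup is assumed, so it is not run here):

```python
from clearml import Task

# Fetch a previously executed task by its ID (placeholder ID)
prev_task = Task.get_task(task_id='123456deadbeef')

# Query its state, hyperparameters, and reported scalars
status = prev_task.get_status()                 # e.g. 'completed'
params = prev_task.get_parameters()             # dict of logged hyperparameters
scalars = prev_task.get_last_scalar_metrics()   # last reported metric values
```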
## Log Hyperparameters
@@ -40,7 +40,7 @@ Check [this](../../fundamentals/hyperparameters.md) out for all Hyperparameter l
## Log Artifacts
ClearML lets you easily store the output products of an experiment: model snapshots / weights files, preprocessed data, feature representations, and more!
Essentially, artifacts are files (or python objects) uploaded from a script and are stored alongside the Task.
These artifacts can be easily accessed from the web UI or programmatically.
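Programmatic access to a stored artifact might look like this (a sketch assuming a task that already logged an artifact named `data`; the task ID is a placeholder and a configured ClearML setup is assumed, so it is not run here):

```python
from clearml import Task

prev_task = Task.get_task(task_id='123456deadbeef')  # placeholder ID

# Download the artifact file and get its local path (downloads are cached)
local_csv = prev_task.artifacts['data'].get_local_copy()

# Or, where supported, retrieve the artifact directly as a Python object
data_obj = prev_task.artifacts['data'].get()
```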
@@ -56,12 +56,12 @@ Uploading a local file containing the preprocessed results of the data:
```python
task.upload_artifact('/path/to/preprocess_data.csv', name='data')
```
You can also upload an entire folder with all its content by passing the folder (the folder will be zipped and uploaded as a single zip file).
```python
task.upload_artifact('/path/to/folder/', name='folder')
```
Lastly, you can upload an instance of an object; NumPy/Pandas/PIL images are supported with npz/csv.gz/jpg formats respectively.
If the object type is unknown ClearML pickles it and uploads the pickle file.
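The pickle fallback can be illustrated without a ClearML server. The following is a minimal local sketch of the same round trip (the object and file name are made up for illustration):

```python
import os
import pickle
import tempfile

# An arbitrary Python object with no dedicated serialization format
obj = {"experiment": "demo", "scores": [0.91, 0.93]}

# ClearML would pickle such an object to a file and upload it as an artifact;
# here we just perform the pickle round trip locally.
path = os.path.join(tempfile.mkdtemp(), "artifact.pkl")
with open(path, "wb") as f:
    pickle.dump(obj, f)

with open(path, "rb") as f:
    restored = pickle.load(f)

print(restored == obj)  # the object survives the round trip unchanged
```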
```python
@@ -128,11 +128,11 @@ local_weights_path = last_snapshot.get_local_copy()
As before, you have to get the instance of the Task that trained the original weights file; then you can query the task for its output models (a list of snapshots), and get the latest snapshot.
:::note
When using TensorFlow, snapshots are stored in a folder, meaning `local_weights_path` will point to a folder containing your requested snapshot.
:::
As with artifacts, all models are cached, meaning the next time you run this code, no model needs to be downloaded.
Once one of the frameworks loads the weights file, the running Task will automatically be updated with "Input Model" pointing directly to the original training Task's model.
This feature lets you easily get a full genealogy of every model trained and used by your system!
## Log Metrics