Small edits (#787)

pollfly 2024-03-05 12:00:32 +02:00 committed by GitHub
parent f516d08c71
commit 1c45124714
2 changed files with 20 additions and 18 deletions


@@ -53,15 +53,15 @@ See all [storage capabilities](../../integrations/storage.md).
Upload a local file containing the preprocessed results of the data:
```python
-task.upload_artifact('/path/to/preprocess_data.csv', name='data')
+task.upload_artifact(name='data', artifact_object='/path/to/preprocess_data.csv')
```
You can also upload an entire folder with all its content by passing the folder (the folder will be zipped and uploaded as a single zip file).
```python
-task.upload_artifact('/path/to/folder/', name='folder')
+task.upload_artifact(name='folder', artifact_object='/path/to/folder/')
```
-Lastly, you can upload an instance of an object; Numpy/Pandas/PIL Images are supported with npz/csv.gz/jpg formats accordingly.
+Lastly, you can upload an instance of an object; Numpy/Pandas/PIL Images are supported with `npz`/`csv.gz`/`jpg` formats accordingly.
If the object type is unknown, ClearML pickles it and uploads the pickle file.
```python
@@ -69,7 +69,7 @@ numpy_object = np.eye(100, 100)
task.upload_artifact(name='features', artifact_object=numpy_object)
```
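As a minimal sketch of the pickle fallback (the `PreprocessingConfig` class below is a made-up example, not part of the ClearML API):
```python
# a plain custom object with no dedicated serializer;
# ClearML falls back to pickling it (hypothetical example class)
class PreprocessingConfig:
    def __init__(self, threshold, bins):
        self.threshold = threshold
        self.bins = bins

task.upload_artifact(name='preprocess_config', artifact_object=PreprocessingConfig(0.5, 10))
```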
-Check out all [artifact logging](../../clearml_sdk/task_sdk.md#artifacts) options.
+For more artifact logging options, see [Artifacts](../../clearml_sdk/task_sdk.md#artifacts).
### Using Artifacts
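For example, a minimal sketch of consuming an uploaded artifact from another task (the task ID below is a placeholder, and `data` is assumed to be the artifact name used above):
```python
from clearml import Task

# fetch the task that produced the artifact (placeholder ID)
producer_task = Task.get_task(task_id='<producer_task_id>')

# download the 'data' artifact and get a local path to it
local_csv = producer_task.artifacts['data'].get_local_copy()
```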
@@ -137,26 +137,26 @@ This feature lets you easily get a full genealogy of every trained and used model
## Log Metrics
Full metrics logging is the key to finding the best performing model!
-By default, everything that's reported to TensorBoard and Matplotlib is automatically captured and logged.
+By default, ClearML automatically captures and logs everything reported to TensorBoard and Matplotlib.
Since not all metrics are tracked that way, you can also manually report metrics using a [`Logger`](../../fundamentals/logger.md) object.
-You can log everything, from time series data to confusion matrices to HTML, Audio and Video, to custom plotly graphs! Everything goes!
+You can log everything, from time series data and confusion matrices to HTML, Audio, and Video, to custom plotly graphs! Everything goes!
-![image](../../img/report_plotly.png)
+![Experiment plots](../../img/report_plotly.png)
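As a minimal sketch of manual reporting (the title, series, and values are arbitrary):
```python
from clearml import Task

logger = Task.current_task().get_logger()

# report a single scalar point on a time series plot
logger.report_scalar(title='loss', series='train', value=0.23, iteration=10)
```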
-Once everything is neatly logged and displayed, using the [comparison tool](../../webapp/webapp_exp_comparing.md) makes it easy to find the best configuration!
+Once everything is neatly logged and displayed, use the [comparison tool](../../webapp/webapp_exp_comparing.md) to find the best configuration!
## Track Experiments
-The experiment table is a powerful tool for creating dashboards and views of your own projects, your team's projects, or the entire development.
+The experiments table is a powerful tool for creating dashboards and views of your own projects, your team's projects, or the entire development.
-![image](../../img/webapp_experiment_table.png)
+![Experiments table](../../img/webapp_experiment_table.png)
### Creating Leaderboards
-Customize the [experiments table](../../webapp/webapp_exp_table.md) to fit your own needs, adding desired views of parameters, metrics and tags.
+Customize the [experiments table](../../webapp/webapp_exp_table.md) to fit your own needs, adding desired views of parameters, metrics, and tags.
You can filter and sort based on parameters and metrics, so creating custom views is simple and flexible.
Create a dashboard for a project, presenting the latest Models and their accuracy scores, for immediate insights.


@@ -10,8 +10,8 @@ Metadata can be customized as needed using **meta** dictionaries:
### Adding Frame Metadata
-When instantiating a Frame, metadata that applies for the entire frame can be
-added as an argument.
+When instantiating a `SingleFrame`, metadata that applies to the entire frame can be
+added as an argument:
```python
from allegroai import SingleFrame
@@ -30,11 +30,13 @@ frame.metadata['dangerous'] = 'no'
### Adding ROI Metadata
-Metadata can be added to individual ROIs when adding an annotation to a `frame`, using the [`SingleFrame.add_annotation`](../references/hyperdataset/singleframe.md#add_annotation)
-method.
+Metadata can be added to individual ROIs when adding an annotation to a `frame`, using [`SingleFrame.add_annotation()`](../references/hyperdataset/singleframe.md#add_annotation):
```python
-frame.add_annotation(box2d_xywh=(10, 10, 30, 20), labels=['tiger'],
-                     # insert metadata dictionary
-                     metadata={'dangerous':'yes'})
+frame.add_annotation(
+    box2d_xywh=(10, 10, 30, 20),
+    labels=['tiger'],
+    # insert metadata dictionary
+    metadata={'dangerous':'yes'}
+)
```