Small edits (#1046)

Author: pollfly
Date: 2025-02-20 09:32:26 +02:00
Committed by: GitHub
parent 60638ce467
commit 15416bb43d
8 changed files with 40 additions and 40 deletions


@@ -3,8 +3,8 @@ title: ClearML Modules
 ---
 - [**ClearML Python Package**](../getting_started/ds/ds_first_steps.md#install-clearml) (`clearml`) for integrating ClearML into your existing code-base.
-- [**ClearML Server**](../deploying_clearml/clearml_server.md) (`clearml-server`) for storing experiment, model, and workflow data, and supporting the Web UI experiment manager. It is also the control plane for the MLOps.
-- [**ClearML Agent**](../clearml_agent.md) (`clearml-agent`), the MLOps orchestration agent. Enabling experiment and workflow reproducibility, and scalability.
+- [**ClearML Server**](../deploying_clearml/clearml_server.md) (`clearml-server`) for storing task, model, and workflow data, and supporting the Web UI experiment manager. It is also the control plane for the MLOps.
+- [**ClearML Agent**](../clearml_agent.md) (`clearml-agent`), the MLOps orchestration agent. Enabling task and workflow reproducibility, and scalability.
 - [**ClearML Data**](../clearml_data/clearml_data.md) (`clearml-data`) data management and versioning on top of file-systems/object-storage.
 - [**ClearML Serving**](../clearml_serving/clearml_serving.md) (`clearml-serving`) for model deployment and orchestration.
 - [**ClearML Session**](../apps/clearml_session.md) (`clearml-session`) for launching remote instances of Jupyter Notebooks and VSCode.


@@ -44,7 +44,7 @@ pip install clearml
 CLEARML_CONFIG_FILE = MyOtherClearML.conf
 ```
-For more information about running experiments inside Docker containers, see [ClearML Agent Deployment](../../clearml_agent/clearml_agent_deployment.md)
+For more information about running tasks inside Docker containers, see [ClearML Agent Deployment](../../clearml_agent/clearml_agent_deployment.md)
 and [ClearML Agent Reference](../../clearml_agent/clearml_agent_ref.md).
 </Collapsible>
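The environment-variable override this hunk documents can be sketched outside the diff. The variable name and file name are taken from the hunk above; everything else is illustrative:

```python
import os

# Sketch of the override described above: the ClearML SDK consults the
# CLEARML_CONFIG_FILE environment variable to locate an alternate
# configuration file (file name taken from the hunk above).
os.environ["CLEARML_CONFIG_FILE"] = "MyOtherClearML.conf"

# Any clearml import after this point would read MyOtherClearML.conf
# instead of the default configuration file.
print(os.environ["CLEARML_CONFIG_FILE"])
```

Setting the variable before the `clearml` import (or in the shell that launches the script) is what lets one machine juggle several configurations.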


@@ -2,14 +2,14 @@
 title: Next Steps
 ---
-So, you've already [installed ClearML's Python package](ds_first_steps.md) and run your first experiment!
+So, you've already [installed ClearML's Python package](ds_first_steps.md) and run your first task!
 Now, you'll learn how to track Hyperparameters, Artifacts, and Metrics!
-## Accessing Experiments
+## Accessing Tasks
 Every previously executed experiment is stored as a Task.
-A Task's project and name can be changed after the experiment has been executed.
+A Task's project and name can be changed after it has been executed.
 A Task is also automatically assigned an auto-generated unique identifier (UUID string) that cannot be changed and always locates the same Task in the system.
 Retrieve a Task object programmatically by querying the system based on either the Task ID,
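The two lookup routes the hunk describes can be sketched as follows. The `clearml` calls are commented out so the sketch runs standalone; the ID and names are hypothetical:

```python
# Sketch of the two documented lookups for a Task object:
# from clearml import Task
#
# # By the immutable auto-generated ID:
# task = Task.get_task(task_id="6ea4f0b56d994320a713aeaf13a86d9d")
#
# # Or by project and task name:
# task = Task.get_task(project_name="examples", task_name="my task")

# The ID is the stable handle: a Task's project and name may change later,
# but the UUID-style hex string never does.
task_id = "6ea4f0b56d994320a713aeaf13a86d9d"  # hypothetical
print(len(task_id))
```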
@@ -23,8 +23,8 @@ Once you have a Task object you can query the state of the Task, get its model(s
 ## Log Hyperparameters
-For full reproducibility, it's paramount to save hyperparameters for each experiment. Since hyperparameters can have substantial impact
-on model performance, saving and comparing these between experiments is sometimes the key to understanding model behavior.
+For full reproducibility, it's paramount to save each task's hyperparameters. Since hyperparameters can have substantial impact
+on model performance, saving and comparing them between tasks is sometimes the key to understanding model behavior.
 ClearML supports logging `argparse` module arguments out of the box, so once ClearML is integrated into the code, it automatically logs all parameters provided to the argument parser.
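The `argparse` auto-logging mentioned above can be sketched as follows. The `Task.init` call is commented out so the sketch runs without a ClearML server; the project/task names and arguments are hypothetical:

```python
import argparse

# Once Task.init() has run, ClearML hooks argparse and records every parsed
# argument as a task hyperparameter. Commented out to run standalone:
# from clearml import Task
# task = Task.init(project_name="examples", task_name="argparse demo")

parser = argparse.ArgumentParser()
parser.add_argument("--lr", type=float, default=0.01)
parser.add_argument("--batch-size", type=int, default=32)
args = parser.parse_args([])  # empty list -> use the defaults

# With ClearML active, lr and batch_size would now appear in the task's
# hyperparameters, no extra logging code required.
print(args.lr, args.batch_size)
```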
@@ -40,7 +40,7 @@ See [Configuration](../../clearml_sdk/task_sdk.md#configuration) for all hyperpa
 ## Log Artifacts
-ClearML lets you easily store the output products of an experiment - Model snapshot / weights file, a preprocessing of your data, feature representation of data and more!
+ClearML lets you easily store the output products of a task: Model snapshot / weights file, a preprocessing of your data, feature representation of data and more!
 Essentially, artifacts are files (or Python objects) uploaded from a script and are stored alongside the Task.
 These artifacts can be easily accessed by the web UI or programmatically.
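The artifact upload described here can be sketched as follows. The `clearml` calls are commented out so the sketch runs standalone; the project, task, and artifact names are illustrative:

```python
# Sketch: upload_artifact stores files or Python objects alongside the Task.
# from clearml import Task
# task = Task.init(project_name="examples", task_name="artifacts demo")

preprocessing_stats = {"mean": 0.1307, "std": 0.3081}  # e.g. dataset statistics
# task.upload_artifact(name="normalization stats",
#                      artifact_object=preprocessing_stats)

# Later, from any machine, the artifact can be fetched programmatically:
# stats = Task.get_task(task_id="...").artifacts["normalization stats"].get()
print(sorted(preprocessing_stats))
```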
@@ -107,9 +107,9 @@ task = Task.init(
 )
 ```
-Now, whenever the framework (TensorFlow/Keras/PyTorch etc.) stores a snapshot, the model file is automatically uploaded to the bucket to a specific folder for the experiment.
+Now, whenever the framework (TensorFlow/Keras/PyTorch etc.) stores a snapshot, the model file is automatically uploaded to the bucket to a specific folder for the task.
-Loading models by a framework is also logged by the system; these models appear in an experiment's **Artifacts** tab,
+Loading models by a framework is also logged by the system; these models appear in a task's **Artifacts** tab,
 under the "Input Models" section.
 Check out model snapshots examples for [TensorFlow](https://github.com/clearml/clearml/blob/master/examples/frameworks/tensorflow/tensorflow_mnist.py),
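The `Task.init` call this hunk refers to can be sketched in full. The call is commented out to run standalone; the project, task name, and bucket path are hypothetical:

```python
# Sketch: output_uri points automatic model snapshots at remote storage, so
# every framework checkpoint is uploaded to a per-task folder under it.
# from clearml import Task
# task = Task.init(
#     project_name="examples",
#     task_name="training",
#     output_uri="s3://my-bucket/models",  # hypothetical bucket
# )

output_uri = "s3://my-bucket/models"  # hypothetical
print(output_uri.startswith("s3://"))
```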
@@ -149,7 +149,7 @@ You can log everything, from time series data and confusion matrices to HTML, Au
 Once everything is neatly logged and displayed, use the [comparison tool](../../webapp/webapp_exp_comparing.md) to find the best configuration!
-## Track Experiments
+## Track Tasks
 The task table is a powerful tool for creating dashboards and views of your own projects, your team's projects, or the entire development.
@@ -163,13 +163,13 @@ You can filter and sort based on parameters and metrics, so creating custom view
 Create a dashboard for a project, presenting the latest Models and their accuracy scores, for immediate insights.
-It can also be used as a live leaderboard, showing the best performing experiments' status, updated in real time.
+It can also be used as a live leaderboard, showing the best performing tasks' status, updated in real time.
 This is helpful to monitor your projects' progress, and to share it across the organization.
-Any page is sharable by copying the URL from the address bar, allowing you to bookmark leaderboards or to send an exact view of a specific experiment or a comparison page.
+Any page is sharable by copying the URL from the address bar, allowing you to bookmark leaderboards or to send an exact view of a specific task or a comparison page.
-You can also tag Tasks for visibility and filtering allowing you to add more information on the execution of the experiment.
-Later you can search based on task name in the search bar, and filter experiments based on their tags, parameters, status, and more.
+You can also tag Tasks for visibility and filtering allowing you to add more information on the execution of the task.
+Later you can search based on task name in the search bar, and filter tasks based on their tags, parameters, status, and more.
 ## What's Next?
@@ -181,7 +181,7 @@ or check these pages out:
 - Scale your work and deploy [ClearML Agents](../../clearml_agent.md)
 - Develop on remote machines with [ClearML Session](../../apps/clearml_session.md)
 - Structure your work and put it into [Pipelines](../../pipelines/pipelines.md)
-- Improve your experiments with [Hyperparameter Optimization](../../fundamentals/hpo.md)
+- Improve your tasks with [Hyperparameter Optimization](../../fundamentals/hpo.md)
 - Check out ClearML's integrations with your favorite ML frameworks like [TensorFlow](../../integrations/tensorflow.md),
 [PyTorch](../../integrations/pytorch.md), [Keras](../../integrations/keras.md),
 and more


@@ -109,10 +109,10 @@ Want a more in depth introduction to ClearML? Choose where you want to get start
 - [Track and upload](../fundamentals/task.md) metrics and models with only 2 lines of code
 - [Reproduce](../webapp/webapp_exp_reproducing.md) tasks with 3 mouse clicks
-- [Create bots](../guides/services/slack_alerts.md) that send you Slack messages based on experiment behavior (for example,
+- [Create bots](../guides/services/slack_alerts.md) that send you Slack messages based on task behavior (for example,
 alert you whenever your model improves in accuracy)
 - Manage your [data](../clearml_data/clearml_data.md) - store, track, and version control
-- Remotely execute experiments on any compute resource you have available with [ClearML Agent](../clearml_agent.md)
+- Remotely execute tasks on any compute resource you have available with [ClearML Agent](../clearml_agent.md)
 - Automatically scale cloud instances according to your resource needs with ClearML's
 [AWS Autoscaler](../webapp/applications/apps_aws_autoscaler.md) and [GCP Autoscaler](../webapp/applications/apps_gcp_autoscaler.md)
 GUI applications