Mirror of https://github.com/clearml/clearml-docs (synced 2025-03-03 02:32:49 +00:00)
Small edits (#779)
parent 15ac5c2ce6
commit 6fb11e8e0d
@@ -757,11 +757,11 @@ Build a Docker container according to the execution environment of a specific task
 clearml-agent build --id <task-id> --docker --target <new-docker-name>
 ```

-It's possible to add the Docker container as the base Docker image to a task (experiment), using one of the following methods:
+You can add the Docker container as the base Docker image to a task (experiment), using one of the following methods:

 - Using the **ClearML Web UI** - See [Base Docker image](webapp/webapp_exp_tuning.md#base-docker-image) on the "Tuning
 Experiments" page.
-- In the ClearML configuration file - Use the ClearML configuration file [agent.default_docker](configs/clearml_conf.md#agentdefault_docker)
+- In the ClearML configuration file - Use the ClearML configuration file [`agent.default_docker`](configs/clearml_conf.md#agentdefault_docker)
 options.

 Check out [this tutorial](guides/clearml_agent/exp_environment_containers.md) for building a Docker container
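The hunk above covers setting the built container as a task's base Docker image through the UI or `clearml.conf`. For illustration only (not part of this commit), a minimal SDK sketch, assuming `Task.set_base_docker()` accepts the image string in your `clearml` version and using a placeholder image name:

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="docker base image")

# "new-docker-name" is a placeholder for the image built with `clearml-agent build`;
# an agent executing this task would then run it inside that container
task.set_base_docker("new-docker-name")
```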
@@ -36,7 +36,7 @@ lineage and content information. See [dataset UI](../webapp/datasets/webapp_data

 ## Setup

-`clearml-data` comes built-in with the `clearml` python package! Just check out the [Getting Started](../getting_started/ds/ds_first_steps.md)
+`clearml-data` comes built-in with the `clearml` python package! Check out the [Getting Started](../getting_started/ds/ds_first_steps.md)
 guide for more info!

 ## Using ClearML Data
@@ -103,7 +103,7 @@ clearml-data remove [-h] [--id ID] [--files [FILES [FILES ...]]]

 ## upload

-Upload the local dataset changes to the server. By default, it's uploaded to the [ClearML Server](../deploying_clearml/clearml_server.md). It's possible to specify a different storage
+Upload the local dataset changes to the server. By default, it's uploaded to the [ClearML Server](../deploying_clearml/clearml_server.md). You can specify a different storage
 medium by entering an upload destination, such as `s3://bucket`, `gs://`, `azure://`, `/mnt/shared/`.

 ```bash
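The same upload-destination override exists in the SDK's `Dataset` class; a rough sketch with a placeholder dataset ID and bucket:

```python
from clearml import Dataset

# placeholders: use your own dataset ID and storage destination
dataset = Dataset.get(dataset_id="<dataset-id>")
dataset.upload(output_url="s3://my-bucket/datasets")  # defaults to the ClearML file server when omitted
dataset.finalize()
```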
@@ -29,7 +29,7 @@ the needed files.
 New dataset created id=24d05040f3e14fbfbed8edb1bf08a88c
 ```

-1. Now let's add a folder. File addition is recursive, so it's enough to point at the folder
+1. Add a folder. File addition is recursive, so it's enough to point at the folder
 to captures all files and sub-folders:

 ```bash
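The `clearml-data add` step has an SDK equivalent; a minimal sketch with placeholder project and dataset names, where `add_files()` recurses into sub-folders much like the CLI:

```python
from clearml import Dataset

# placeholder names; creates a new dataset version
dataset = Dataset.create(dataset_name="my dataset", dataset_project="datasets")
dataset.add_files(path="data_folder")  # recursive: captures all files and sub-folders
dataset.upload()
dataset.finalize()
```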
@@ -171,7 +171,7 @@ In order to mitigate the clutter that a multitude of debugging tasks might create
 the ClearML configuration reference)
 * The previous task execution did not have any artifacts / models

-It's possible to always create a new task by passing `reuse_last_task_id=False`.
+You can always create a new task by passing `reuse_last_task_id=False`.

 See full `Task.init` reference [here](../references/sdk/task.md#taskinit).

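A one-line illustration of the `reuse_last_task_id=False` flag mentioned above (project and task names are placeholders):

```python
from clearml import Task

# force a brand-new task even if the previous run produced no artifacts/models
task = Task.init(
    project_name="examples",
    task_name="my experiment",
    reuse_last_task_id=False,
)
```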
@@ -267,7 +267,7 @@ For example:
 a_task = Task.get_task(project_name='examples', task_name='artifacts')
 ```

-Once a task object is obtained, it's possible to query the state of the task, reported scalars, etc.
+Once a task object is obtained, you can query the state of the task, reported scalars, etc.
 The task's outputs, such as artifacts and models, can also be retrieved.

 ## Querying / Searching Tasks
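As a rough sketch of what "query the state of the task, reported scalars, etc." can look like in code (method names assume a recent `clearml` SDK):

```python
from clearml import Task

a_task = Task.get_task(project_name='examples', task_name='artifacts')

print(a_task.get_status())               # e.g. "completed"
scalars = a_task.get_reported_scalars()  # nested dict of graph -> series -> values
print(list(a_task.artifacts.keys()))     # retrievable outputs
```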
@@ -708,7 +708,7 @@ local_csv = preprocess_task.artifacts['data'].get_local_copy()
 See more details in the [Using Artifacts example](https://github.com/allegroai/clearml/blob/master/examples/reporting/using_artifacts_example.py).

 ## Models
-The following is an overview of working with models through a `Task` object. It is also possible to work directly with model
+The following is an overview of working with models through a `Task` object. You can also work directly with model
 objects (see [Models (SDK)](model_sdk.md)).

 ### Logging Models Manually
@@ -737,7 +737,7 @@ The snapshots of manually uploaded models aren't automatically captured. To update
 task.update_output_model(model_path='path/to/model')
 ```

-It's possible to modify the following parameters:
+You can modify the following parameters:
 * Model location
 * Model name
 * Model description
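A hedged sketch of updating those parameters via `Task.update_output_model()`; the `name` and `comment` keyword names are assumptions and may differ between SDK versions:

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="manual model logging")

# assumed keywords: `name` for the model name, `comment` for its description
task.update_output_model(
    model_path="path/to/model",  # model location (local file or remote URI)
    name="my model",
    comment="baseline model trained on v1 data",
)
```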
@@ -105,7 +105,7 @@ You can set up Kubernetes' cluster autoscaler to work with your cloud providers,
 your Kubernetes cluster as needed; increasing the amount of nodes when there aren't enough to execute pods and removing
 underutilized nodes. See [charts](https://github.com/kubernetes/autoscaler/tree/master/charts) for specific cloud providers.

-:::note Enterprise features
+:::important Enterprise features
 The ClearML Enterprise plan supports K8S servicing multiple ClearML queues, as well as providing a pod template for each
 queue for describing the resources for each pod to use. See [ClearML Helm Charts](https://github.com/allegroai/clearml-helm-charts/tree/main).
 :::
@@ -57,7 +57,7 @@ help maintainers reproduce the problem:
 * **Provide specific examples to demonstrate the steps.** Include links to files or GitHub projects, or copy / paste snippets which you use in those examples.
 * **If you are reporting any ClearML crash,** include a crash report with a stack trace from the operating system. Make
 sure to add the crash report in the issue and place it in a [code block](https://docs.github.com/en/github/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks),
-a [file attachment](https://help.github.com/articles/file-attachments-on-issues-and-pull-requests), or just put it in
+a [file attachment](https://help.github.com/articles/file-attachments-on-issues-and-pull-requests), or put it in
 a [gist](https://gist.github.com) (and provide a link to that gist).
 * **Describe the behavior you observed after following the steps** and the exact problem with that behavior.
 * **Explain which behavior you expected to see and why.**
@@ -413,7 +413,7 @@ ___

 **`agent.match_rules`** (*[dict]*)

-:::note Enterprise Feature
+:::important Enterprise Feature
 This feature is available under the ClearML Enterprise plan
 :::

@@ -1437,7 +1437,7 @@ sdk {

 ## Configuration Vault

-:::note Enterprise Feature
+:::important Enterprise Feature
 This feature is available under the ClearML Enterprise plan
 :::

@@ -382,7 +382,7 @@ options.

 ### Custom UI Context Menu Actions

-:::note Enterprise Feature
+:::important Enterprise Feature
 This feature is available under the ClearML Enterprise plan
 :::

docs/faq.md (12 changed lines)
@@ -204,10 +204,10 @@ See server upgrade instructions for any of the available formats:

 #### Can I log input and output models manually? <a id="manually-log-models"></a>

-Yes! Use the [`InputModel.import_model`](references/sdk/model_inputmodel.md#inputmodelimport_model)
-and [`Task.connect`](references/sdk/task.md#connect) methods to manually connect an input model. Use the
-[`OutputModel.update_weights`](references/sdk/model_outputmodel.md#update_weights)
-method to manually connect a model weights file.
+Yes! Use [`InputModel.import_model()`](references/sdk/model_inputmodel.md#inputmodelimport_model)
+and [`Task.connect()`](references/sdk/task.md#connect) to connect an input model. Use
+[`OutputModel.update_weights()`](references/sdk/model_outputmodel.md#update_weights)
+to connect a model weights file.

 ```python
 input_model = InputModel.import_model(link_to_initial_model_file)
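Expanding on the snippet the hunk leads into, a sketch of both directions; the weights URL and file paths are placeholders:

```python
from clearml import Task, InputModel, OutputModel

task = Task.init(project_name="examples", task_name="model io")

# register an existing weights file/URL as this task's input model (placeholder URL)
input_model = InputModel.import_model(weights_url="https://example.com/model.pkl")
task.connect(input_model)

# register a locally produced weights file as the task's output model (placeholder path)
output_model = OutputModel(task=task)
output_model.update_weights(weights_filename="path/to/model.pkl")
```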
@@ -801,8 +801,8 @@ To fix this issue, you could import the `time` package and add a `time.sleep(20)

 #### Can I use ClearML with scikit-learn? <a id="use-scikit-learn"></a>

-Yes! `scikit-learn` is supported. Everything you do is logged. ClearML automatically logs models which are stored using `joblib`.
-See the scikit-learn examples with [Matplotlib](guides/frameworks/scikit-learn/sklearn_matplotlib_example.md) and [Joblib](guides/frameworks/scikit-learn/sklearn_joblib_example.md).
+Yes! `scikit-learn` is supported. ClearML automatically logs models which are stored using `joblib`.
+For more information, see [scikit-learn](integrations/scikit_learn.md).

 ## ClearML Configuration

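A minimal end-to-end sketch of the `joblib` auto-logging described above (the dataset, model, and names are illustrative):

```python
import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from clearml import Task

task = Task.init(project_name="examples", task_name="sklearn joblib demo")

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=200).fit(X, y)

# saving with joblib is enough for ClearML to capture the model snapshot
joblib.dump(model, "model.pkl")
```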
@@ -11,7 +11,7 @@ hyperparameters and results can be saved and compared, which is key to understanding

 ClearML lets you easily try out different hyperparameter values without changing your original code. ClearML's [execution
 agent](../clearml_agent.md) will override the original values with any new ones you specify through the web UI (see
-[Configuration](../webapp/webapp_exp_tuning.md#configuration) in the Tuning Experiments page). It's also possible to
+[Configuration](../webapp/webapp_exp_tuning.md#configuration) in the Tuning Experiments page). You can also
 programmatically set experiment parameters.

 ## Tracking Hyperparameters
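One way to "programmatically set experiment parameters", sketched with placeholder values and assuming the `Task.set_parameters()` / `set_parameter()` methods in your SDK version:

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="hyperparameters")

# set several parameters at once, or one at a time (values are placeholders)
task.set_parameters({"General/learning_rate": 0.001, "General/batch_size": 64})
task.set_parameter("General/epochs", 10)
```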
@@ -50,7 +50,7 @@ parameter specifying parameters to log.
 log_os_environments: ["AWS_*", "CUDA_VERSION"]
 ```

-It's also possible to specify environment variables using the `CLEARML_LOG_ENVIRONMENT` variable.
+You can also specify environment variables using the `CLEARML_LOG_ENVIRONMENT` variable.

 :::note Overriding clearml.conf
 The `CLEARML_LOG_ENVIRONMENT` always overrides the `clearml.conf` file.
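A hedged sketch of the environment-variable route; the comma-separated, wildcard-friendly value format shown here is an assumption, so check it against the configuration reference:

```python
import os

# assumed format: comma-separated names, wildcards allowed ("*" would log everything);
# set before ClearML initializes the task
os.environ["CLEARML_LOG_ENVIRONMENT"] = "AWS_*,CUDA_VERSION"

from clearml import Task

task = Task.init(project_name="examples", task_name="env logging")
```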
@@ -23,7 +23,7 @@ how to group tasks, though different models or objectives are usually grouped in
 Tasks can be accessed and utilized with code. [Access a task](../clearml_sdk/task_sdk.md#accessing-tasks) by
 specifying project name and task name combination or by a unique ID.

-It's possible to create copies of a task ([clone](../webapp/webapp_exp_reproducing.md)) then execute them with
+You can create copies of a task ([clone](../webapp/webapp_exp_reproducing.md)) then execute them with
 [ClearML Agent](../clearml_agent.md). When an agent executes a task, it uses the specified configuration to:

 * Install required Python packages
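The clone-then-execute flow described above, as a short SDK sketch (project, task, and queue names are placeholders):

```python
from clearml import Task

template = Task.get_task(project_name="examples", task_name="my experiment")

# create an editable copy and hand it to an agent listening on the "default" queue
cloned = Task.clone(source_task=template, name="my experiment (cloned)")
Task.enqueue(task=cloned, queue_name="default")
```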
@@ -60,7 +60,7 @@ The captured [execution output](../webapp/webapp_exp_track_visual.md#experiment-
 * [Debug samples](../webapp/webapp_exp_track_visual.md#debug-samples)
 * [Models](artifacts.md)

-To view a more in depth description of each task section, see [Tracking Experiments and Visualizing Results](../webapp/webapp_exp_track_visual.md).
+For a more in-depth description of each task section, see [Tracking Experiments and Visualizing Results](../webapp/webapp_exp_track_visual.md).

 ### Execution Configuration
 ClearML logs a task's hyperparameters specified as command line arguments, environment or code level variables. This
@@ -115,7 +115,7 @@ they are attached to, and then retrieving the artifact with one of its following
 See more details in the [Using Artifacts example](https://github.com/allegroai/clearml/blob/master/examples/reporting/using_artifacts_example.py).

 ## Task Types
-Tasks have a *type* attribute, which denotes their purpose (e.g. training / testing / data processing). This helps to further
+Tasks have a *type* attribute, which denotes their purpose. This helps to further
 organize projects and ensure tasks are easy to [search and find](../clearml_sdk/task_sdk.md#querying--searching-tasks).
 Available task types are:
 * *training* (default) - Training a model
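Setting the *type* attribute at task creation, for illustration (assuming the `Task.TaskTypes` enum; names are placeholders):

```python
from clearml import Task

task = Task.init(
    project_name="examples",
    task_name="prepare dataset",
    task_type=Task.TaskTypes.data_processing,  # defaults to "training" when omitted
)
```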
@@ -27,9 +27,9 @@ The goal of this phase is to get a code, dataset, and environment set up, so you
 - [ClearML SDK](../../clearml_sdk/clearml_sdk.md) should be integrated into your code (check out [Getting Started](ds_first_steps.md)).
 This helps visualizing the results and tracking progress.
 - [ClearML Agent](../../clearml_agent.md) helps moving your work to other machines without the hassle of rebuilding the environment every time,
-while also creating an easy queue interface that easily lets you just drop your experiments to be executed one by one
+while also creating an easy queue interface that easily lets you drop your experiments to be executed one by one
 (great for ensuring that the GPUs are churning during the weekend).
-- [ClearML Session](../../apps/clearml_session.md) helps with developing on remote machines, just like you'd develop on your local laptop!
+- [ClearML Session](../../apps/clearml_session.md) helps with developing on remote machines, in the same way that you'd develop on your local laptop!

 ## Train Remotely

@@ -66,7 +66,7 @@ improving your results later on!

 ## Visibility Matters

-While it's possible to track experiments with one tool, and pipeline them with another, having
+While you can track experiments with one tool, and pipeline them with another, having
 everything under the same roof has its benefits!

 Being able to track experiment progress and compare experiments, and, based on that, send experiments to execution on remote
@@ -12,8 +12,8 @@ Every previously executed experiment is stored as a Task.
 A Task's project and name can be changed after the experiment has been executed.
 A Task is also automatically assigned an auto-generated unique identifier (UUID string) that cannot be changed and always locates the same Task in the system.

-It's possible to retrieve a Task object programmatically by querying the system based on either the Task ID,
-or project and name combination. It's also possible to query tasks based on their properties, like tags (see [Querying Tasks](../../clearml_sdk/task_sdk.md#querying--searching-tasks)).
+Retrieve a Task object programmatically by querying the system based on either the Task ID,
+or project and name combination. You can also query tasks based on their properties, like tags (see [Querying Tasks](../../clearml_sdk/task_sdk.md#querying--searching-tasks)).

 ```python
 prev_task = Task.get_task(task_id='123456deadbeef')
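And a sketch of the tag-based query mentioned in the new wording; the `tags` argument is assumed to be available in recent `clearml` versions, and the project and tag names are placeholders:

```python
from clearml import Task

# all tasks in the project carrying the "best" tag
tasks = Task.get_tasks(project_name="examples", tags=["best"])
for t in tasks:
    print(t.id, t.name)
```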
@@ -28,7 +28,7 @@ on model performance, saving and comparing these between experiments is sometimes

 ClearML supports logging `argparse` module arguments out of the box, so once ClearML is integrated into the code, it automatically logs all parameters provided to the argument parser.

-It's also possible to log parameter dictionaries (very useful when parsing an external config file and storing as a dict object),
+You can also log parameter dictionaries (very useful when parsing an external config file and storing as a dict object),
 whole configuration files, or even custom objects or [Hydra](https://hydra.cc/docs/intro/) configurations!

 ```python
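A short sketch of the dictionary and configuration-file logging mentioned above (values and file path are placeholders):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="parameter logging")

params = {"batch_size": 64, "learning_rate": 0.001}
params = task.connect(params)  # returns the values, possibly overridden by the UI/agent

config_path = task.connect_configuration("path/to/config.yaml")  # logs the whole file
```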
@@ -139,9 +139,9 @@ This feature lets you easily get a full genealogy of every trained and used model
 Full metrics logging is the key to finding the best performing model!
 By default, everything that's reported to TensorBoard and Matplotlib is automatically captured and logged.

-Since not all metrics are tracked that way, it's also possible to manually report metrics using a [`Logger`](../../fundamentals/logger.md) object.
+Since not all metrics are tracked that way, you can also manually report metrics using a [`Logger`](../../fundamentals/logger.md) object.

-It's possible to log everything, from time series data to confusion matrices to HTML, Audio and Video, to custom plotly graphs! Everything goes!
+You can log everything, from time series data to confusion matrices to HTML, Audio and Video, to custom plotly graphs! Everything goes!

 

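A minimal manual-reporting sketch using the `Logger` object referenced above (titles, series, and values are illustrative):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="manual reporting")
logger = task.get_logger()

logger.report_scalar(title="loss", series="train", value=0.26, iteration=100)
logger.report_text("finished epoch 1")
```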
@@ -157,7 +157,7 @@ The experiment table is a powerful tool for creating dashboards and views of your

 ### Creating Leaderboards
 Customize the [experiments table](../../webapp/webapp_exp_table.md) to fit your own needs, adding desired views of parameters, metrics and tags.
-It's possible to filter and sort based on parameters and metrics, so creating custom views is simple and flexible.
+You can filter and sort based on parameters and metrics, so creating custom views is simple and flexible.

 Create a dashboard for a project, presenting the latest Models and their accuracy scores, for immediate insights.

@@ -166,7 +166,7 @@ This is helpful to monitor your projects' progress, and to share it across the organization

 Any page is sharable by copying the URL from the address bar, allowing you to bookmark leaderboards or to send an exact view of a specific experiment or a comparison page.

-It's also possible to tag Tasks for visibility and filtering allowing you to add more information on the execution of the experiment.
+You can also tag Tasks for visibility and filtering allowing you to add more information on the execution of the experiment.
 Later you can search based on task name in the search bar, and filter experiments based on their tags, parameters, status, and more.

 ## What's Next?
@@ -26,7 +26,7 @@ required python packages, and execute and monitor the process.

 ## Set up an Agent

-1. Let's install the agent!
+1. Install the agent:

 ```bash
 pip install clearml-agent
@@ -42,7 +42,7 @@ required python packages, and execute and monitor the process.
 If you've already created credentials, you can copy-paste the default agent section from [here](https://github.com/allegroai/clearml-agent/blob/master/docs/clearml.conf#L15) (this is optional. If the section is not provided the default values will be used)
 :::

-1. Start the agent's daemon and assign it to a [queue](../../fundamentals/agents_and_queues.md#what-is-a-queue).
+1. Start the agent's daemon and assign it to a [queue](../../fundamentals/agents_and_queues.md#what-is-a-queue):

 ```bash
 clearml-agent daemon --queue default
@@ -214,7 +214,7 @@ if __name__ == '__main__':
 ```

 :::tip RUN PIPELINE CONTROLLER LOCALLY
-It is possible to run the pipeline logic itself locally, while keeping the pipeline components execution remote
+You can run the pipeline logic locally, while keeping the pipeline components execution remote
 (enqueued and executed by the clearml-agent). Pass `pipeline_execution_queue=None` to the `@PipelineDecorator.pipeline` decorator.
 ```python
 @PipelineDecorator.pipeline(
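Continuing the truncated snippet, a sketch of a decorator carrying `pipeline_execution_queue=None` (the name, project, and version values are placeholders):

```python
from clearml import PipelineDecorator

@PipelineDecorator.pipeline(
    name="my pipeline",
    project="examples",
    version="0.1",
    pipeline_execution_queue=None,  # run the controller logic locally; components stay remote
)
def main():
    pass
```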
@@ -62,7 +62,7 @@ For more information about how autoscalers work, see [Autoscalers Overview](../.

 

-:::note Enterprise Feature
+:::important Enterprise Feature
 You can utilize the [configuration vault](../../webapp/webapp_profile.md#configuration-vault) to configure GCP
 credentials for the Autoscaler in the following format:

@@ -498,7 +498,7 @@ The **USAGE & BILLING** section displays your ClearML workspace usage information
 

 To add users to your workspace, click **INVITE USERS** in the **USERS** section. This will redirect you to the
-**USER MANAGEMENT** page, where you can invite users (see details [here](#inviting-new-teammates))
+**USER MANAGEMENT** page, where you can invite users (see details [here](#inviting-new-teammates)).

 ### ClearML Pro
