Mirror of https://github.com/clearml/clearml-docs, synced 2025-06-22 17:56:07 +00:00
Small edits (#420)
This commit is contained in:
parent
0addbc3549
commit
439d86a46b
@@ -126,7 +126,7 @@ auto_connect_frameworks={'tensorboard': {'report_hparams': False}}
Every `Task.init` call will create a new task for the current execution.
In order to mitigate the clutter that a multitude of debugging tasks might create, a task will be reused if:
* The last time it was executed (on this machine) was under 72 hours ago (configurable, see
-  [`sdk.development.task_reuse_time_window_in_hours`](../configs/clearml_conf.md#task_reuse) of
+  [`sdk.development.task_reuse_time_window_in_hours`](../configs/clearml_conf.md#task_reuse) in
  the ClearML configuration reference)
* The previous task execution did not have any artifacts / models
@@ -46,7 +46,7 @@ solution.
* **Serving Service Task** - Control plane object storing configuration on all the endpoints. Supports multiple separate
  instances, deployed on multiple clusters.

-* **Inference Services** - Inference containers, performing model serving pre/post processing. Also supports CPU model
+* **Inference Services** - Inference containers, performing model serving pre/post-processing. Also supports CPU model
  inferencing.

* **Serving Engine Services** - Inference engine containers (e.g. Nvidia Triton, TorchServe etc.) used by the Inference
@@ -72,7 +72,7 @@ The following page goes over how to set up and upgrade `clearml-serving`.
```

:::note
-Any model that registers with Triton engine will run the pre/post processing code on the Inference service container,
+Any model that registers with Triton engine will run the pre/post-processing code on the Inference service container,
and the model inference itself will be executed on the Triton Engine container.
:::
@@ -414,7 +414,7 @@ match_rules: [
**`agent.package_manager`** (*dict*)

* Dictionary containing the options for the Python package manager. The currently supported package managers are pip, conda,
-  and, if the repository contains a poetry.lock file, poetry.
+  and, if the repository contains a `poetry.lock` file, poetry.
---
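For orientation, a hedged sketch of where this option lives in `clearml.conf` (the chosen value is illustrative, not a stated default):

```
agent {
    package_manager {
        # pip / conda / poetry (poetry is used when a poetry.lock file is present)
        type: pip
    }
}
```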
@@ -90,7 +90,7 @@ optimization.
optimizer = HyperParameterOptimizer(
    # specifying the task to be optimized, task must be in system already so it can be cloned
    base_task_id=TEMPLATE_TASK_ID,
-    # setting the hyper-parameters to optimize
+    # setting the hyperparameters to optimize
    hyper_parameters=[
        UniformIntegerParameterRange('number_of_epochs', min_value=2, max_value=12, step_size=2),
        UniformIntegerParameterRange('batch_size', min_value=2, max_value=16, step_size=2),
@@ -7,11 +7,11 @@ title: Tasks
A Task is a single code execution session, which can represent an experiment, a step in a workflow, a workflow controller,
or any custom implementation you choose.

-To transform an existing script into a **ClearML Task**, one must call the [Task.init()](../references/sdk/task.md#taskinit) method
+To transform an existing script into a **ClearML Task**, one must call the [`Task.init()`](../references/sdk/task.md#taskinit) method
and specify a task name and its project. This creates a Task object that automatically captures code execution
information as well as execution outputs.

-All the information captured by a task is by default uploaded to the [ClearML Server](../deploying_clearml/clearml_server.md)
+All the information captured by a task is by default uploaded to the [ClearML Server](../deploying_clearml/clearml_server.md),
and it can be visualized in the [ClearML WebApp](../webapp/webapp_overview.md) (UI). ClearML can also be configured to upload
model checkpoints, artifacts, and charts to cloud storage (see [Storage](../integrations/storage.md)). Additionally,
you can work with tasks in Offline Mode, in which all information is saved in a local folder (see
@@ -110,7 +110,7 @@ Available task types are:
* *controller* - A task that lays out the logic for other tasks’ interactions, manual or automatic (e.g. a pipeline
  controller)
* *optimizer* - A specific type of controller for optimization tasks (e.g. [hyperparameter optimization](hpo.md))
-* *service* - Long lasting or recurring service (e.g. server cleanup, auto ingress, sync services etc)
+* *service* - Long lasting or recurring service (e.g. server cleanup, auto ingress, sync services etc.)
* *monitor* - A specific type of service for monitoring
* *application* - A task implementing custom applicative logic, like [auto-scaler](../guides/services/aws_autoscaler.md)
  or [clearml-session](../apps/clearml_session.md)
@@ -132,8 +132,8 @@ Now, [command-line arguments](../../fundamentals/hyperparameters.md#tracking-hyp

Sit back, relax, and watch your models converge :) or continue to see what else can be done with ClearML [here](ds_second_steps.md).

-## Youtube Playlist
+## YouTube Playlist

-Or watch the Youtube Getting Started Playlist on our Youtube Channel!
+Or watch the YouTube Getting Started Playlist on our YouTube Channel!

[](https://www.youtube.com/watch?v=bjWwZAzDxTY&list=PLMdIlCuMqSTnoC45ME5_JnsJX0zWqDdlO&index=2)
@@ -181,8 +181,8 @@ or check these pages out:

- Improve your experiments with [HyperParameter Optimization](../../fundamentals/hpo.md)
- Check out ClearML's integrations to [external libraries](../../integrations/libraries.md).

-## Youtube Playlist
+## YouTube Playlist

-All these tips and tricks are also covered by our Youtube Getting Started series, go check it out :)
+All these tips and tricks are also covered by our YouTube Getting Started series, go check it out :)

[](https://www.youtube.com/watch?v=kyOfwVg05EM&list=PLMdIlCuMqSTnoC45ME5_JnsJX0zWqDdlO&index=3)
@@ -64,7 +64,7 @@ Cloning a task duplicates the task’s configuration, but not its outputs.

**To clone an experiment in the ClearML WebApp:**
1. Click on any project card to open its [experiments table](../../webapp/webapp_exp_table.md)
-1. Right click one of the experiments on the table
+1. Right-click one of the experiments on the table
1. Click **Clone** in the context menu, which will open a **CLONE EXPERIMENT** window.
1. Click **CLONE** in the window.
@@ -76,7 +76,7 @@ Docker container image to be used, or change the hyperparameters and configurati
Once you have set up an experiment, it is now time to execute it.

**To execute an experiment through the ClearML WebApp:**
-1. Right click your draft experiment (the context menu is also available through the <img src="/docs/latest/icons/ico-bars-menu.svg" alt="Menu" className="icon size-md space-sm" />
+1. Right-click your draft experiment (the context menu is also available through the <img src="/docs/latest/icons/ico-bars-menu.svg" alt="Menu" className="icon size-md space-sm" />
   button on the top right of the experiment’s info panel)
1. Click **ENQUEUE,** which will open the **ENQUEUE EXPERIMENT** window
1. In the window, select `default` in the queue menu
@@ -27,7 +27,7 @@ clearml-data sync --folder ./from_production
We could also add a Tag `latest` to the Dataset, marking it as the latest version.

### Preprocessing Data
-The second step is to preprocess the date. First we need to access it, then we want to modify it
+The second step is to preprocess the data. First we need to access it, then we want to modify it,
and lastly we want to create a new version of the data.

```python
@@ -15,7 +15,7 @@ which always returns the main Task.
## Hyperparameters

ClearML automatically logs the command line options defined with `argparse`. A parameter dictionary is logged by
-connecting it to the Task using a call to the [Task.connect](../../references/sdk/task.md#connect) method.
+connecting it to the Task using a call to the [`Task.connect`](../../references/sdk/task.md#connect) method.

```python
additional_parameters = {
@@ -38,7 +38,7 @@ The example calls Matplotlib methods to log debug sample images. They appear in
## Hyperparameters

ClearML automatically logs TensorFlow Definitions. A parameter dictionary is logged by connecting it to the Task, by
-calling the [Task.connect](../../../references/sdk/task.md#connect) method.
+calling the [`Task.connect`](../../../references/sdk/task.md#connect) method.

```python
task_params = {'num_scatter_samples': 60, 'sin_max_value': 20, 'sin_steps': 30}
@@ -53,7 +53,7 @@ Text printed to the console for training progress, as well as all other console
## Configuration Objects

-In the experiment code, a configuration dictionary is connected to the Task by calling the [Task.connect](../../../references/sdk/task.md#connect)
+In the experiment code, a configuration dictionary is connected to the Task by calling the [`Task.connect`](../../../references/sdk/task.md#connect)
method.

```python
@@ -33,9 +33,15 @@ By double-clicking a thumbnail, you can view a spectrogram plot in the image v
ClearML automatically logs TensorFlow Definitions. A parameter dictionary is logged by connecting it to the Task using
a call to the [`Task.connect`](../../../../../references/sdk/task.md#connect) method.

-configuration_dict = {'number_of_epochs': 10, 'batch_size': 4, 'dropout': 0.25, 'base_lr': 0.001}
+```python
+configuration_dict = {
+    'number_of_epochs': 10,
+    'batch_size': 4,
+    'dropout': 0.25,
+    'base_lr': 0.001
+}
configuration_dict = task.connect(configuration_dict)  # enabling configuration override by clearml
+```

Parameter dictionaries appear in **CONFIGURATION** **>** **HYPER PARAMETERS** **>** **General**.

@@ -27,7 +27,7 @@ optimizer task's **CONFIGURATION** **>** **HYPER PARAMETERS**.
```python
optimizer = HyperParameterOptimizer(
    base_task_id=TEMPLATE_TASK_ID,  # This is the experiment we want to optimize
-    # here we define the hyper-parameters to optimize
+    # here we define the hyperparameters to optimize
    hyper_parameters=[
        UniformIntegerParameterRange('number_of_epochs', min_value=2, max_value=12, step_size=2),
        UniformIntegerParameterRange('batch_size', min_value=2, max_value=16, step_size=2),
@@ -26,8 +26,12 @@ method.

For example, the raw data is read into a Pandas DataFrame named `train_set`, and the `head` of the DataFrame is reported.

```python
train_set = pd.read_csv(Path(path_to_ShelterAnimal) / 'train.csv')
-Logger.current_logger().report_table(title='ClearMLet - raw',series='pandas DataFrame',iteration=0, table_plot=train_set.head())
+Logger.current_logger().report_table(
+    title='ClearMLet - raw', series='pandas DataFrame', iteration=0, table_plot=train_set.head()
+)
```

The tables appear in **PLOTS**.
@@ -35,12 +39,14 @@ The tables appear in **PLOTS**.

## Hyperparameters

-A parameter dictionary is logged by connecting it to the Task using a call to the [Task.connect](../../../../../references/sdk/task.md#connect)
+A parameter dictionary is logged by connecting it to the Task using a call to the [`Task.connect`](../../../../../references/sdk/task.md#connect)
method.

```python
logger = task.get_logger()
configuration_dict = {'test_size': 0.1, 'split_random_state': 0}
configuration_dict = task.connect(configuration_dict)
```

Parameter dictionaries appear in the **General** subsection.
@@ -50,7 +50,7 @@ The single scalar plot for loss appears in **SCALARS**.

ClearML automatically logs the command line options defined using `argparse`.

-A parameter dictionary is logged by connecting it to the Task using a call to the [Task.connect](../../../references/sdk/task.md#connect)
+A parameter dictionary is logged by connecting it to the Task using a call to the [`Task.connect`](../../../references/sdk/task.md#connect)
method.

```python
@@ -8,6 +8,6 @@ slug: /guides
To help learn and use ClearML, we provide example scripts that demonstrate how to use ClearML's various features.

Example scripts are in the [examples](https://github.com/allegroai/clearml/tree/master/examples) folder of the GitHub `clearml`
-repository. They are also pre-loaded in the **ClearML Server**:
+repository. They are also preloaded in the **ClearML Server**:

Each examples folder in the GitHub ``clearml`` repository contains a ``requirements.txt`` file for example scripts in that folder.
@@ -37,7 +37,7 @@ experiment runs. Some possible destinations include:

* Google Cloud Storage
* Azure Storage

-Specify the output location in the `output_uri` parameter of the [Task.init](../../references/sdk/task.md#taskinit) method.
+Specify the output location in the `output_uri` parameter of the [`Task.init`](../../references/sdk/task.md#taskinit) method.
In this tutorial, we specify a local folder destination.
In `pytorch_mnist_tutorial.py`, change the code from:
@@ -40,7 +40,7 @@ ClearML automatically logs TensorFlow Definitions, whether they are defined befo
flags.DEFINE_string('echo', None, 'Text to echo.')
flags.DEFINE_string('another_str', 'My string', 'A string', module_name='test')

-task = Task.init(project_name='examples', task_name='hyper-parameters example')
+task = Task.init(project_name='examples', task_name='hyperparameters example')

flags.DEFINE_integer('echo3', 3, 'Text to echo.')
@@ -54,7 +54,7 @@ TensorFlow Definitions appear in **HYPER PARAMETERS** **>** **TF_DEFINE**.

## Parameter Dictionaries

-Connect a parameter dictionary to a Task by calling the [Task.connect](../../references/sdk/task.md#connect)
+Connect a parameter dictionary to a Task by calling the [`Task.connect`](../../references/sdk/task.md#connect)
method, and ClearML logs the parameters. ClearML also tracks changes to the parameters.

```python
@@ -53,6 +53,6 @@ ClearML reports these images as debug samples in the **ClearML Web UI**, under t


-Double click a thumbnail, and the image viewer opens.
+Double-click a thumbnail, and the image viewer opens.

@@ -38,7 +38,7 @@ Logger.current_logger().report_media(
)
```

-The reported audio can be viewed in the **DEBUG SAMPLES** tab. Double click a thumbnail, and the audio player opens.
+The reported audio can be viewed in the **DEBUG SAMPLES** tab. Double-click a thumbnail, and the audio player opens.

@@ -55,6 +55,6 @@ Logger.current_logger().report_media(
)
```

-The reported video can be viewed in the **DEBUG SAMPLES** tab. Double click a thumbnail, and the video player opens.
+The reported video can be viewed in the **DEBUG SAMPLES** tab. Double-click a thumbnail, and the video player opens.

@@ -75,7 +75,7 @@ The script supports the following additional command line options:
  Mutually exclusive to `exclude_users`.
* `exclude_users` - Only report tasks that were NOT initiated by these users (usernames and user IDs are accepted).
  Mutually exclusive to `include_users`.
-* `verbose` - If `True`, will increase verbosity of messages (such as when when tasks are polled but filtered away).
+* `verbose` - If `True`, will increase verbosity of messages (such as when tasks are polled but filtered away).

## Configuration
@@ -21,10 +21,12 @@ class. The storage examples include:
To download a ZIP file from storage to the `global` cache context, call the [StorageManager.get_local_copy](../../references/sdk/storage.md#storagemanagerget_local_copy)
method, and specify the file's location as the `remote_url` argument:

+```python
# create a StorageManager instance
manager = StorageManager()

manager.get_local_copy(remote_url="s3://MyBucket/MyFolder/file.zip")
+```

:::note
Zip and tar.gz files will be automatically extracted to cache. This can be controlled with the `extract_archive` flag.
@@ -32,11 +34,15 @@ Zip and tar.gz files will be automatically extracted to cache. This can be contr

To download a file to a specific context in cache, specify the name of the context as the `cache_context` argument:

+```python
manager.get_local_copy(remote_url="s3://MyBucket/MyFolder/file.ext", cache_context="test")
+```

To download a file without automatically extracting it, set the `extract_archive` argument to `False`.

+```python
manager.get_local_copy(remote_url="s3://MyBucket/MyFolder/file.ext", extract_archive=False)
+```

By default, the `StorageManager` reports its download progress to the console every 5MB. You can change this using the
[`StorageManager.set_report_download_chunk_size`](../../references/sdk/storage.md#storagemanagerset_report_download_chunk_size)
@@ -48,7 +54,11 @@ To upload a file to storage, call the [StorageManager.upload_file](../../referen
method. Specify the full path of the local file as the `local_file` argument, and the remote URL as the `remote_url`
argument.

-manager.upload_file(local_file="/mnt/data/also_file.ext", remote_url="s3://MyBucket/MyFolder")
+```python
+manager.upload_file(
+    local_file="/mnt/data/also_file.ext", remote_url="s3://MyBucket/MyFolder"
+)
+```

Use the `retries` parameter to set the number of times file upload should be retried in case of failure.
@@ -63,4 +73,6 @@ To set a limit on the number of files cached, call the [StorageManager.set_cache
method and specify the `cache_file_limit` argument as the maximum number of files. This does not limit the cache size,
only the number of files.

+```python
new_cache_limit = manager.set_cache_file_limit(cache_file_limit=100)
+```
@@ -495,7 +495,7 @@ myDataView.add_mapping_rule(

### Accessing Frames

-Dataview objects can be retrieved by the Dataview ID or name using the [DataView.get](../references/hyperdataset/dataview.md#dataviewget)
+Dataview objects can be retrieved by the Dataview ID or name using the [`DataView.get`](../references/hyperdataset/dataview.md#dataviewget)
class method.

```python
@@ -67,7 +67,7 @@ Access these actions with the context menu in any of the following ways:

| ClearML Action | Description |
|---|---|
-| Details | View Dataview details, including input datasets, label mapping, augmentation operations, and iteration control. Can also be accessed by double clicking a Dataview in the Dataviews table. |
+| Details | View Dataview details, including input datasets, label mapping, augmentation operations, and iteration control. Can also be accessed by double-clicking a Dataview in the Dataviews table. |
| Archive | To more easily work with active Dataviews, move a Dataview to the archive, removing it from the active Dataview table. |
| Restore | Action available in the archive. Restore a Dataview to the active Dataviews table. |
| Clone | Make an exact copy of a Dataview that is editable. |
@@ -87,7 +87,7 @@ if there is a change in the pipeline code. If there is no change, the pipeline r

### Tracking Pipeline Progress
ClearML automatically tracks a pipeline’s progress percentage: the number of pipeline steps completed out of the total
number of steps. For example, if a pipeline consists of 4 steps, after the first step completes, ClearML automatically
-sets its progress value to 25. Once a pipeline has started to run but is yet to successfully finish, , the WebApp will
+sets its progress value to 25. Once a pipeline has started to run but is yet to successfully finish, the WebApp will
show the pipeline’s progress indication in the pipeline runs table, next to the run’s status.

## Examples
@@ -157,8 +157,8 @@ arguments.

#### pre_execute_callback & post_execute_callback
Callbacks can be utilized to control pipeline execution flow.

-A `pre_execute_callback` function is called when the step is created and before it is sent for execution. This allows a
-user to modify the task before launch. Use node.job to access the [ClearmlJob](../references/sdk/automation_job_clearmljob.md)
+A `pre_execute_callback` function is called when the step is created, and before it is sent for execution. This allows a
+user to modify the task before launch. Use `node.job` to access the [ClearmlJob](../references/sdk/automation_job_clearmljob.md)
object, or `node.job.task` to directly access the Task object. Parameters are the configuration arguments passed to the
ClearmlJob.
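A hedged sketch of such a callback and how it might be wired in (the step name and template ID are illustrative; the skip-on-`False` behavior follows the PipelineController documentation):

```python
def pre_execute_callback(pipeline, node, param_override):
    """Called after the step's task is created, before it is sent for execution."""
    # node.job is the ClearmlJob; node.job.task is the underlying Task object
    print(f'About to launch step: {node.name}')
    return True  # a False return would skip launching this step

# Wiring sketch: pass the callback when adding the step, e.g.
# pipe.add_step(name='step_one', base_task_id='<template-task-id>',
#               pre_execute_callback=pre_execute_callback)
```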
@@ -100,7 +100,7 @@ Access these actions with the context menu in any of the following ways:

| Action | Description | States Valid for the Action | State Transition |
|---|---|---|---|
-| Details | View pipeline details. Can also be accessed by double clicking a run in the pipeline runs table. | Any state | None |
+| Details | View pipeline details. Can also be accessed by double-clicking a run in the pipeline runs table. | Any state | None |
| Run | Create a new pipeline run. Configure and enqueue it for execution. See [Create Run](#create-run). | Any State | *Pending* |
| Abort | Manually stop / cancel a run. | *Running* / *Pending* | *Aborted* |
| Continue | Rerun with the same parameters. | *Aborted* | *Pending* |
@@ -33,7 +33,7 @@ When archiving an experiment:

* Restore an experiment or model from either the:

-  * Experiments or models table - Right click the experiment or model **>** **Restore**.
+  * Experiments or models table - Right-click the experiment or model **>** **Restore**.
  * Info panel or full screen details view - Click <img src="/docs/latest/icons/ico-bars-menu.svg" alt="Bars menu" className="icon size-sm space-sm" />
    (menu) **>** **Restore from Archive**.
@@ -33,7 +33,7 @@ Experiments can also be modified and then executed remotely, see [Tuning Experim

The experiment's status becomes *Draft*.

-1. Enqueue the experiment for execution. Right click the experiment **>** **Enqueue** **>** Select a queue **>** **ENQUEUE**.
+1. Enqueue the experiment for execution. Right-click the experiment **>** **Enqueue** **>** Select a queue **>** **ENQUEUE**.

The experiment's status becomes *Pending*. When a worker fetches the Task (experiment), the status becomes *Running*.
The experiment can now be tracked and its results visualized.
@@ -137,7 +137,7 @@ Access these actions with the context menu in any of the following ways:

| Action | Description | States Valid for the Action | State Transition |
|---|---|---|---|
-| Details | Open the experiment's [info panel](webapp_exp_track_visual.md#info-panel) (keeps the experiments list in view). Can also be accessed by double clicking an experiment in the experiments table. | Any state | None |
+| Details | Open the experiment's [info panel](webapp_exp_track_visual.md#info-panel) (keeps the experiments list in view). Can also be accessed by double-clicking an experiment in the experiments table. | Any state | None |
| View Full Screen | View experiment details in [full screen](webapp_exp_track_visual.md#full-screen-details-view). | Any state | None |
| Manage Queue | If an experiment is *Pending* in a queue, view the utilization of that queue, manage that queue (remove experiments and change the order of experiments), and view information about the worker(s) listening to the queue. See the [Workers and Queues](webapp_workers_queues.md) page. | *Enqueued* | None |
| View Worker | If an experiment is *Running*, view resource utilization, worker details, and queues to which a worker is listening. | *Running* | None |
@@ -26,7 +26,7 @@ Tune experiments and edit an experiment's execution details, then execute the tu

1. Edit the experiment. See [modifying experiments](#modifying-experiments).

-1. Enqueue the experiment for execution. Right click the experiment **>** **Enqueue** **>** Select a queue **>**
+1. Enqueue the experiment for execution. Right-click the experiment **>** **Enqueue** **>** Select a queue **>**
   **ENQUEUE**.

The experiment's status becomes *Pending*. When the worker assigned to the queue fetches the Task (experiment), the