Small edits (#455)

pollfly 2023-01-25 13:25:29 +02:00 committed by GitHub
parent 18e3e7abe2
commit 61f822e613
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
31 changed files with 51 additions and 51 deletions


@ -80,7 +80,7 @@ The following are the parameter type options and their corresponding fields:
- `"values": List[Any]` - A list of valid parameter values to sample from
For example: to specify a parameter search over uniform ranges of layer_1 and layer_2 sizes between 128 and 512
(in jumps of 128) with varying batch sizes of 96, 128, and 160, use the following command:
<div className="wb-normal">


@ -483,7 +483,7 @@ Self-hosted [ClearML Server](deploying_clearml/clearml_server.md) comes by defau
By default, the server is open and does not require username and password, but it can be [password-protected](deploying_clearml/clearml_server_security.md#user-access-security).
In case it is password-protected, the services agent will need to be configured with server credentials (associated with a user).
To do that, set these environment variables on the ClearML Server machine with the appropriate credentials:
```
CLEARML_API_ACCESS_KEY
CLEARML_API_SECRET_KEY
@ -499,7 +499,7 @@ Build a Docker container that when launched executes a specific experiment, or a
```bash
clearml-agent build --id <task-id> --docker --target <new-docker-name> --entry-point reuse_task
```
- Build a Docker container that at launch will clone a Task specified by Task ID, and will execute the newly cloned Task.
```bash
clearml-agent build --id <task-id> --docker --target <new-docker-name> --entry-point clone_task
```


@ -37,7 +37,7 @@ clearml-serving --id <service_id> model add --engine sklearn --endpoint "test_mo
```
:::info Service ID
- Make sure that you have executed `clearml-servings`'s
+ Make sure that you have executed `clearml-serving`'s
[initial setup](clearml_serving.md#initial-setup), in which you create a Serving Service.
The Serving Service's ID is required to register a model, and to execute `clearml-serving`'s `metrics` and `config` commands.
:::
@ -92,7 +92,7 @@ or with the `clearml-serving` CLI.
```
You now have a new Model named `manual sklearn model` in the `serving examples` project. The CLI output prints
- the UID of the new model, which you will use it to register a new endpoint.
+ the UID of the new model, which you will use to register a new endpoint.
In the [ClearML web UI](../webapp/webapp_overview.md), the new model is listed under the **Models** tab of its project.
You can also download the model file itself directly from the web UI.
@ -105,7 +105,7 @@ or with the `clearml-serving` CLI.
:::info Model Storage
You can also provide a different storage destination for the model, such as S3/GS/Azure, by passing
`--destination="s3://bucket/folder"`, `gs://bucket/folder`, `azure://bucket/folder`. There is no need to provide a unique
- path tp the destination argument, the location of the model will be a unique path based on the serving service ID and the
+ path to the destination argument, the location of the model will be a unique path based on the serving service ID and the
model name.
:::
@ -116,7 +116,7 @@ model name
The ClearML Serving Service supports automatic model deployment and upgrades, which is connected with the model
repository and API. When the model auto-deploy is configured, new model versions will be automatically deployed when you
`publish` or `tag` a new model in the ClearML model repository. This automation interface allows for simpler CI/CD model
- deployment process, as a single API automatically deploy (or remove) a model from the Serving Service.
+ deployment process, as a single API automatically deploys (or removes) a model from the Serving Service.
#### Automatic Model Deployment Example
@ -142,7 +142,7 @@ deployment process, as a single API automatically deploy (or remove) a model fro
### Canary Endpoint Setup
- Canary endpoint deployment add a new endpoint where the actual request is sent to a preconfigured set of endpoints with
+ Canary endpoint deployment adds a new endpoint where the actual request is sent to a preconfigured set of endpoints with
pre-provided distribution. For example, when creating a new endpoint "test_model_sklearn_canary", you can provide a list
of endpoints and probabilities (weights).
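The weighted traffic split described above can be sketched in plain Python; the endpoint names and weights below are illustrative placeholders, not part of the ClearML API:

```python
import random

# Illustrative canary configuration: traffic is split between the
# current endpoint and a new candidate according to these weights.
canary_endpoints = ["test_model_sklearn", "test_model_sklearn_new"]
canary_weights = [0.9, 0.1]  # 90% to the stable model, 10% to the canary

def route_request(rng: random.Random) -> str:
    """Pick a backing endpoint for one incoming request."""
    return rng.choices(canary_endpoints, weights=canary_weights, k=1)[0]

rng = random.Random(0)  # seeded for a reproducible demonstration
picks = [route_request(rng) for _ in range(1000)]
stable_share = picks.count("test_model_sklearn") / len(picks)
```

Over many requests the observed split approaches the configured weights, which is the behavior the canary endpoint provides at the serving layer.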
@ -195,13 +195,13 @@ Example:
ClearML serving instances send serving statistics (count/latency) automatically to Prometheus and Grafana can be used
to visualize and create live dashboards.
- The default docker-compose installation is preconfigured with Prometheus and Grafana, do notice that by default data/ate
+ The default docker-compose installation is preconfigured with Prometheus and Grafana. Notice that by default data/ate
of both containers is *not* persistent. To add persistence, we recommend adding a volume mount.
You can also add many custom metrics on the input/predictions of your models. Once a model endpoint is registered,
adding a custom metric can be done using the CLI.
For example, assume the mock scikit-learn model is deployed on endpoint `test_model_sklearn`, you can log the requests
inputs and outputs (see examples/sklearn/preprocess.py example):
```bash


@ -55,7 +55,7 @@ help maintainers reproduce the problem:
* **Describe the exact steps necessary to reproduce the problem** in as much detail as possible. Please do not just summarize what you did. Make sure to explain how you did it.
* **Provide the specific environment setup.** Include the ``pip freeze`` output, specific environment variables, Python version, and other relevant information.
* **Provide specific examples to demonstrate the steps.** Include links to files or GitHub projects, or copy / paste snippets which you use in those examples.
* **If you are reporting any ClearML crash,** include a crash report with a stack trace from the operating system. Make
sure to add the crash report in the issue and place it in a [code block](https://docs.github.com/en/github/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks),
a [file attachment](https://help.github.com/articles/file-attachments-on-issues-and-pull-requests), or just put it in
a [gist](https://gist.github.com) (and provide a link to that gist).


@ -73,7 +73,7 @@ The minimum requirements for ClearML Server are:
## Restarting
**To restart ClearML Server Docker deployment:**
* Stop and then restart the Docker containers by executing the following commands:


@ -72,6 +72,6 @@ the models associated with a project are listed.
## SDK Interface
See [the Models SDK interface](../clearml_sdk/model_sdk.md) for an overview for using the most basic Pythonic methods of the model
classes. See a detailed list of all available methods in the [Model](../references/sdk/model_model.md), [OutputModel](../references/sdk/model_outputmodel.md), and [InputModel](../references/sdk/model_inputmodel.md)
reference pages.


@ -32,7 +32,7 @@ pip install clearml
Please create new clearml credentials through the settings page in your `clearml-server` web app,
or create a free account at https://app.clear.ml/settings/webapp-configuration
In the settings > workspace page, press "Create new credentials", then press "Copy to clipboard".
Paste copied configuration here:
```


@ -40,7 +40,7 @@ Check [this](../../fundamentals/hyperparameters.md) out for all Hyperparameter l
## Log Artifacts
ClearML allows you to easily store the output products of an experiment - Model snapshot / weights file, a preprocessing of your data, feature representation of data and more!
Essentially, artifacts are files (or Python objects) uploaded from a script and are stored alongside the Task.
These Artifacts can be easily accessed by the web UI or programmatically.
@ -157,7 +157,7 @@ The experiment table is a powerful tool for creating dashboards and views of you
### Creating Leaderboards
Customize the [experiments table](../../webapp/webapp_exp_table.md) to fit your own needs, adding desired views of parameters, metrics and tags.
It's possible to filter and sort based on parameters and metrics, so creating custom views is simple and flexible.
Create a dashboard for a project, presenting the latest Models and their accuracy scores, for immediate insights.


@ -45,7 +45,7 @@ The structure of your pipeline will be derived from looking at this `parents` ar
Now we do the same for the final step. However, remember the empty hyperparameters we saw before? We still have to overwrite these. We can use the `parameter_override` argument to do just that.
- For example, we can tell the first step to use the global pipeline parameter raw data url like so. But we can also reference output artifacts from a previous step by using its name and we can of course also just overwrite a parameter with a normal value. Finally, we can even pass along the unique task ID of a previous step, so you can get the task object based on that ID and access anything and everything within that task.
+ For example, we can tell the first step to use the global pipeline parameter raw data url like so. But we can also reference output artifacts from a previous step by using its name, and we can of course also just overwrite a parameter with a normal value. Finally, we can even pass along the unique task ID of a previous step, so you can get the task object based on that ID and access anything and everything within that task.
And that's it! We now have our first pipeline!
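The four kinds of overrides described above can be sketched with plain dictionaries; in real ClearML pipelines these are expressed through the `parameter_override` argument, and the step, parameter, and artifact names below are hypothetical:

```python
# Values a step's parameter_override might carry (all names are illustrative).
pipeline_params = {"raw_data_url": "https://example.com/data.csv"}
previous_step = {
    "id": "abc123",  # unique task ID of an already-defined step
    "artifacts": {"cleaned_data": "s3://bucket/cleaned.csv"},
}

parameter_override = {
    # 1. Forward a global pipeline parameter to the step
    "General/dataset_url": pipeline_params["raw_data_url"],
    # 2. Reference an output artifact of a previous step
    "General/input_file": previous_step["artifacts"]["cleaned_data"],
    # 3. Overwrite a parameter with a plain literal value
    "General/batch_size": 32,
    # 4. Pass along the unique task ID of a previous step
    "General/parent_task_id": previous_step["id"],
}
```

With the task ID in hand, the step's code can fetch the full task object and access everything stored on it.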


@ -29,7 +29,7 @@ it is commented out, make sure to uncomment the line. We will use the example sc
1. Search for and go to `docker_force_pull` in the document, and make sure that it is set to `true`, so that your docker
image will be updated.
1. Run the `clearml-agent` in docker mode: `clearml-agent daemon --docker --queue default`. The agent will use the default
Cuda/Nvidia Docker Image.
1. Enqueue any ClearML Task to the `default` queue, which the Agent is now listening to. The Agent pulls the Task, and then reproduces it,


@ -6,7 +6,7 @@ The [cifar_ignite.py](https://github.com/allegroai/clearml/blob/master/examples/
script integrates ClearML into code that uses [PyTorch Ignite](https://github.com/pytorch/ignite).
The example script does the following:
* Trains a neural network on the CIFAR10 dataset for image classification.
* Creates a [ClearML Task](../../../fundamentals/task.md) named `image classification CIFAR10`, which is associated with
the `examples` project.
* Calls the [`Task.connect`](../../../references/sdk/task.md#connect) method to track experiment configuration.


@ -68,7 +68,7 @@ The sections below describe in more detail what happens in the controller task a
Custom configuration values specific to this step execution are defined through the `parameter_override` parameter,
where the first step's artifact is fed into the second step.
Special pre-execution and post-execution logic is added for this step through the use of `pre_execute_callback`
and `post_execute_callback` respectively.
```python


@ -35,7 +35,7 @@ logged as required packages for the pipeline execution step.
```
1. Set the default execution queue to be used. All the pipeline steps will be enqueued for execution in this queue
(unless overridden by the `execution_queue` parameter of the `add_function_step` method).
```python
pipe.set_default_execution_queue('default')


@ -18,8 +18,8 @@ to your needs, and enqueue it for execution directly from the ClearML UI.
Configure the task execution by modifying the `args` dictionary:
* `delete_threshold_days` - Tasks older than this number of days will be deleted. The default value is 30 days.
* `cleanup_period_in_days` - Repeat the cleanup service at this interval, in days. The default value is 1.0 (run once a day).
* `force_delete` - If `False` (default), delete only Draft tasks. If `True`, allows deletion of tasks in any status.
* `run_as_service` - If `True` (default), the task will be enqueued for remote execution (default queue: "services"). Otherwise, the script will execute locally.
:::note Remote Execution
If `run_as_service` is set to `True`, make sure a `clearml-agent` is assigned to the `services` queue.
@ -48,7 +48,7 @@ This is followed by details from the cleanup.
an `APIClient` object that establishes a session with the ClearML Server, and accomplishes the cleanup by calling:
* [`Tasks.get_all`](../../references/api/tasks.md#post-tasksget_all) to get a list of Tasks to delete, providing the following parameters:
* `system_tags` - Get only Tasks tagged as `archived`.
- * `status_changed` - Get Tasks whose last status change is older than then delete threshold (in seconds).
+ * `status_changed` - Get Tasks whose last status change is older than the delete threshold (in seconds).
* [`Task.delete`](../../references/sdk/task.md#delete) - Delete a Task.
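The threshold computation implied above, turning `delete_threshold_days` into the seconds-based `status_changed` cutoff, can be sketched in plain Python (an illustration, not the cleanup service's actual code):

```python
from datetime import datetime, timedelta, timezone

def status_changed_cutoff(delete_threshold_days: float, now: datetime) -> datetime:
    """Tasks whose last status change is older than this cutoff are
    deletion candidates; the filter itself is expressed in seconds."""
    threshold_seconds = delete_threshold_days * 24 * 60 * 60
    return now - timedelta(seconds=threshold_seconds)

now = datetime(2023, 1, 25, tzinfo=timezone.utc)
cutoff = status_changed_cutoff(30, now)  # default threshold: 30 days
```

Any archived Task whose `status_changed` timestamp falls before `cutoff` would then be passed to `Task.delete`.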
## Configuration


@ -124,7 +124,7 @@ Dataset.delete(
```
This supports deleting sources located in AWS S3, GCP, and Azure Storage (not local storage). The `delete_sources`
parameter is ignored if `delete_all_versions` is `False`. You can view the deletion process progress by passing
`show_progress=True` (`tqdm` required).
### Tagging Datasets
@ -147,7 +147,7 @@ MyDataset.remove_tags(["dogs"])
Dataset versioning refers to the group of ClearML Enterprise SDK and WebApp (UI) features for creating,
modifying, and deleting Dataset versions.
ClearML Enterprise supports simple and advanced Dataset versioning paradigms. A **simple version structure** consists of
a single evolving version, with historic static snapshots. Continuously push your changes to your single dataset version,
and take a snapshot to record the content of your dataset at a specific point in time.


@ -26,7 +26,7 @@ a SingleFrame:
* Metadata and data for the labeled area of an image
See [Example 1](#example-1), which shows `masks` in `sources`, `mask` in `rois`, and the key-value pairs used to relate
a mask to its source in a frame.


@ -188,7 +188,7 @@ This example demonstrates `sources` for video, `masks`, and `preview`.
This frame shows the `masks` section in `sources`, and the top-level `rois` array.
In `sources`, the `masks` subsection contains the sources for the two masks associated with the raw data.
The raw mask data is located in:


@ -47,7 +47,7 @@ The version information is presented in the following tabs:
* [Info](#info)
## Frames
The **Frames** tab displays the contents of the selected dataset version.
View the version's frames as thumbnail previews or in a table. Use the view toggle to switch between thumbnail
view <img src="/docs/latest/icons/ico-grid-view.svg" alt="thumbnail view" className="icon size-md space-sm" /> and
@ -71,7 +71,7 @@ To view the details of a specific frame, click on its preview, which will open t
### Simple Frame Filtering
Simple frame filtering returns frames containing at least one annotation with a specified label.
**To apply a simple frame filter,** select a label from the **LABEL FILTER** list.
<details className="cml-expansion-panel screenshot">
<summary className="cml-expansion-panel-summary">Simple filter example</summary>


@ -2,11 +2,11 @@
title: The Dataviews Table
---
[Dataviews](../dataviews.md) appear in the same Project as the experiment that stored the Dataview in the ClearML Enterprise platform,
as well as the **DATAVIEWS** tab in the **All Projects** page.
The **Dataviews table** is a [customizable](#customizing-the-dataviews-table) list of Dataviews associated with a project.
- Use it to view, create, and edit Dataviews in the info panel.
+ Use it to view and create Dataviews, and access their info panels.
The table lists independent Dataview objects. To see Dataviews logged by a task, go
to the specific task's **DATAVIEWS** tab (see [Experiment Dataviews](webapp_exp_track_visual.md)).
View the Dataviews table in table view <img src="/docs/latest/icons/ico-table-view.svg" alt="Table view" className="icon size-md space-sm" />
or in details view <img src="/docs/latest/icons/ico-split-view.svg" alt="Details view" className="icon size-md space-sm" />,
@ -84,7 +84,7 @@ The same information can be found in the bottom menu, in a tooltip that appears
## Creating a Dataview
- Create a new Dataview by clicking the **+ NEW DATAVIEW** button at the top right of the table, which open a
+ Create a new Dataview by clicking the **+ NEW DATAVIEW** button at the top right of the table, which opens a
**NEW DATAVIEW** window.
![New Dataview window](../../img/webapp_dataview_new.png)


@ -58,7 +58,7 @@ when creating a pipeline step.
### Pipeline Step Caching
The Pipeline controller also offers step caching, meaning reusing outputs of previously executed pipeline steps, in the
case of the exact same step code and the same step input values. By default, pipeline steps are not cached. Enable caching
when creating a pipeline step.
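The caching idea, combining a step's code and its runtime parameter values into a single lookup key, can be sketched like this (illustrative only; ClearML's actual hashing scheme may differ):

```python
import hashlib
import json

def step_cache_key(step_code: str, params: dict) -> str:
    """Combine the step's code and its (sorted) runtime parameters
    into one deterministic hash used as the cache lookup key."""
    payload = step_code + json.dumps(params, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Same code and same inputs -> same key, so the cached output can be reused.
key_a = step_cache_key("def step(x): return x + 1", {"x": 5})
key_b = step_cache_key("def step(x): return x + 1", {"x": 5})
# A different input value produces a different key -> the step re-executes.
key_c = step_cache_key("def step(x): return x + 1", {"x": 6})
```

Sorting the parameters before hashing makes the key independent of dictionary insertion order, which is what makes the lookup deterministic.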
When a step is cached, the step code is hashed, alongside the step's parameters (as passed in runtime), into a single


@ -38,7 +38,7 @@ def main(pickle_url, mock_parameter='mock'):
the following format: `{'section_name': ['param_name']}`. For example, the pipeline in the code above will store the
`pickle_url` parameter in the `General` section and `mock_parameter` in the `Mock` section. By default, arguments will
be stored in the `Args` section.
- * `pool_frequency` - The pooling frequency (in minutes) for monitoring experiments / states.
+ * `pool_frequency` - The polling frequency (in minutes) for monitoring experiments / states.
* `add_pipeline_tags` - If `True`, add `pipe: <pipeline_task_id>` tag to all steps (Tasks) created by this pipeline
(this is useful to create better visibility in projects with multiple pipelines, and for easy selection) (default:
`False`).
@ -111,11 +111,11 @@ def step_one(pickle_data_url: str, extra: int = 43):
For example, assuming we have two functions, `parse_data()` and `load_data()`: `[parse_data, load_data]`
* `parents` Optional list of parent steps in the pipeline. The current step in the pipeline will be sent for execution only after all the parent steps have been executed successfully.
Additionally, you can enable automatic logging of a step's metrics / artifacts / models to the pipeline task using the
following arguments:
* `monitor_metrics` (Optional) - Automatically log the step's reported metrics also on the pipeline Task. The expected
format is one of the following:
* List of pairs metric (title, series) to log: [(step_metric_title, step_metric_series), ]. Example: `[('test', 'accuracy'), ]`
* List of tuple pairs, to specify a different target metric to use on the pipeline Task: [((step_metric_title, step_metric_series), (target_metric_title, target_metric_series)), ].
Example: `[[('test', 'accuracy'), ('model', 'accuracy')], ]`
* `monitor_artifacts` (Optional) - Automatically log the step's artifacts on the pipeline Task.
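The two accepted `monitor_metrics` shapes can be sketched as plain data, with a small normalizer that maps each entry to a (source, target) pair; this is an illustration of the format, not ClearML's internal code:

```python
def normalize_monitor_metrics(entries):
    """Map each entry to ((src_title, src_series), (dst_title, dst_series)).
    A plain (title, series) pair is logged under the same name on the pipeline Task."""
    normalized = []
    for entry in entries:
        if isinstance(entry[0], (list, tuple)):
            src, dst = entry          # pair-of-pairs form: remap the metric name
        else:
            src = dst = tuple(entry)  # simple form: keep the same title/series
        normalized.append((tuple(src), tuple(dst)))
    return normalized

# Simple form: the step's ('test', 'accuracy') metric is mirrored as-is.
same = normalize_monitor_metrics([("test", "accuracy")])
# Pair-of-pairs form: the step metric is logged as ('model', 'accuracy').
remapped = normalize_monitor_metrics([(("test", "accuracy"), ("model", "accuracy"))])
```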


@ -187,7 +187,7 @@ def step_completed_callback(
#### Models, Artifacts, and Metrics
You can enable automatic logging of a step's metrics / artifacts / models to the pipeline task using the following arguments:
* `monitor_metrics` (Optional) - Automatically log the step's reported metrics also on the pipeline Task. The expected
format is one of the following:


@ -10,7 +10,7 @@ This release is not backwards compatible
**Breaking Changes**
* `preprocess` and `postprocess` class functions get 3 arguments
- * Add support for per-request state storage, passing information between the pre/post processing functions
+ * Add support for per-request state storage, passing information between the pre/post-processing functions
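The per-request state idea can be sketched as follows. The three-argument shape shown here (request body, a per-request `state` dict, and an optional statistics callback) is an assumption based on the note above, not the exact clearml-serving interface:

```python
class Preprocess:
    """Minimal sketch of pre/post-processing sharing per-request state
    (the signatures are assumptions, not the real clearml-serving API)."""

    def preprocess(self, body, state, collect_custom_statistics_fn=None):
        # Stash information for the matching postprocess call of this request.
        state["input_size"] = len(body["values"])
        return body["values"]

    def postprocess(self, data, state, collect_custom_statistics_fn=None):
        # Read back what preprocess stored for this same request.
        return {"predictions": data, "n_inputs": state["input_size"]}

p = Preprocess()
state = {}  # one fresh dict per incoming request
model_input = p.preprocess({"values": [1, 2, 3]}, state)
result = p.postprocess(model_input, state)
```

Because each request gets its own `state` dict, the two functions can pass information to each other without any shared global state.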
**Features & Bug Fixes**


@ -239,7 +239,7 @@ This release is not backwards compatible - see notes below on upgrading
- Add support for uploading artifacts with a list of files using `Task.upload_artifact(name, [Path(), Path()])`
- Add missing *clearml-task* parameters `--docker_args`, `--docker_bash_setup_script` and `--output-uri`
- Change `CreateAndPopulate` will auto list packages imported but not installed locally
- Add `clearml.task.populate.create_task_from_function()` to create a Task from a function, wrapping function input arguments into hyper-parameter section as kwargs and storing function results as named artifacts
- Add support for Task serialization (e.g. for pickle)
- Add `Task.get_configuration_object_as_dict()`
- Add `docker_image` argument to `Task.set_base_docker()` (deprecate `docker_cmd`)
@ -367,7 +367,7 @@ ClearML k8s glue default pod label was changed to `CLEARML=agent` (instead of `T
**Bug Fixes**
- Fix experiment details UI failure opening hyperparameter sections beginning with `#` [ClearML Server GitHub issue #79](https://github.com/allegroai/clearml-server/issues/79)
- Fix performance issues with UI comparison of large experiments [Slack Channel](https://clearml.slack.com/archives/CTK20V944/p1621698235159800)
- Fix filtering on hyperparameters [ClearML GitHub issue #385](https://github.com/allegroai/clearml/issues/385) [Slack Channel](https://clearml.slack.com/archives/CTK20V944/p1626600582284700)
- Fix profile page user options toggle control area of effect
- Fix browser resizing affecting plot zoom


@ -89,7 +89,7 @@ title: Version 1.6
* Fix listed models in UI pipeline run info panel doesn't link to model
* Fix "Load more" button disappears from UI experiment page
* Fix breadcrumb link to parent project does not navigate to the parent's project page
* Fix spaces deleted while typing query in UI search bars
* Fix UI plots not loading in experiments
* Fix UI experiment debug sample full screen failing to display multiple metrics
* Fix using search in UI tables removes custom columns


@ -148,7 +148,7 @@ configuration [here](#aws-iam-restricted-access-policy).
1. Complete creating the policy
1. Attach the created policy to an IAM user/group whose credentials will be used in the autoscaler app (you can create a
new IAM user/group for this purpose)
1. Obtain a set of AWS IAM credentials for the user/group to which you have attached the created policy in the previous step
### AWS IAM Restricted Access Policy


@ -6,7 +6,7 @@ title: GCP Autoscaler
The ClearML GCP Autoscaler App is available under the ClearML Pro plan
:::
The GCP Autoscaler Application optimizes GCP VM instance usage according to a user-defined instance budget: Define your
budget by specifying the type and amount of available compute resources.
Each resource type is associated with a ClearML [queue](../../fundamentals/agents_and_queues.md#what-is-a-queue) whose


@ -16,7 +16,7 @@ ClearML provides the following applications:
* [**GPU Compute**](apps_gpu_compute.md) - Launch cloud machines on demand and optimize their usage according to a
defined budget--no previous setup necessary
* [**AWS Autoscaler**](apps_aws_autoscaler.md) - Optimize AWS EC2 instance usage according to a defined instance budget
* [**GCP Autoscaler**](apps_gcp_autoscaler.md) - Optimize GCP instance usage according to a defined instance budget
* [**Hyperparameter Optimization**](apps_hpo.md) - Find the parameter values that yield the best performing models
* **Nvidia Clara** - Train models using Nvidia's Clara framework
* [**Project Dashboard**](apps_dashboard.md) - High-level project monitoring with Slack alerts


@ -132,7 +132,7 @@ The following table describes the actions that can be done from the experiments
that allow each operation.
Access these actions with the context menu in any of the following ways:
* In the experiments table, right-click an experiment or hover over an experiment and click <img src="/docs/latest/icons/ico-dots-v-menu.svg" alt="Dot menu" className="icon size-md space-sm" />
* In an experiment info panel, click the menu button <img src="/docs/latest/icons/ico-bars-menu.svg" alt="Bar menu" className="icon size-md space-sm" />
| Action | Description | States Valid for the Action | State Transition |


@ -6,7 +6,7 @@ Use the Projects Page for project navigation and management.
Your projects are displayed like folders: click a folder to access its contents. The Projects Page shows the top-level
projects in your workspace. Projects that contain nested subprojects are identified by an extra nested project tab.
An exception is the **All Experiments** folder, which shows the contents of all projects and subprojects in a single, flat
list.
![Projects page](../img/webapp_project_page.png)


@ -24,7 +24,7 @@ The worker table shows the currently available workers and their current executi
Clicking on a worker will open the worker's details panel and replace the graph with that worker's resource utilization
information. The resource metric being monitored can be selected through the menu at the graph's top left corner:
* CPU and GPU Usage
* Memory Usage
* Video Memory Usage
The worker's details panel includes the following two tabs:
* Current Experiment - The experiment currently being executed by the worker
* Experiment Runtime - How long the currently executing experiment has been running
* Experiment iteration - The last reported training iteration for the experiment
- * **QUEUES** - information about the queues that the worker is assigned to:
+ * **QUEUES** - Information about the queues that the worker is assigned to:
* Queue - The name of the Queue
* Next experiment - The next experiment available in this queue
* In Queue - The number of experiments currently enqueued