Change terminology (#1028)

pollfly
2025-02-06 17:31:11 +02:00
committed by GitHub
parent 30805e474d
commit b12b71d835
158 changed files with 857 additions and 855 deletions

View File

@@ -19,9 +19,9 @@ After executing either of these scripts, you can view your DatasetVersion conten
The [dataview_example_framegroup.py](https://github.com/allegroai/clearml/blob/master/examples/hyperdatasets/data-ingestion/dataview_example_framegroup.py)
and [dataview_example_singleframe.py](https://github.com/allegroai/clearml/blob/master/examples/hyperdatasets/data-ingestion/dataview_example_singleframe.py)
examples demonstrate how to use a [DataView](dataviews.md) to retrieve your data as SingleFrames and FrameGroups as
-part of a running experiment. This is done by creating a DataView query and then retrieving the corresponding frames.
+part of a running task. This is done by creating a DataView query and then retrieving the corresponding frames.
-DataView details are displayed in the UI in an experiment's **DATAVIEWS** tab.
+DataView details are displayed in the UI in a task's **DATAVIEWS** tab.
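The query-then-retrieve pattern described here can be illustrated with a small, self-contained sketch (plain Python, not the ClearML SDK; the frame records and the `query` helper are hypothetical stand-ins for a DataView query):

```python
# Hypothetical in-memory frames; a real DataView queries Dataset versions
# registered on the ClearML Enterprise server.
frames = [
    {"id": "f1", "labels": ["cat"]},
    {"id": "f2", "labels": ["dog"]},
    {"id": "f3", "labels": ["cat", "dog"]},
]

def query(frames, label):
    # A DataView query defines selection criteria; the matching frames
    # (SingleFrames / FrameGroups) are then retrieved by the running task.
    return [f for f in frames if label in f["labels"]]

print([f["id"] for f in query(frames, "cat")])  # ['f1', 'f3']
```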
### Data Ingestion

View File

@@ -6,7 +6,7 @@ ClearML Enterprise's **Datasets** and **Dataset versions** provide the internal
and functionality for the following purposes:
* Connecting source data to the ClearML Enterprise platform
* Using ClearML Enterprise's Git-like [Dataset versioning](#dataset-versioning)
-* Integrating the powerful features of [Dataviews](dataviews.md) with an experiment
+* Integrating the powerful features of [Dataviews](dataviews.md) with a task
* [Annotating](webapp/webapp_datasets_frames.md#annotations) images and videos
Datasets consist of versions with SingleFrames and/or FrameGroups. Each Dataset can contain multiple versions, which
@@ -128,7 +128,7 @@ Use the [`Dataset.delete`](../references/hyperdataset/hyperdataset.md#datasetdel
### Tagging Datasets
-Tags can be added to datasets, allowing to easily identify and group experiments.
+Tags can be added to datasets, allowing you to easily identify and group tasks.
Add tags to a dataset:
```python
@@ -159,7 +159,7 @@ Dataset versions can have either *Draft* or *Published* state.
A *Draft* version is editable, so frames can be added to and deleted and/or modified.
-A *Published* version is read-only, which ensures reproducible experiments and preserves the Dataset version contents.
+A *Published* version is read-only, which ensures reproducible tasks and preserves the Dataset version contents.
Child versions can only be created from *Published* versions, as they inherit their predecessor version contents.
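The Draft/Published rules above can be sketched as a toy state model (illustrative Python only, not the ClearML Enterprise API; the class and method names are hypothetical):

```python
# Conceptual sketch of the version-state semantics: Draft versions are
# editable, Published versions are read-only, and child versions can only
# be created from Published versions, inheriting their contents.
class DatasetVersion:
    def __init__(self, parent=None):
        if parent is not None and parent.state != "Published":
            raise ValueError("child versions require a Published parent")
        self.state = "Draft"
        self.frames = list(parent.frames) if parent else []  # inherit contents

    def add_frame(self, frame):
        if self.state == "Published":
            raise RuntimeError("Published versions are read-only")
        self.frames.append(frame)

    def publish(self):
        self.state = "Published"

v1 = DatasetVersion()
v1.add_frame("frame-001")
v1.publish()
v2 = DatasetVersion(parent=v1)  # allowed: parent is Published
print(v2.frames)                # inherits ['frame-001']
```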
## Dataset Version Structure

View File

@@ -14,19 +14,19 @@ Dataviews support:
* Class label enumeration
* Controls for the frame iteration, such as sequential or random iteration, limited or infinite iteration, and reproducibility.
-Dataviews are lazy and optimize processing. When an experiment script runs in a local environment, Dataview pointers
-are initialized. If the experiment is cloned or extended, and that newly cloned or extended experiment is tuned and run,
+Dataviews are lazy and optimize processing. When a task script runs in a local environment, Dataview pointers
+are initialized. If the task is cloned or extended, and that newly cloned or extended task is tuned and run,
only changed pointers are initialized. The pointers that did not change are reused.
## Dataview State
Dataviews can be in either *Draft* or *Published* state.
-A *Draft* Dataview is editable. A *Published* Dataview is read-only, which ensures reproducible experiments and
+A *Draft* Dataview is editable. A *Published* Dataview is read-only, which ensures reproducible tasks and
preserves the Dataview's settings.
## Filtering
-A Dataview filters experiment input data, using one or more frame filters. A frame filter defines the criteria for the
+A Dataview filters task input data, using one or more frame filters. A frame filter defines the criteria for the
selection of SingleFrames iterated by a Dataview.
A frame filter contains the following criteria:
@@ -92,11 +92,11 @@ may repeat. The settings include the following:
the maximum, then the actual number of SingleFrames are iterated. If the order is sequential, then no SingleFrames
repeat. If the order is random, then some SingleFrames may repeat.
-* Infinite Iterations - Iterate SingleFrames until the experiment is manually terminated. If the order is sequential,
-then all SingleFrames are iterated (unless the experiment is manually terminated before all iterate) and SingleFrames
+* Infinite Iterations - Iterate SingleFrames until the task is manually terminated. If the order is sequential,
+then all SingleFrames are iterated (unless the task is manually terminated before all iterate) and SingleFrames
repeat. If the order is random, then not all SingleFrames may be iterated, and some SingleFrames may repeat.
-* Random Seed - If the experiment is rerun and the seed remains unchanged, the SingleFrames iteration is the same.
+* Random Seed - If the task is rerun and the seed remains unchanged, the SingleFrames iteration is the same.
* Clip Length - For video data sources, the number of sequential SingleFrames from a clip to iterate.
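The iteration-control semantics above can be sketched in plain Python (conceptual only, not the ClearML API; `iterate_frames` is a hypothetical helper):

```python
import random

def iterate_frames(frames, order="sequential", max_frames=None, seed=None):
    # Sequential order never repeats a frame; random order samples with
    # replacement, so frames may repeat; a fixed seed makes the random
    # iteration reproducible across reruns.
    n = max_frames if max_frames is not None else len(frames)
    if order == "sequential":
        return frames[:n]                          # no repeats
    rng = random.Random(seed)
    return [rng.choice(frames) for _ in range(n)]  # may repeat

frames = ["f1", "f2", "f3"]
run1 = iterate_frames(frames, order="random", max_frames=5, seed=42)
run2 = iterate_frames(frames, order="random", max_frames=5, seed=42)
assert run1 == run2  # same seed -> same iteration
```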

View File

@@ -11,4 +11,4 @@ Two types of frames are supported:
**SingleFrames** and **FrameGroups** contain data sources, metadata, and other data. A Frame can be added to [Datasets](dataset.md)
and then modified or removed. [Versions](dataset.md#dataset-versioning) of the Datasets can be created, which enables
-documenting changes and reproducing data for experiments.
+documenting changes and reproducing data for tasks.

View File

@@ -32,7 +32,7 @@ These components interact in a way that enables revising data and tracking and a
Frames are the basic units of data in ClearML Enterprise. SingleFrames and FrameGroups make up a Dataset version.
Dataset versions can be created, modified, and removed. The different versions are recorded and available,
-so experiments, and their data are reproducible and traceable.
+so tasks and their data are reproducible and traceable.
-Lastly, Dataviews manage views of the dataset with queries, so the input data to an experiment can be defined from a
+Lastly, Dataviews manage views of the dataset with queries, so a task's input data can be defined from a
subset of a Dataset or combinations of Datasets.

View File

@@ -53,7 +53,7 @@ Sort the annotation tasks by either using **RECENT** or **NAME** option.
1. In **ITERATION**, in the **ORDER** list, choose either:
* **Sequential** - Frames are sorted by the frame top-level `context_id` (primary sort key) and `timestamp` (secondary sort key) metadata key values, and returned by the iterator in the sorted order.
-* **Random** - Frames are randomly returned using the value of the `random_seed` argument. The random seed is maintained with the experiments. Therefore, the random order is reproducible if the experiment is rerun.
+* **Random** - Frames are randomly returned using the value of the `random_seed` argument. The random seed is maintained with the tasks. Therefore, the random order is reproducible if the task is rerun.
1. In **REPETITION**, choose either **Use Each Frame Once** or **Limit Frames**. If you select **Limit Frames**, then in **Use Max. Frames**, type the number of frames to annotate.
1. If iterating randomly, in **RANDOM SEED** type your seed or leave blank, and the ClearML Enterprise platform generates a seed for you.

View File

@@ -6,7 +6,7 @@ The **Dataviews table** is a [customizable](#customizing-the-dataviews-table) li
Use it to view and create Dataviews, and access their info panels.
The table lists independent Dataview objects. To see Dataviews logged by a task, go
-to the specific task's **DATAVIEWS** tab (see [Experiment Dataviews](webapp_exp_track_visual.md)).
+to the specific task's **DATAVIEWS** tab (see [Task Dataviews](webapp_exp_track_visual.md)).
View the Dataviews table in table view <img src="/docs/latest/icons/ico-table-view.svg" alt="Table view" className="icon size-md space-sm" />
or in details view <img src="/docs/latest/icons/ico-split-view.svg" alt="Details view" className="icon size-md space-sm" />,
@@ -54,7 +54,7 @@ Customize the table using any of the following:
dot on its top right (<img src="/docs/latest/icons/ico-filter-on.svg" alt="Filter on" className="icon size-md" />). To
clear all active filters, click <img src="/docs/latest/icons/ico-filter-reset.svg" alt="Clear filters" className="icon size-md" />
in the top right corner of the table.
-* Sort columns - By experiment name and/or elapsed time since creation.
+* Sort columns - By task name and/or elapsed time since creation.
:::note
The following Dataviews-table customizations are saved on a **per-project** basis:

View File

@@ -3,30 +3,30 @@ title: Comparing Dataviews
---
In addition to [ClearML's comparison features](../../webapp/webapp_exp_comparing.md), the ClearML Enterprise WebApp
-supports comparing input data selection criteria of experiment [Dataviews](../dataviews.md), enabling to easily locate, visualize, and analyze differences.
+supports comparing input data selection criteria of task [Dataviews](../dataviews.md), enabling you to easily locate, visualize, and analyze differences.
-## Selecting Experiments
+## Selecting Tasks
-To select experiments to compare:
-1. Go to an experiments table that includes the experiments to be compared.
-1. Select the experiments to compare. Once multiple experiments are selected, the batch action bar appears.
+To select tasks to compare:
+1. Go to a task table that includes the tasks to be compared.
+1. Select the tasks to compare. Once multiple tasks are selected, the batch action bar appears.
1. In the batch action bar, click **COMPARE**.
-The comparison page opens in the **DETAILS** tab, showing a column for each experiment.
+The comparison page opens in the **DETAILS** tab, showing a column for each task.
## Dataviews
-In the **Details** tab, you can view differences in the experiments' nominal values. Each experiment's information is
+In the **Details** tab, you can view differences in the tasks' nominal values. Each task's information is
displayed in a column, so each field is lined up side-by-side. Expand the **DATAVIEWS**
section to view all the Dataview fields side-by-side (filters, iterations, label enumeration, etc.). The differences between the
-experiments are highlighted. Obscure identical fields by switching on the `Hide Identical Fields` toggle.
+tasks are highlighted. Obscure identical fields by switching on the `Hide Identical Fields` toggle.
-The experiment on the left is used as the base experiment, to which the other experiments are compared. You can set a
-new base experiment
+The task on the left is used as the base task, to which the other tasks are compared. You can set a
+new base task
in one of the following ways:
-* Hover and click <img src="/docs/latest/icons/ico-switch-base.svg" alt="Switch base experiment" className="icon size-md space-sm" />
-on the experiment that will be the new base.
-* Hover and click <img src="/docs/latest/icons/ico-pan.svg" alt="Pan icon" className="icon size-md space-sm" /> on the new base experiment and drag it all the way to the left
+* Hover and click <img src="/docs/latest/icons/ico-switch-base.svg" alt="Switch base task" className="icon size-md space-sm" />
+on the task that will be the new base.
+* Hover and click <img src="/docs/latest/icons/ico-pan.svg" alt="Pan icon" className="icon size-md space-sm" /> on the new base task and drag it all the way to the left
![Dataview comparison](../../img/hyperdatasets/web-app/compare_dataviews.png)

View File

@@ -2,8 +2,8 @@
title: Modifying Dataviews
---
-An experiment that has been executed can be [cloned](../../webapp/webapp_exp_reproducing.md), then the cloned experiment's
-execution details can be modified, and the modified experiment can be executed.
+A task that has been executed can be [cloned](../../webapp/webapp_exp_reproducing.md), then the cloned task's
+execution details can be modified, and the modified task can be executed.
In addition to all the [ClearML tuning capabilities](../../webapp/webapp_exp_tuning.md), the **ClearML Enterprise WebApp** (UI)
enables modifying [Dataviews](webapp_dataviews.md), including:
@@ -23,7 +23,7 @@ enables modifying [Dataviews](webapp_dataviews.md), including:
* Click **+** and then follow the instructions below to select Hyper-Dataset versions, filter frames, map labels (label translation),
and set label enumeration and iteration controls.
-* Select a different Dataview already associated with the experiment.
+* Select a different Dataview already associated with the task.
* In the **SELECTED DATAVIEW** list, choose a Dataview.
@@ -60,7 +60,7 @@ by the Dataview.
## Filtering Frames
-Filtering of SingleFrames iterated by a Dataview for input to the experiment is accomplished by frame filters.
+Filtering of SingleFrames iterated by a Dataview for input to the task is accomplished by frame filters.
For more detailed information, see [Filtering](../dataviews.md#filtering).
**To modify frame filtering:**
@@ -141,7 +141,7 @@ For more detailed information, see [Iteration Control](../dataviews.md#iteration
* **Infinite Iterations**
-1. Select the **RANDOM SEED** - If the experiment is rerun and the seed remains unchanged, the frame iteration is the same.
+1. Select the **RANDOM SEED** - If the task is rerun and the seed remains unchanged, the frame iteration is the same.
1. For video, enter a **CLIP LENGTH** - For video data sources, the number of sequential frames from a clip to iterate.

View File

@@ -1,16 +1,16 @@
---
-title: Experiment Dataviews
+title: Task Dataviews
---
-While an experiment is running, and any time after it finishes, results are tracked and can be visualized in the ClearML
+While a task is running, and any time after it finishes, results are tracked and can be visualized in the ClearML
Enterprise WebApp (UI).
In addition to all of ClearML's offerings, ClearML Enterprise keeps track of the Dataviews associated with an
-experiment, which can be viewed and [modified](webapp_exp_modifying.md) in the WebApp.
+task, which can be viewed and [modified](webapp_exp_modifying.md) in the WebApp.
-## Viewing an Experiment's Dataviews
+## Viewing a Task's Dataviews
-In an experiment's page, go to the **DATAVIEWS** tab to view all the experiment's Dataview details, including:
+In a task's page, go to the **DATAVIEWS** tab to view all the task's Dataview details, including:
* Input data [selection](#input) and [filtering](#filtering)
* ROI [mapping](#mapping) (label translation)
* [Label enumeration](#label-enumeration)
@@ -26,7 +26,7 @@ menu.
### Filtering
-The **FILTERING** section lists the SingleFrame filters iterated by a Dataview, applied to the experiment data.
+The **FILTERING** section lists the SingleFrame filters iterated by a Dataview, applied to the task data.
Each frame filter is composed of:
* A Dataset version to input from