Update docstrings (#1289)

Edit docstrings so they can be rendered using MDX
This commit is contained in:
pollfly 2024-06-20 16:59:53 +03:00 committed by GitHub
parent 4c79e06643
commit 60d138bc56
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
18 changed files with 121 additions and 111 deletions

View File

@ -480,7 +480,7 @@ class PipelineController(object):
The current step in the pipeline will be sent for execution only after all the parent nodes
have been executed successfully.
:param parameter_override: Optional parameter overriding dictionary.
The dict values can reference a previously executed step using the following form '${step_name}'. Examples:
The dict values can reference a previously executed step using the following form ``'${step_name}'``. Examples:
- Artifact access ``parameter_override={'Args/input_file': '${<step_name>.artifacts.<artifact_name>.url}' }``
- Model access (last model used) ``parameter_override={'Args/input_file': '${<step_name>.models.output.-1.url}' }``
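The override semantics above can be sketched with plain dictionaries (the step name ``preprocess`` and artifact name ``dataset`` here are hypothetical, not part of the source):

```python
# Base parameters of the cloned step, and an override that points an
# argument at a previous step's artifact URL (resolved at runtime)
base_params = {"Args/input_file": "local.csv", "Args/epochs": "10"}
parameter_override = {"Args/input_file": "${preprocess.artifacts.dataset.url}"}

# Overrides win over the base values, key by key
effective = {**base_params, **parameter_override}
print(effective["Args/input_file"])  # ${preprocess.artifacts.dataset.url}
print(effective["Args/epochs"])      # 10
```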
@ -494,11 +494,11 @@ class PipelineController(object):
:param configuration_overrides: Optional, override Task configuration objects.
Expected dictionary of configuration object name and configuration object content.
Examples:
{'General': dict(key='value')}
{'General': 'configuration file content'}
{'OmegaConf': YAML.dumps(full_hydra_dict)}
``{'General': dict(key='value')}``
``{'General': 'configuration file content'}``
``{'OmegaConf': YAML.dumps(full_hydra_dict)}``
:param task_overrides: Optional task section overriding dictionary.
The dict values can reference a previously executed step using the following form '${step_name}'. Examples:
The dict values can reference a previously executed step using the following form ``'${step_name}'``. Examples:
- get the latest commit from a specific branch ``task_overrides={'script.version_num': '', 'script.branch': 'main'}``
- match git repository branch to a previous step ``task_overrides={'script.branch': '${stage1.script.branch}', 'script.version_num': ''}``
@ -549,7 +549,7 @@ class PipelineController(object):
the Node is skipped and so is any node in the DAG that relies on this node.
Notice the `parameters` are already parsed,
e.g. `${step1.parameters.Args/param}` is replaced with relevant value.
e.g. ``${step1.parameters.Args/param}`` is replaced with relevant value.
.. code-block:: py
@ -774,7 +774,7 @@ class PipelineController(object):
If not provided, automatically take all function arguments & defaults
Optional, pass input arguments to the function from other Tasks' output artifact.
Example argument named `numpy_matrix` from Task ID `aabbcc` artifact name `answer`:
{'numpy_matrix': 'aabbcc.answer'}
``{'numpy_matrix': 'aabbcc.answer'}``
:param function_return: Provide a list of names for all the results.
If not provided, no results will be stored as artifacts.
:param project_name: Set the project name for the task. Required if base_task_id is None.
@ -842,7 +842,7 @@ class PipelineController(object):
the Node is skipped and so is any node in the DAG that relies on this node.
Notice the `parameters` are already parsed,
e.g. `${step1.parameters.Args/param}` is replaced with relevant value.
e.g. ``${step1.parameters.Args/param}`` is replaced with relevant value.
.. code-block:: py
@ -991,7 +991,7 @@ class PipelineController(object):
the Node is skipped and so is any node in the DAG that relies on this node.
Notice the `parameters` are already parsed,
e.g. `${step1.parameters.Args/param}` is replaced with relevant value.
e.g. ``${step1.parameters.Args/param}`` is replaced with relevant value.
.. code-block:: py
@ -1416,7 +1416,7 @@ class PipelineController(object):
The parameter can be used as input parameter for any step in the pipeline.
Notice all parameters will appear under the PipelineController Task's Hyper-parameters -> Pipeline section
Example: pipeline.add_parameter(name='dataset', description='dataset ID to process the pipeline')
Then in one of the steps we can refer to the value of the parameter with '${pipeline.dataset}'
Then in one of the steps we can refer to the value of the parameter with ``'${pipeline.dataset}'``
:param name: String name of the parameter.
:param default: Default value to be put as the default value (can be later changed in the UI)
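A minimal stand-alone sketch of how a ``'${pipeline.<name>}'`` reference could be resolved against the pipeline's parameter dict (the helper name and dict are hypothetical illustrations, not the library's internals):

```python
def resolve_pipeline_ref(value, pipeline_params):
    # Replace '${pipeline.<name>}' with the parameter's current value;
    # anything else is returned unchanged
    prefix, suffix = "${pipeline.", "}"
    if isinstance(value, str) and value.startswith(prefix) and value.endswith(suffix):
        name = value[len(prefix):-len(suffix)]
        return pipeline_params.get(name, value)
    return value

params = {"dataset": "a1b2c3"}
print(resolve_pipeline_ref("${pipeline.dataset}", params))  # a1b2c3
print(resolve_pipeline_ref("plain-value", params))          # plain-value
```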
@ -1445,7 +1445,7 @@ class PipelineController(object):
.. note::
A worker daemon must be listening at the queue for the worker to fetch the Task and execute it,
see `ClearML Agent <../clearml_agent>`_ in the ClearML Documentation.
see "ClearML Agent" in the ClearML Documentation.
:param pipeline_controller: The PipelineController to enqueue. Specify a PipelineController object or PipelineController ID
:param queue_name: The name of the queue. If not specified, then ``queue_id`` must be specified.
@ -1661,7 +1661,7 @@ class PipelineController(object):
the Node is skipped and so is any node in the DAG that relies on this node.
Notice the `parameters` are already parsed,
e.g. `${step1.parameters.Args/param}` is replaced with relevant value.
e.g. ``${step1.parameters.Args/param}`` is replaced with relevant value.
.. code-block:: py
@ -2127,7 +2127,7 @@ class PipelineController(object):
If not provided, automatically take all function arguments & defaults
Optional, pass input arguments to the function from other Tasks' output artifact.
Example argument named `numpy_matrix` from Task ID `aabbcc` artifact name `answer`:
{'numpy_matrix': 'aabbcc.answer'}
``{'numpy_matrix': 'aabbcc.answer'}``
:param function_return: Provide a list of names for all the results.
If not provided, no results will be stored as artifacts.
:param project_name: Set the project name for the task. Required if base_task_id is None.
@ -2195,7 +2195,7 @@ class PipelineController(object):
the Node is skipped and so is any node in the DAG that relies on this node.
Notice the `parameters` are already parsed,
e.g. `${step1.parameters.Args/param}` is replaced with relevant value.
e.g. ``${step1.parameters.Args/param}`` is replaced with relevant value.
.. code-block:: py
@ -3015,7 +3015,7 @@ class PipelineController(object):
def _parse_step_ref(self, value, recursive=False):
# type: (Any) -> Optional[str]
"""
Return the step reference. For example "${step1.parameters.Args/param}"
Return the step reference. For example ``"${step1.parameters.Args/param}"``
:param value: string
:param recursive: if True, recursively parse all values in the dict, list or tuple
:return:
@ -3047,7 +3047,7 @@ class PipelineController(object):
def _parse_task_overrides(self, task_overrides):
# type: (dict) -> dict
"""
Return the step reference. For example "${step1.parameters.Args/param}"
Return the step reference. For example ``"${step1.parameters.Args/param}"``
:param task_overrides: string
:return:
"""
@ -3265,11 +3265,11 @@ class PipelineController(object):
def __verify_step_reference(self, node, step_ref_string):
# type: (PipelineController.Node, str) -> Optional[str]
"""
Verify the step reference. For example "${step1.parameters.Args/param}"
Verify the step reference. For example ``"${step1.parameters.Args/param}"``
Raise ValueError on misconfiguration
:param Node node: calling reference node (used for logging)
:param str step_ref_string: For example "${step1.parameters.Args/param}"
:param str step_ref_string: For example ``"${step1.parameters.Args/param}"``
:return: If step reference is used, return the pipeline step name, otherwise return None
"""
parts = step_ref_string[2:-1].split('.')
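The reference format can be illustrated with a self-contained parser mirroring the split above (a sketch only; the real method performs much fuller validation):

```python
def parse_step_ref(step_ref_string):
    # "${step1.parameters.Args/param}" -> ("step1", ["parameters", "Args/param"])
    if not (step_ref_string.startswith("${") and step_ref_string.endswith("}")):
        return None
    parts = step_ref_string[2:-1].split(".")
    if len(parts) < 2:
        return None
    return parts[0], parts[1:]

print(parse_step_ref("${step1.parameters.Args/param}"))
# ('step1', ['parameters', 'Args/param'])
```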
@ -4076,7 +4076,7 @@ class PipelineDecorator(PipelineController):
the Node is skipped and so is any node in the DAG that relies on this node.
Notice the `parameters` are already parsed,
e.g. `${step1.parameters.Args/param}` is replaced with relevant value.
e.g. ``${step1.parameters.Args/param}`` is replaced with relevant value.
.. code-block:: py

View File

@ -135,17 +135,18 @@ class OptimizerBOHB(SearchStrategy, RandomSeed):
Optimization. Instead of sampling new configurations at random,
BOHB uses kernel density estimators to select promising candidates.
.. note::
.. code-block::
For reference:
@InProceedings{falkner-icml-18,
title = {{BOHB}: Robust and Efficient Hyperparameter Optimization at Scale},
author = {Falkner, Stefan and Klein, Aaron and Hutter, Frank},
booktitle = {Proceedings of the 35th International Conference on Machine Learning},
pages = {1436--1445},
year = {2018},
title = {{BOHB}: Robust and Efficient Hyperparameter Optimization at Scale},
author = {Falkner, Stefan and Klein, Aaron and Hutter, Frank},
booktitle = {Proceedings of the 35th International Conference on Machine Learning},
pages = {1436--1445},
year = {2018},
}
:param str base_task_id: Task ID (str)
:param list hyper_parameters: list of Parameter objects to optimize over
:param Objective objective_metric: Objective metric to maximize / minimize
@ -218,18 +219,17 @@ class OptimizerBOHB(SearchStrategy, RandomSeed):
Optimization. Instead of sampling new configurations at random,
BOHB uses kernel density estimators to select promising candidates.
.. note::
.. code-block::
For reference:
@InProceedings{falkner-icml-18,
title = {{BOHB}: Robust and Efficient Hyperparameter Optimization at Scale},
author = {Falkner, Stefan and Klein, Aaron and Hutter, Frank},
booktitle = {Proceedings of the 35th International Conference on Machine Learning},
pages = {1436--1445},
year = {2018},
title = {{BOHB}: Robust and Efficient Hyperparameter Optimization at Scale},
author = {Falkner, Stefan and Klein, Aaron and Hutter, Frank},
booktitle = {Proceedings of the 35th International Conference on Machine Learning},
pages = {1436--1445},
year = {2018},
}
:param eta: float (3)
In each iteration, a complete run of sequential halving is executed. In it,
after evaluating each configuration on the same subset size, only a fraction of

View File

@ -531,13 +531,13 @@ class ClearmlJob(BaseJob):
:param str base_task_id: base task ID to clone from
:param dict parameter_override: dictionary of parameters and values to set for the cloned task
:param dict task_overrides: Task object specific overrides.
for example {'script.version_num': None, 'script.branch': 'main'}
for example ``{'script.version_num': None, 'script.branch': 'main'}``
:param configuration_overrides: Optional, override Task configuration objects.
Expected dictionary of configuration object name and configuration object content.
Examples:
{'config_section': dict(key='value')}
{'config_file': 'configuration file content'}
{'OmegaConf': YAML.dumps(full_hydra_dict)}
``{'config_section': dict(key='value')}``
``{'config_file': 'configuration file content'}``
``{'OmegaConf': YAML.dumps(full_hydra_dict)}``
:param list tags: additional tags to add to the newly cloned task
:param str parent: Set newly created Task parent task field, default: base_task_id.
:param dict kwargs: additional Task creation parameters

View File

@ -124,7 +124,7 @@ class Monitor(object):
Return the query parameters for the monitoring.
This should be overloaded with specific implementation query
:return dict: Example dictionary: {'status': ['failed'], 'order_by': ['-last_update']}
:return dict: Example dictionary: ``{'status': ['failed'], 'order_by': ['-last_update']}``
"""
return dict(status=['failed'], order_by=['-last_update'])

View File

@ -638,7 +638,7 @@ class SearchStrategy(object):
:param all_hyper_parameters: Default False. If True, return all the hyperparameters from all the sections.
:param only_completed: return only completed Tasks. Default False.
:return: A list of dictionaries ({task_id: '', hyper_parameters: {}, metrics: {}}), ordered by performance,
:return: A list of dictionaries ``({task_id: '', hyper_parameters: {}, metrics: {}})``, ordered by performance,
where index 0 is the best performing Task.
Example w/ all_metrics=False:
@ -1733,7 +1733,7 @@ class HyperParameterOptimizer(object):
:param all_hyper_parameters: Default False. If True, return all the hyperparameters from all the sections.
:param only_completed: return only completed Tasks. Default False.
:return: A list of dictionaries ({task_id: '', hyper_parameters: {}, metrics: {}}), ordered by performance,
:return: A list of dictionaries ``({task_id: '', hyper_parameters: {}, metrics: {}})``, ordered by performance,
where index 0 is the best performing Task.
Example w/ all_metrics=False:

View File

@ -69,7 +69,7 @@ class Parameter(RandomSeed):
"""
Return a list of all the valid values of the Parameter.
:return: List of dicts {name: value}
:return: List of dicts ``{name: value}``
"""
pass
@ -147,7 +147,7 @@ class UniformParameterRange(Parameter):
"""
Return uniformly sampled value based on object sampling definitions.
:return: {self.name: random value [self.min_value, self.max_value)}
:return: ``{self.name: random value [self.min_value, self.max_value)}``
"""
if not self.step_size:
return {self.name: self._random.uniform(self.min_value, self.max_value)}
@ -160,7 +160,7 @@ class UniformParameterRange(Parameter):
Return a list of all the valid values of the Parameter. If ``self.step_size`` is not defined, return 100 points
between min/max values.
:return: list of dicts {name: float}
:return: list of dicts ``{name: float}``
"""
step_size = self.step_size or (self.max_value - self.min_value) / 100.
steps = (self.max_value - self.min_value) / step_size
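The min/max/step logic described in the docstring can be sketched stand-alone (simplified; it ignores the ``include_max`` handling of the real class):

```python
def uniform_to_list(name, min_value, max_value, step_size=None):
    # Default to 100 points between min/max when no step size is given
    step = step_size or (max_value - min_value) / 100.0
    steps = int((max_value - min_value) / step)
    return [{name: min_value + i * step} for i in range(steps)]

values = uniform_to_list("lr", 0.0, 1.0)
print(len(values))  # 100
print(values[0])    # {'lr': 0.0}
```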
@ -208,7 +208,7 @@ class LogUniformParameterRange(UniformParameterRange):
"""
Return uniformly logarithmic sampled value based on object sampling definitions.
:return: {self.name: random value self.base^[self.min_value, self.max_value)}
:return: ``{self.name: random value self.base^[self.min_value, self.max_value)}``
"""
values_dict = super().get_value()
return {self.name: self.base**v for v in values_dict.values()}
@ -250,7 +250,7 @@ class UniformIntegerParameterRange(Parameter):
"""
Return uniformly sampled value based on object sampling definitions.
:return: {self.name: random value [self.min_value, self.max_value)}
:return: ``{self.name: random value [self.min_value, self.max_value)}``
"""
return {self.name: self._random.randrange(
start=self.min_value, step=self.step_size,
@ -262,7 +262,7 @@ class UniformIntegerParameterRange(Parameter):
Return a list of all the valid values of the Parameter. If ``self.step_size`` is not defined, return 100 points
between min/max values.
:return: list of dicts {name: int}
:return: list of dicts ``{name: int}``
"""
values = list(range(self.min_value, self.max_value, self.step_size))
if self.include_max and (not values or values[-1] < self.max_value):
@ -291,7 +291,7 @@ class DiscreteParameterRange(Parameter):
"""
Return uniformly sampled value from the valid list of values.
:return: {self.name: random entry from self.value}
:return: ``{self.name: random entry from self.value}``
"""
return {self.name: self._random.choice(self.values)}
@ -300,7 +300,7 @@ class DiscreteParameterRange(Parameter):
"""
Return a list of all the valid values of the Parameter.
:return: list of dicts {name: value}
:return: list of dicts ``{name: value}``
"""
return [{self.name: v} for v in self.values]
@ -321,15 +321,19 @@ class ParameterSet(Parameter):
.. code-block:: javascript
[ {"opt1": 10, "arg2": 20, "arg2": 30},
{"opt2": 11, "arg2": 22, "arg2": 33} ]
[
{"opt1": 10, "arg2": 20, "arg2": 30},
{"opt2": 11, "arg2": 22, "arg2": 33}
]
Two complex combination each one sampled from a different range:
.. code-block:: javascript
[ {"opt1": UniformParameterRange('arg1',0,1) , "arg2": 20},
{"opt2": UniformParameterRange('arg1',11,12), "arg2": 22},]
[
{"opt1": UniformParameterRange('arg1',0,1) , "arg2": 20},
{"opt2": UniformParameterRange('arg1',11,12), "arg2": 22},
]
"""
super(ParameterSet, self).__init__(name=None)
self.values = parameter_combinations
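A sketch of the sampling described above, using plain dicts in place of the Parameter objects (combination contents are hypothetical):

```python
import random

parameter_combinations = [
    {"opt1": 10, "arg2": 20, "arg3": 30},
    {"opt2": 11, "arg2": 22, "arg3": 33},
]

# get_value(): pick one full combination uniformly at random,
# so the arguments inside a combination always move together
rng = random.Random(42)
combination = rng.choice(parameter_combinations)
print(combination in parameter_combinations)  # True
```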
@ -339,7 +343,7 @@ class ParameterSet(Parameter):
"""
Return uniformly sampled value from the valid list of values.
:return: {self.name: random entry from self.value}
:return: ``{self.name: random entry from self.value}``
"""
return self._get_value(self._random.choice(self.values))
@ -348,7 +352,7 @@ class ParameterSet(Parameter):
"""
Return a list of all the valid values of the Parameter.
:return: list of dicts {name: value}
:return: list of dicts ``{name: value}``
"""
combinations = []
for combination in self.values:

View File

@ -628,9 +628,9 @@ class TaskScheduler(BaseScheduler):
then recurring based on the timing schedule arguments. Default False.
:param reuse_task: If True, re-enqueue the same Task (i.e. do not clone it) every time, default False.
:param task_parameters: Configuration parameters to the executed Task.
for example: {'Args/batch': '12'} Notice: not available when reuse_task=True
for example: ``{'Args/batch': '12'}`` Notice: not available when reuse_task=True
:param task_overrides: Change task definition.
for example {'script.version_num': None, 'script.branch': 'main'} Notice: not available when reuse_task=True
for example ``{'script.version_num': None, 'script.branch': 'main'}`` Notice: not available when reuse_task=True
:return: True if job is successfully added to the scheduling list
"""

View File

@ -223,7 +223,7 @@ class TriggerScheduler(BaseScheduler):
Notice it is recommended to give the trigger a descriptive unique name; if not provided, a task ID is used.
Notice `task_overrides` can accept a reference to the trigger model ID:
example: task_overrides={'Args/model_id': '${model.id}'}
example: ``task_overrides={'Args/model_id': '${model.id}'}``
Notice if schedule_function is passed, use the following function interface:
.. code-block:: py
@ -251,9 +251,9 @@ class TriggerScheduler(BaseScheduler):
(skip until the next scheduled time period). Default False.
:param reuse_task: If True, re-enqueue the same Task (i.e. do not clone it) every time, default False.
:param task_parameters: Configuration parameters to the executed Task.
for example: {'Args/batch': '12'} Notice: not available when reuse_task=True
for example: ``{'Args/batch': '12'}`` Notice: not available when reuse_task=True
:param task_overrides: Change task definition.
for example {'script.version_num': None, 'script.branch': 'main'} Notice: not available when reuse_task=True
for example ``{'script.version_num': None, 'script.branch': 'main'}`` Notice: not available when reuse_task=True
:return: True if job is successfully added to the scheduling list
"""
trigger = ModelTrigger(
@ -303,7 +303,7 @@ class TriggerScheduler(BaseScheduler):
Notice, it is recommended to give the trigger a descriptive unique name. If not provided, a task ID is used.
Notice `task_overrides` can accept a reference to the trigger dataset ID:
example: task_overrides={'Args/dataset_id': '${dataset.id}'}.
example: ``task_overrides={'Args/dataset_id': '${dataset.id}'}``.
Notice if schedule_function is passed, use the following function interface:
@ -333,9 +333,9 @@ class TriggerScheduler(BaseScheduler):
(skip until the next scheduled time period). Default False.
:param reuse_task: If True, re-enqueue the same Task (i.e. do not clone it) every time, default False.
:param task_parameters: Configuration parameters to the executed Task.
For example: {'Args/batch': '12'} Notice: not available when reuse_task=True/
For example: ``{'Args/batch': '12'}``. Notice: not available when reuse_task=True
:param task_overrides: Change task definition.
For example {'script.version_num': None, 'script.branch': 'main'}. Notice: not available when reuse_task=True
For example ``{'script.version_num': None, 'script.branch': 'main'}``. Notice: not available when reuse_task=True
:return: True if job is successfully added to the scheduling list
"""
if trigger_project:
@ -414,7 +414,7 @@ class TriggerScheduler(BaseScheduler):
Notice it is recommended to give the trigger a descriptive unique name; if not provided, a task ID is used.
Notice `task_overrides` can accept a reference to the trigger task ID:
example: task_overrides={'Args/task_id': '${task.id}'}
example: ``task_overrides={'Args/task_id': '${task.id}'}``
Notice if schedule_function is passed, use the following function interface:
.. code-block:: py
@ -448,9 +448,9 @@ class TriggerScheduler(BaseScheduler):
(skip until the next scheduled time period). Default False.
:param reuse_task: If True, re-enqueue the same Task (i.e. do not clone it) every time, default False.
:param task_parameters: Configuration parameters to the executed Task.
for example: {'Args/batch': '12'} Notice: not available when reuse_task=True/
for example: ``{'Args/batch': '12'}`` Notice: not available when reuse_task=True
:param task_overrides: Change task definition.
for example {'script.version_num': None, 'script.branch': 'main'} Notice: not available when reuse_task=True
for example ``{'script.version_num': None, 'script.branch': 'main'}``. Notice: not available when reuse_task=True
:return: True if job is successfully added to the scheduling list
"""
trigger = TaskTrigger(

View File

@ -7459,7 +7459,7 @@ class GetAllRequest(Request):
:param parent: Parent ID
:type parent: str
:param status_changed: List of status changed constraint strings (utcformat,
epoch) with an optional prefix modifier (>, >=, <, <=)
epoch) with an optional prefix modifier (\>,\>=, \<, \<=)
:type status_changed: Sequence[str]
:param search_text: Free text search query
:type search_text: str
@ -7565,7 +7565,7 @@ class GetAllRequest(Request):
"type": ["array", "null"],
},
"status_changed": {
"description": "List of status changed constraint strings (utcformat, epoch) with an optional prefix modifier (>, >=, <, <=)",
"description": "List of status changed constraint strings (utcformat, epoch) with an optional prefix modifier (\>,\>=, \<, \<=)",
"items": {"pattern": "^(>=|>|<=|<)?.*$", "type": "string"},
"type": ["array", "null"],
},
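The escaped operators are only an MDX rendering concern; the actual constraint syntax is the pattern shown in the schema. A quick check with Python's ``re`` (example values are hypothetical):

```python
import re

# Same pattern as the "status_changed" schema entry above
pattern = re.compile(r"^(>=|>|<=|<)?.*$")

match = pattern.match(">=2024-01-01T00:00:00")
print(match.group(1))  # >=

# A bare timestamp is also valid: the prefix modifier is optional
print(pattern.match("2024-01-01T00:00:00").group(1))  # None
```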

View File

@ -7825,7 +7825,7 @@ class GetAllRequest(Request):
:param parent: Parent ID
:type parent: str
:param status_changed: List of status changed constraint strings (utcformat,
epoch) with an optional prefix modifier (>, >=, <, <=)
epoch) with an optional prefix modifier (\>,\>=, \<, \<=)
:type status_changed: Sequence[str]
:param search_text: Free text search query
:type search_text: str
@ -7971,8 +7971,8 @@ class GetAllRequest(Request):
},
"status_changed": {
"description": (
"List of status changed constraint strings (utcformat, epoch) with an optional prefix modifier (>,"
" >=, <, <=)"
"List of status changed constraint strings (utcformat, epoch) with an optional prefix modifier (\>,"
" \>=, \<, \<=)"
),
"items": {"pattern": "^(>=|>|<=|<)?.*$", "type": "string"},
"type": ["array", "null"],

View File

@ -2179,7 +2179,7 @@ class GetAllRequest(Request):
:param uri: List of model URIs
:type uri: Sequence[str]
:param last_update: List of last_update constraint strings (utcformat, epoch)
with an optional prefix modifier (>, >=, <, <=)
with an optional prefix modifier (\>,\>=, \<, \<=)
:type last_update: Sequence[str]
:param _all_: Multi-field pattern condition (all fields match pattern)
:type _all_: MultiFieldPatternData
@ -2243,7 +2243,7 @@ class GetAllRequest(Request):
"last_update": {
"description": (
"List of last_update constraint strings, or a single string (utcformat, epoch) with an optional prefix "
"modifier (>, >=, <, <=)"
"modifier (\>,\>=, \<, \<=)"
),
"items": {"pattern": "^(>=|>|<=|<)?.*$", "type": "string"},
"type": ["string", "array", "null"],

View File

@ -8070,7 +8070,7 @@ class GetAllRequest(Request):
:param parent: Parent ID
:type parent: str
:param status_changed: List of status changed constraint strings (utcformat,
epoch) with an optional prefix modifier (>, >=, <, <=)
epoch) with an optional prefix modifier (\>,\>=, \<, \<=)
:type status_changed: Sequence[str]
:param search_text: Free text search query
:type search_text: str
@ -8219,7 +8219,7 @@ class GetAllRequest(Request):
"status_changed": {
"description": (
"List of status changed constraint strings, or a single string (utcformat, epoch) with an optional prefix modifier "
"(>, >=, <, <=)"
"(\>, \>=, \<, \<=)"
),
"items": {"pattern": "^(>=|>|<=|<)?.*$", "type": "string"},
"type": ["string", "array", "null"],

View File

@ -5371,7 +5371,7 @@ class GetAllRequest(Request):
:param parent: Parent ID
:type parent: str
:param status_changed: List of status changed constraint strings (utcformat,
epoch) with an optional prefix modifier (>, >=, <, <=)
epoch) with an optional prefix modifier (\>,\>=, \<, \<=)
:type status_changed: Sequence[str]
:param search_text: Free text search query
:type search_text: str
@ -5477,7 +5477,7 @@ class GetAllRequest(Request):
'type': ['array', 'null'],
},
'status_changed': {
'description': 'List of status changed constraint strings (utcformat, epoch) with an optional prefix modifier (>, >=, <, <=)',
'description': 'List of status changed constraint strings (utcformat, epoch) with an optional prefix modifier (\>,\>=, \<, \<=)',
'items': {'pattern': '^(>=|>|<=|<)?.*$', 'type': 'string'},
'type': ['array', 'null'],
},

View File

@ -93,6 +93,9 @@ class Task(IdObjectBase, AccessMixin, SetupUploadMixin):
def __eq__(self, other):
return str(self) == str(other)
def __repr__(self):
return f"TaskTypes.{self.value}"
training = 'training'
testing = 'testing'
inference = "inference"
@ -112,6 +115,9 @@ class Task(IdObjectBase, AccessMixin, SetupUploadMixin):
def __eq__(self, other):
return str(self) == str(other)
def __repr__(self):
return f"TaskTypes.{self.value}"
created = "created"
queued = "queued"
in_progress = "in_progress"
@ -1119,10 +1125,10 @@ class Task(IdObjectBase, AccessMixin, SetupUploadMixin):
Get the parameters for a Task. This method returns a complete group of key-value parameter pairs, but does not
support parameter descriptions (the result is a dictionary of key-value pairs).
Notice the returned parameter dict is flat:
i.e. {'Args/param': 'value'} is the argument "param" from section "Args"
i.e. ``{'Args/param': 'value'}`` is the argument "param" from section "Args"
:param backwards_compatibility: If True (default), parameters without section name
(API version < 2.9, clearml-server < 0.16) will be at dict root level.
(API version ``<2.9``, clearml-server ``<0.16``) will be at dict root level.
If False, parameters without section name, will be nested under "Args/" key.
:param cast: If True, cast the parameter to the original type. Default False,
values are returned in their string representation
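The flat ``'Section/name'`` convention can be illustrated with a small stand-alone helper (a sketch; the real method also handles type casting and legacy servers):

```python
def unflatten_params(flat_params):
    # {'Args/param': 'value'} -> {'Args': {'param': 'value'}}
    nested = {}
    for key, value in flat_params.items():
        section, sep, name = key.partition("/")
        if not sep:
            # No section prefix (old API versions): keep at root level
            nested[key] = value
            continue
        nested.setdefault(section, {})[name] = value
    return nested

print(unflatten_params({"Args/param": "value", "legacy": "1"}))
# {'Args': {'param': 'value'}, 'legacy': '1'}
```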
@ -1158,7 +1164,7 @@ class Task(IdObjectBase, AccessMixin, SetupUploadMixin):
Set the parameters for a Task. This method sets a complete group of key-value parameter pairs, but does not
support parameter descriptions (the input is a dictionary of key-value pairs).
Notice the parameter dict is flat:
i.e. {'Args/param': 'value'} will set the argument "param" in section "Args" to "value"
i.e. ``{'Args/param': 'value'}`` will set the argument "param" in section "Args" to "value"
:param args: Positional arguments, which are one or more dictionaries or (key, value) iterable. They are
merged into a single key-value pair dictionary.
@ -1380,7 +1386,7 @@ class Task(IdObjectBase, AccessMixin, SetupUploadMixin):
Update the parameters for a Task. This method updates a complete group of key-value parameter pairs, but does
not support parameter descriptions (the input is a dictionary of key-value pairs).
Notice the parameter dict is flat:
i.e. {'Args/param': 'value'} will set the argument "param" in section "Args" to "value"
i.e. ``{'Args/param': 'value'}`` will set the argument "param" in section "Args" to "value"
:param args: Positional arguments, which are one or more dictionaries or (key, value) iterable. They are
merged into a single key-value pair dictionary.

View File

@ -1917,7 +1917,7 @@ class Dataset(object):
If False, don't search inside subprojects (except for the special `.datasets` subproject)
:param include_archived: If True, include archived datasets as well.
:return: List of dictionaries with dataset information
Example: [{'name': name, 'project': project name, 'id': dataset_id, 'created': date_created},]
Example: ``[{'name': name, 'project': project name, 'id': dataset_id, 'created': date_created},]``
"""
# if include_archived is False, we need to add the system tag __$not:archived to filter out archived datasets
if not include_archived:

View File

@ -43,8 +43,8 @@ class Logger(object):
methods include scalar plots, line plots, histograms, confusion matrices, 2D and 3D scatter
diagrams, text logging, tables, and image uploading and reporting.
In the **ClearML Web-App (UI)**, ``Logger`` output appears in the **RESULTS** tab, **CONSOLE**, **SCALARS**,
**PLOTS**, and **DEBUG SAMPLES** sub-tabs. When you compare experiments, ``Logger`` output appears in the
In the ClearML Web-App (UI), ``Logger`` output appears in CONSOLE, SCALARS,
PLOTS, and DEBUG SAMPLES tabs. When you compare experiments, ``Logger`` output appears in the
comparisons.
.. warning::
@ -52,7 +52,7 @@ class Logger(object):
Do not construct Logger objects directly.
You must get a Logger object before calling any of the other ``Logger`` class methods by calling
:meth:`.Task.get_logger` or :meth:`Logger.current_logger`.
``Task.get_logger`` or ``Logger.current_logger``.
"""
@ -232,7 +232,7 @@ class Logger(object):
:param mode: Multiple histograms mode, stack / group / relative. Default is 'group'.
:param extra_layout: optional dictionary for layout configuration, passed directly to plotly
See full details on the supported configuration: https://plotly.com/javascript/reference/layout/
example: extra_layout={'showlegend': False, 'plot_bgcolor': 'yellow'}
example: ``extra_layout={'showlegend': False, 'plot_bgcolor': 'yellow'}``
"""
warnings.warn(
":meth:`Logger.report_vector` is deprecated; use :meth:`Logger.report_histogram` instead.",
@ -294,10 +294,10 @@ class Logger(object):
:param mode: Multiple histograms mode, stack / group / relative. Default is 'group'.
:param data_args: optional dictionary for data configuration, passed directly to plotly
See full details on the supported configuration: https://plotly.com/javascript/reference/bar/
example: data_args={'orientation': 'h', 'marker': {'color': 'blue'}}
example: ``data_args={'orientation': 'h', 'marker': {'color': 'blue'}}``
:param extra_layout: optional dictionary for layout configuration, passed directly to plotly
See full details on the supported configuration: https://plotly.com/javascript/reference/bar/
example: extra_layout={'xaxis': {'type': 'date', 'range': ['2020-01-01', '2020-01-31']}}
example: ``extra_layout={'xaxis': {'type': 'date', 'range': ['2020-01-01', '2020-01-31']}}``
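Since ``extra_layout`` and ``data_args`` are passed straight through to plotly, they behave like a dict update over whatever layout ClearML builds by default. A plain-dict sketch (the base layout contents here are hypothetical):

```python
# Hypothetical defaults built for a plot before user overrides
base_layout = {"title": "values", "showlegend": True}
extra_layout = {"showlegend": False, "plot_bgcolor": "yellow"}

# Keys in extra_layout override the defaults; new keys are added
layout = {**base_layout, **extra_layout}
print(layout["showlegend"])    # False
print(layout["plot_bgcolor"])  # yellow
```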
"""
if not isinstance(values, np.ndarray):
@ -465,7 +465,7 @@ class Logger(object):
:param str comment: A comment displayed with the plot, underneath the title.
:param dict extra_layout: optional dictionary for layout configuration, passed directly to plotly
See full details on the supported configuration: https://plotly.com/javascript/reference/scatter/
example: extra_layout={'xaxis': {'type': 'date', 'range': ['2020-01-01', '2020-01-31']}}
example: ``extra_layout={'xaxis': {'type': 'date', 'range': ['2020-01-01', '2020-01-31']}}``
.. note::
This method is the same as :meth:`Logger.report_scatter2d` with :param:`mode='lines'`.
@ -550,7 +550,7 @@ class Logger(object):
:param str comment: A comment displayed with the plot, underneath the title.
:param dict extra_layout: optional dictionary for layout configuration, passed directly to plotly
See full details on the supported configuration: https://plotly.com/javascript/reference/scatter/
example: extra_layout={'xaxis': {'type': 'date', 'range': ['2020-01-01', '2020-01-31']}}
example: ``extra_layout={'xaxis': {'type': 'date', 'range': ['2020-01-01', '2020-01-31']}}``
"""
if not isinstance(scatter, np.ndarray):
@ -620,7 +620,7 @@ class Logger(object):
:param str comment: A comment displayed with the plot, underneath the title.
:param dict extra_layout: optional dictionary for layout configuration, passed directly to plotly
See full details on the supported configuration: https://plotly.com/javascript/reference/scatter3d/
example: extra_layout={'xaxis': {'type': 'date', 'range': ['2020-01-01', '2020-01-31']}}
example: ``extra_layout={'xaxis': {'type': 'date', 'range': ['2020-01-01', '2020-01-31']}}``
"""
# check if multiple series
multi_series = isinstance(scatter, list) and (
@@ -694,7 +694,7 @@ class Logger(object):
:param str comment: A comment displayed with the plot, underneath the title.
:param dict extra_layout: optional dictionary for layout configuration, passed directly to plotly
See full details on the supported configuration: https://plotly.com/javascript/reference/heatmap/
example: extra_layout={'xaxis': {'type': 'date', 'range': ['2020-01-01', '2020-01-31']}}
example: ``extra_layout={'xaxis': {'type': 'date', 'range': ['2020-01-01', '2020-01-31']}}``
"""
if not isinstance(matrix, np.ndarray):
@@ -752,7 +752,7 @@ class Logger(object):
:param bool yaxis_reversed: If False, 0,0 is in the bottom left corner. If True, 0,0 is in the top left corner
:param dict extra_layout: optional dictionary for layout configuration, passed directly to plotly
See full details on the supported configuration: https://plotly.com/javascript/reference/heatmap/
example: extra_layout={'xaxis': {'type': 'date', 'range': ['2020-01-01', '2020-01-31']}}
example: ``extra_layout={'xaxis': {'type': 'date', 'range': ['2020-01-01', '2020-01-31']}}``
"""
warnings.warn(
":meth:`Logger.report_matrix` is deprecated; use :meth:`Logger.report_confusion_matrix` instead.",
@@ -813,7 +813,7 @@ class Logger(object):
:param str comment: A comment displayed with the plot, underneath the title.
:param dict extra_layout: optional dictionary for layout configuration, passed directly to plotly
See full details on the supported configuration: https://plotly.com/javascript/reference/surface/
example: extra_layout={'xaxis': {'type': 'date', 'range': ['2020-01-01', '2020-01-31']}}
example: ``extra_layout={'xaxis': {'type': 'date', 'range': ['2020-01-01', '2020-01-31']}}``
"""
if not isinstance(matrix, np.ndarray):
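Across all of the reporting methods above, ``extra_layout`` is a plain nested dict forwarded to Plotly's layout configuration. A minimal sketch of how such an override could combine with a base layout — illustrative only; ``deep_merge`` is a hypothetical helper, and ClearML's actual merge behavior is not shown in this diff:

```python
# Sketch: extra_layout overrides are nested dicts layered onto a base
# Plotly layout. deep_merge is a hypothetical helper, not a ClearML API.

def deep_merge(base, extra):
    """Recursively merge ``extra`` into a copy of ``base``."""
    merged = dict(base)
    for key, value in extra.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

base_layout = {"title": "Scatter", "xaxis": {"type": "linear"}}
extra_layout = {"xaxis": {"type": "date", "range": ["2020-01-01", "2020-01-31"]}}
layout = deep_merge(base_layout, extra_layout)
```

The override replaces ``xaxis.type`` and adds ``xaxis.range`` while leaving unrelated keys such as ``title`` untouched.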


@@ -504,10 +504,10 @@ class BaseModel(object):
:param mode: Multiple histograms mode, stack / group / relative. Default is 'group'.
:param data_args: optional dictionary for data configuration, passed directly to plotly
See full details on the supported configuration: https://plotly.com/javascript/reference/bar/
example: data_args={'orientation': 'h', 'marker': {'color': 'blue'}}
example: ``data_args={'orientation': 'h', 'marker': {'color': 'blue'}}``
:param extra_layout: optional dictionary for layout configuration, passed directly to plotly
See full details on the supported configuration: https://plotly.com/javascript/reference/bar/
example: extra_layout={'xaxis': {'type': 'date', 'range': ['2020-01-01', '2020-01-31']}}
example: ``extra_layout={'xaxis': {'type': 'date', 'range': ['2020-01-01', '2020-01-31']}}``
"""
self._init_reporter()
@@ -565,7 +565,7 @@ class BaseModel(object):
:param mode: Multiple histograms mode, stack / group / relative. Default is 'group'.
:param extra_layout: optional dictionary for layout configuration, passed directly to plotly
See full details on the supported configuration: https://plotly.com/javascript/reference/layout/
example: extra_layout={'showlegend': False, 'plot_bgcolor': 'yellow'}
example: ``extra_layout={'showlegend': False, 'plot_bgcolor': 'yellow'}``
"""
self._init_reporter()
return self.report_histogram(
@@ -619,7 +619,7 @@ class BaseModel(object):
:param url: A URL to the location of csv file.
:param extra_layout: optional dictionary for layout configuration, passed directly to plotly
See full details on the supported configuration: https://plotly.com/javascript/reference/layout/
example: extra_layout={'height': 600}
example: ``extra_layout={'height': 600}``
"""
mutually_exclusive(
UsageError, _check_none=True,
@@ -697,7 +697,7 @@ class BaseModel(object):
:param str comment: A comment displayed with the plot, underneath the title.
:param dict extra_layout: optional dictionary for layout configuration, passed directly to plotly
See full details on the supported configuration: https://plotly.com/javascript/reference/scatter/
example: extra_layout={'xaxis': {'type': 'date', 'range': ['2020-01-01', '2020-01-31']}}
example: ``extra_layout={'xaxis': {'type': 'date', 'range': ['2020-01-01', '2020-01-31']}}``
"""
self._init_reporter()
@@ -770,7 +770,7 @@ class BaseModel(object):
:param str comment: A comment displayed with the plot, underneath the title.
:param dict extra_layout: optional dictionary for layout configuration, passed directly to plotly
See full details on the supported configuration: https://plotly.com/javascript/reference/scatter/
example: extra_layout={'xaxis': {'type': 'date', 'range': ['2020-01-01', '2020-01-31']}}
example: ``extra_layout={'xaxis': {'type': 'date', 'range': ['2020-01-01', '2020-01-31']}}``
"""
self._init_reporter()
@@ -838,7 +838,7 @@ class BaseModel(object):
:param str comment: A comment displayed with the plot, underneath the title.
:param dict extra_layout: optional dictionary for layout configuration, passed directly to plotly
See full details on the supported configuration: https://plotly.com/javascript/reference/scatter3d/
example: extra_layout={'xaxis': {'type': 'date', 'range': ['2020-01-01', '2020-01-31']}}
example: ``extra_layout={'xaxis': {'type': 'date', 'range': ['2020-01-01', '2020-01-31']}}``
"""
self._init_reporter()
@@ -917,7 +917,7 @@ class BaseModel(object):
:param str comment: A comment displayed with the plot, underneath the title.
:param dict extra_layout: optional dictionary for layout configuration, passed directly to plotly
See full details on the supported configuration: https://plotly.com/javascript/reference/heatmap/
example: extra_layout={'xaxis': {'type': 'date', 'range': ['2020-01-01', '2020-01-31']}}
example: ``extra_layout={'xaxis': {'type': 'date', 'range': ['2020-01-01', '2020-01-31']}}``
"""
self._init_reporter()
@@ -968,7 +968,7 @@ class BaseModel(object):
:param bool yaxis_reversed: If False, 0,0 is at the bottom left corner. If True, 0,0 is at the top left corner
:param dict extra_layout: optional dictionary for layout configuration, passed directly to plotly
See full details on the supported configuration: https://plotly.com/javascript/reference/heatmap/
example: extra_layout={'xaxis': {'type': 'date', 'range': ['2020-01-01', '2020-01-31']}}
example: ``extra_layout={'xaxis': {'type': 'date', 'range': ['2020-01-01', '2020-01-31']}}``
"""
self._init_reporter()
return self.report_confusion_matrix(
@@ -1025,7 +1025,7 @@ class BaseModel(object):
:param str comment: A comment displayed with the plot, underneath the title.
:param dict extra_layout: optional dictionary for layout configuration, passed directly to plotly
See full details on the supported configuration: https://plotly.com/javascript/reference/surface/
example: extra_layout={'xaxis': {'type': 'date', 'range': ['2020-01-01', '2020-01-31']}}
example: ``extra_layout={'xaxis': {'type': 'date', 'range': ['2020-01-01', '2020-01-31']}}``
"""
self._init_reporter()


@@ -336,7 +336,7 @@ class Task(_Task):
files_server will be used for model storage. In the default location, ClearML creates a subfolder for the
output. If set to False, local runs will not upload output models and artifacts,
and remote runs will not use any default values provided using ``default_output_uri``.
The subfolder structure is the following: `<output destination name> / <project name> / <task name>.<Task ID>`.
The subfolder structure is the following: \<output destination name\> / \<project name\> / \<task name\>.\<Task ID\>.
Note that for cloud storage, you must install the **ClearML** package for your cloud storage type,
and then configure your storage credentials. For detailed information, see "Storage" in the ClearML
Documentation.
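The subfolder layout described above can be sketched with the stdlib ``posixpath`` module; the bucket, project, and task names below are hypothetical, and the helper is an illustration rather than ClearML's implementation:

```python
import posixpath

# Sketch of the documented layout:
#   <output destination name> / <project name> / <task name>.<Task ID>
# Hypothetical helper; names below are made up for illustration.
def default_output_path(output_uri, project_name, task_name, task_id):
    return posixpath.join(output_uri, project_name,
                          "{}.{}".format(task_name, task_id))

path = default_output_path("s3://my-bucket/models", "MyProject", "train", "abc123")
```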
@@ -384,9 +384,9 @@ class Task(_Task):
frameworks. The dictionary keys are frameworks and the values are booleans, other dictionaries used for
finer control or wildcard strings.
In case of wildcard strings, the local path of a model file has to match at least one wildcard to be
saved/loaded by ClearML. Example: {'pytorch' : '*.pt', 'tensorflow': ['*.h5', '*']}
saved/loaded by ClearML. Example: ``{'pytorch' : '*.pt', 'tensorflow': ['*.h5', '*']}``
Keys missing from the dictionary default to ``True``, and an empty dictionary defaults to ``False``.
Supported keys for finer control: {'tensorboard': {'report_hparams': bool}} # whether to report TensorBoard hyperparameters
Supported keys for finer control: ``{'tensorboard': {'report_hparams': bool}}`` # whether to report TensorBoard hyperparameters
For example:
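The wildcard matching described above can be sketched with the stdlib ``fnmatch`` module — an illustration of the documented semantics, not ClearML's internal implementation:

```python
import fnmatch

# Sketch: decide whether a model file path matches a framework's wildcard
# filter, as in {'pytorch': '*.pt', 'tensorflow': ['*.h5', '*']}.
# Illustrative only -- not ClearML's internal logic.
def matches_framework_filter(path, patterns):
    if isinstance(patterns, bool):   # plain True/False toggles
        return patterns
    if isinstance(patterns, str):    # a single wildcard string
        patterns = [patterns]
    return any(fnmatch.fnmatch(path, pat) for pat in patterns)

frameworks = {"pytorch": "*.pt", "tensorflow": ["*.h5", "*"]}
```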
@@ -887,7 +887,7 @@ class Task(_Task):
direct path to a file inside the local repository itself, for example: '~/project/source/train.py'
:param working_directory: Working directory to launch the script from. Default: repository root folder.
Relative to repo root or local folder.
:param packages: Manually specify a list of required packages. Example: ["tqdm>=2.1", "scikit-learn"]
:param packages: Manually specify a list of required packages. Example: ``["tqdm>=2.1", "scikit-learn"]``
or `True` to automatically create requirements
based on locally installed packages (repository must be local).
:param requirements_file: Specify requirements.txt file to install when setting the session.
@@ -1370,7 +1370,7 @@ class Task(_Task):
.. note::
A worker daemon must be listening at the queue for the worker to fetch the Task and execute it,
see `ClearML Agent <../clearml_agent>`_ in the ClearML Documentation.
see "ClearML Agent" in the ClearML Documentation.
:param Task/str task: The Task to enqueue. Specify a Task object or Task ID.
:param str queue_name: The name of the queue. If not specified, then ``queue_id`` must be specified.
@@ -1631,7 +1631,7 @@ class Task(_Task):
:param packages: The list of packages or the path to the requirements.txt file.
Example: ["tqdm>=2.1", "scikit-learn"] or "./requirements.txt" or ""
Example: ``["tqdm>=2.1", "scikit-learn"]`` or ``"./requirements.txt"`` or ``""``
Use an empty string (packages="") to clear the requirements section (remote execution will use
requirements.txt from the git repository if the file exists)
"""
@@ -1657,7 +1657,7 @@ class Task(_Task):
Supports both git repo url link, and local repository path (automatically converted into the remote
git/commit as is currently checkout).
Example remote url: "https://github.com/user/repo.git".
Example local repo copy: "./repo" -> will automatically store the remote
Example local repo copy: "./repo" - will automatically store the remote
repo url and commit ID based on the locally cloned copy.
When executing remotely, this call will not override the repository data (it is ignored)
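Telling a remote git URL apart from a local clone path, as the two documented forms above require, can be sketched with a simple heuristic — illustrative only, not ClearML's detection logic:

```python
# Illustrative heuristic for the two documented forms of ``repo``:
# a remote git URL vs. a local clone path. Not ClearML's detection logic.
def is_remote_repo(repo):
    return repo.startswith(("https://", "http://", "git@", "ssh://"))

remote = is_remote_repo("https://github.com/user/repo.git")
local = is_remote_repo("./repo")
```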
@@ -2470,7 +2470,7 @@ class Task(_Task):
def get_models(self):
# type: () -> Mapping[str, Sequence[Model]]
"""
Return a dictionary with {'input': [], 'output': []} loaded/stored models of the current Task
Return a dictionary with ``{'input': [], 'output': []}`` loaded/stored models of the current Task
Input models are files loaded in the task, either manually or automatically logged
Output models are files stored in the task, either manually or automatically logged.
Automatically logged frameworks are for example: TensorFlow, Keras, PyTorch, ScikitLearn(joblib) etc.
@@ -3106,7 +3106,7 @@ class Task(_Task):
Defaults to ('failed').
:param check_interval_sec: Interval in seconds between two checks. Defaults to 60 seconds.
:raise: RuntimeError if the status is one of {raise_on_status}.
:raise: RuntimeError if the status is one of ``{raise_on_status}``.
"""
stopped_status = list(status) + (list(raise_on_status) if raise_on_status else [])
while self.status not in stopped_status:
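The wait-until-stopped-or-raise pattern above can be exercised in isolation with a stand-in task object; ``FakeTask``, its status strings, and the sleep-free loop are all hypothetical simplifications, not the real ``Task`` class:

```python
# Stand-in task that steps through statuses; illustrates the documented
# wait-until-stopped-or-raise behavior. Not the real Task class, and the
# check_interval_sec sleep is omitted for brevity.
class FakeTask:
    def __init__(self, statuses):
        self._statuses = iter(statuses)
        self.status = next(self._statuses)

    def refresh(self):
        self.status = next(self._statuses)

def wait_for_status(task, status=("completed",), raise_on_status=("failed",)):
    stopped = list(status) + list(raise_on_status)
    while task.status not in stopped:
        task.refresh()
    if task.status in raise_on_status:
        raise RuntimeError("Task reached status: {}".format(task.status))
    return task.status

result = wait_for_status(FakeTask(["queued", "in_progress", "completed"]))
```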