diff --git a/docs/apps/clearml_param_search.md b/docs/apps/clearml_param_search.md
index c399eed9..4fcc182a 100644
--- a/docs/apps/clearml_param_search.md
+++ b/docs/apps/clearml_param_search.md
@@ -24,28 +24,28 @@ of the optimization results in table and graph forms.
 |Name | Description| Optional |
 |---|----|---|
-|`--project-name`|Name of the project in which the optimization task will be created. If the project does not exist, it is created. If unspecified, the repository name is used.|Yes|
-|`--task-name`|Name of the optimization task. If unspecified, the base Python script's file name is used.|Yes|
-|`--task-id`|ID of a ClearML task whose hyperparameters will be optimized. Required unless `--script` is specified.|Yes|
-|`--script`|Script to run the parameter search on. Required unless `--task-id` is specified.|Yes|
-|`--queue`|Queue to enqueue the experiments on.|Yes|
-|`--params-search`|Parameters space for optimization. See more information in [Specifying the Parameter Space](#specifying-the-parameter-space). |No|
-|`--params-override`|Additional parameters of the base task to override for this parameter search. Use the following JSON format for each parameter: `{"name": "param_name", "value": }`. Windows users, see [JSON format note](#json_note).|Yes|
-|`--objective-metric-title`| Objective metric title to maximize/minimize (e.g. 'validation').|No|
-|`--objective-metric-series`| Objective metric series to maximize/minimize (e.g. 'loss').|No|
-|`--objective-metric-sign`| Optimization target, whether to maximize or minimize the value of the objective metric specified. Possible values: "min", "max", "min_global", "max_global". For more information, see [Optimization Objective](#optimization-objective). |No|
-|`--optimizer-class`|The optimizer to use. Possible values are: OptimizerOptuna (default), OptimizerBOHB, GridSearch, RandomSearch. For more information, see [Supported Optimizers](../fundamentals/hpo.md#supported-optimizers). |No|
-|`--optimization-time-limit`|The maximum time (minutes) for the optimization to run. The default is `None`, indicating no time limit.|Yes|
+|`--args`| List of `=` strings to pass to the remote execution. Currently only argparse/click/hydra/fire arguments are supported. Example: `--args lr=0.003 batch_size=64`|Yes|
 |`--compute-time-limit`|The maximum compute time in minutes that experiment can consume. If this time limit is exceeded, all jobs are aborted.|Yes|
-|`--pool-period-min`|The time between two consecutive polls (minutes).|Yes|
-|`--total-max-jobs`|The total maximum jobs for the optimization process. The default value is `None` for unlimited.|Yes|
-|`--min-iteration-per-job`|The minimum iterations (of the objective metric) per single job.|Yes|
 |`--max-iteration-per-job`|The maximum iterations (of the objective metric) per single job. When iteration maximum is exceeded, the job is aborted.|Yes|
 |`--max-number-of-concurrent-tasks`|The maximum number of concurrent Tasks (experiments) running at the same time|Yes|
-|`--args`| List of `=` strings to pass to the remote execution. Currently only argparse/click/hydra/fire arguments are supported. Example: `--args lr=0.003 batch_size=64`|Yes|
+|`--min-iteration-per-job`|The minimum iterations (of the objective metric) per single job.|Yes|
 |`--local`| If set, run the experiments locally. Notice that no new python environment will be created. The `--script` parameter must point to a local file entry point and all arguments must be passed with `--args`| Yes|
+|`--objective-metric-series`| Objective metric series to maximize/minimize (e.g. 'loss').|No|
+|`--objective-metric-sign`| Optimization target, whether to maximize or minimize the value of the objective metric specified. Possible values: "min", "max", "min_global", "max_global". For more information, see [Optimization Objective](#optimization-objective). |No|
+|`--objective-metric-title`| Objective metric title to maximize/minimize (e.g. 'validation').|No|
+|`--optimization-time-limit`|The maximum time (minutes) for the optimization to run. The default is `None`, indicating no time limit.|Yes|
+|`--optimizer-class`|The optimizer to use. Possible values are: OptimizerOptuna (default), OptimizerBOHB, GridSearch, RandomSearch. For more information, see [Supported Optimizers](../fundamentals/hpo.md#supported-optimizers). |No|
+|`--params-search`|Parameters space for optimization. See more information in [Specifying the Parameter Space](#specifying-the-parameter-space). |No|
+|`--params-override`|Additional parameters of the base task to override for this parameter search. Use the following JSON format for each parameter: `{"name": "param_name", "value": }`. Windows users, see [JSON format note](#json_note).|Yes|
+|`--pool-period-min`|The time between two consecutive polls (minutes).|Yes|
+|`--project-name`|Name of the project in which the optimization task will be created. If the project does not exist, it is created. If unspecified, the repository name is used.|Yes|
+|`--queue`|Queue to enqueue the experiments on.|Yes|
 |`--save-top-k-tasks-only`| Keep only the top \ performing tasks, and archive the rest of the experiments. Input `-1` to keep all tasks. Default: `10`.|Yes|
+|`--script`|Script to run the parameter search on. Required unless `--task-id` is specified.|Yes|
+|`--task-id`|ID of a ClearML task whose hyperparameters will be optimized. Required unless `--script` is specified.|Yes|
+|`--task-name`|Name of the optimization task. If unspecified, the base Python script's file name is used.|Yes|
 |`--time-limit-per-job`|Maximum execution time per single job in minutes. When the time limit is exceeded, the job is aborted. Default: no time limit.|Yes|
+|`--total-max-jobs`|The total maximum jobs for the optimization process. The default value is `None` for unlimited.|Yes|
diff --git a/docs/apps/clearml_task.md b/docs/apps/clearml_task.md
index 4eb744de..e099a214 100644
--- a/docs/apps/clearml_task.md
+++ b/docs/apps/clearml_task.md
@@ -55,27 +55,27 @@ errors in identifying the correct default branch.
 |Name | Description| Optional |
 |---|----|---|
-| `--version` | Display the `clearml-task` utility version | Yes |
-| `--project`| Set the project name for the task (required, unless using `--base-task-id`). If the named project does not exist, it is created on-the-fly | No |
-| `--name` | Set a target name for the new task | No |
-| `--repo` | URL of remote repository. Example: `--repo https://github.com/allegroai/clearml.git` | Yes |
+| `--args` | Arguments to pass to the remote task, list of `=` strings. Currently only argparse arguments are supported | Yes |
+| `--base-task-id` | Use a pre-existing task in the system, instead of a local repo / script. Essentially clones an existing task and overrides arguments / requirements | Yes |
 | `--branch` | Select repository branch / tag. By default, latest commit from the master branch | Yes |
 | `--commit` | Select commit ID to use. By default, latest commit, or local commit ID when using local repository | Yes |
-| `--folder` | Execute the code from a local folder. Notice, it assumes a git repository already exists. Current state of the repo (commit ID and uncommitted changes) is logged and replicated on the remote machine | Yes |
-| `--script` | Entry point script for the remote execution. When used with `--repo`, input the script's relative path inside the repository. For example: `--script source/train.py`. When used with `--folder`, it supports a direct path to a file inside the local repository itself, for example: `--script ~/project/source/train.py` | No |
 | `--cwd` | Working directory to launch the script from. Relative to repo root or local `--folder` | Yes |
-| `--args` | Arguments to pass to the remote task, list of `=` strings. Currently only argparse arguments are supported | Yes |
-| `--queue` | Select a task's execution queue. If not provided, a task is created but not launched | Yes |
-| `--requirements` | Specify `requirements.txt` file to install when setting the session. By default, the` requirements.txt` from the repository will be used | Yes |
-| `--packages` | Manually specify a list of required packages. Example: `--packages "tqdm>=2.1" "scikit-learn"` | Yes |
 | `--docker` | Select the Docker image to use in the remote task | Yes |
-| `--docker_args` | Add Docker arguments. Pass a single string in the following format: `--docker_args ""`. For example: `--docker_args "-v some_dir_1:other_dir_1 -v some_dir_2:other_dir_2"` | Yes |
 | `--docker_bash_setup_script` | Add a bash script to be executed inside the Docker before setting up the task's environment | Yes |
-| `--output-uri` | Set the task `output_uri`, upload destination for task models and artifacts | Yes |
-| `--task-type` | Set the task type. Optional values: training, testing, inference, data_processing, application, monitor, controller, optimizer, service, qc, custom | Yes |
-| `--skip-task-init` | If set, `Task.init()` call is not added to the entry point, and is assumed to be called within the script | Yes |
-| `--base-task-id` | Use a pre-existing task in the system, instead of a local repo / script. Essentially clones an existing task and overrides arguments / requirements | Yes |
+| `--docker_args` | Add Docker arguments. Pass a single string in the following format: `--docker_args ""`. For example: `--docker_args "-v some_dir_1:other_dir_1 -v some_dir_2:other_dir_2"` | Yes |
+| `--folder` | Execute the code from a local folder. Note that it assumes a git repository already exists. Current state of the repo (commit ID and uncommitted changes) is logged and replicated on the remote machine | Yes |
 | `--import-offline-session`| Specify the path to the offline session you want to import.| Yes |
+| `--name` | Set a target name for the new task | No |
+| `--output-uri` | Set the task `output_uri`, upload destination for task models and artifacts | Yes |
+| `--packages` | Manually specify a list of required packages. Example: `--packages "tqdm>=2.1" "scikit-learn"` | Yes |
+| `--project` | Set the project name for the task (required, unless using `--base-task-id`). If the named project does not exist, it is created on-the-fly | No |
+| `--queue` | Select a task's execution queue. If not provided, a task is created but not launched | Yes |
+| `--repo` | URL of remote repository. Example: `--repo https://github.com/allegroai/clearml.git` | Yes |
+| `--requirements` | Specify `requirements.txt` file to install when setting the session. By default, the `requirements.txt` from the repository will be used | Yes |
+| `--script` | Entry point script for the remote execution. When used with `--repo`, input the script's relative path inside the repository. For example: `--script source/train.py`. When used with `--folder`, it supports a direct path to a file inside the local repository itself, for example: `--script ~/project/source/train.py` | No |
+| `--skip-task-init` | If set, `Task.init()` call is not added to the entry point, and is assumed to be called within the script | Yes |
+| `--task-type` | Set the task type. Optional values: training, testing, inference, data_processing, application, monitor, controller, optimizer, service, qc, custom | Yes |
+| `--version` | Display the `clearml-task` utility version | Yes |
diff --git a/docs/clearml_agent.md b/docs/clearml_agent.md
index fe5e0311..03bdaa9a 100644
--- a/docs/clearml_agent.md
+++ b/docs/clearml_agent.md
@@ -94,7 +94,7 @@ it can't do that when running from a virtual environment.
 Please create new clearml credentials through the settings page in your `clearml-server` web app,
 or create a free account at https://app.clear.ml/settings/webapp-configuration
-    In the settings > workspace page, press "Create new credentials", then press "Copy to clipboard".
+    In the settings > workspace page, press "Create new credentials", then press "Copy to clipboard".
 
 Paste copied configuration here:
 ```
diff --git a/docs/hyperdatasets/task.md b/docs/hyperdatasets/task.md
index 123bb5bb..a90cd2c2 100644
--- a/docs/hyperdatasets/task.md
+++ b/docs/hyperdatasets/task.md
@@ -2,7 +2,7 @@
 title: Tasks
 ---
 
-Hyper-Datasets extend the ClearML [**Task**](../fundamentals/task.md) with [Dataviews](dataviews.md)
+Hyper-Datasets extend the ClearML [**Task**](../fundamentals/task.md) with [Dataviews](dataviews.md).
 
 ## Usage
diff --git a/docs/user_management/user_groups.md b/docs/user_management/user_groups.md
index 1152ad2c..4fafcadf 100644
--- a/docs/user_management/user_groups.md
+++ b/docs/user_management/user_groups.md
@@ -11,4 +11,4 @@ configuration and access control administration by allowing administrators to assign access
 rules at the group level rather than for each user and/or [service account](../webapp/webapp_profile.md#service-accounts)
 individually. Administrators have the flexibility to create user groups, and add or remove members as needed.
 
-For more information see [User Groups](../webapp/webapp_profile.md#user-groups)
\ No newline at end of file
+For more information see [User Groups](../webapp/webapp_profile.md#user-groups).
\ No newline at end of file
diff --git a/docs/webapp/applications/apps_gradio.md b/docs/webapp/applications/apps_gradio.md
index 331d3503..b1e2e539 100644
--- a/docs/webapp/applications/apps_gradio.md
+++ b/docs/webapp/applications/apps_gradio.md
@@ -33,7 +33,7 @@ Once you start a Gradio launcher instance, you can view the following information
   to copy link
 * Gradio Git repo - Repository that holds the Gradio app script
 * Live preview of the Gradio app
-* Console Log The console log shows the launcher instance's activity, including server setup progress, server status
+* Console Log - The console log shows the launcher instance's activity, including server setup progress, server status
   changes
diff --git a/docs/webapp/applications/apps_streamlit.md b/docs/webapp/applications/apps_streamlit.md
index 9609046c..9807599e 100644
--- a/docs/webapp/applications/apps_streamlit.md
+++ b/docs/webapp/applications/apps_streamlit.md
@@ -34,7 +34,7 @@ Once you start a Streamlit launcher instance, you can view the following information
   to copy link
 * Streamlit Git repo - Repository that holds the Streamlit app script
 * Live preview of the Streamlit app
-* Console Log The console log shows the launcher instance's activity, including server setup progress, server status
+* Console Log - The console log shows the launcher instance's activity, including server setup progress, server status
   changes
 
 ## Streamlit Launcher Instance Configuration
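
---

Editorial note on the `clearml_param_search.md` table above: `--params-override` takes one JSON object per parameter in the documented `{"name": ..., "value": ...}` shape. A minimal Python sketch of generating such strings is below; the helper name and the parameter names are hypothetical, chosen only for illustration, and building the strings with `json.dumps` sidesteps the manual quote-escaping that the table's Windows JSON note warns about.

```python
import json

def params_override_args(overrides):
    """Hypothetical helper: render each override as a standalone JSON
    object string in the documented {"name": ..., "value": ...} format."""
    return [json.dumps({"name": name, "value": value})
            for name, value in sorted(overrides.items())]

# Example parameter names are made up for illustration.
args = params_override_args({"batch_size": 64, "epochs": 30})
print(args)
# → ['{"name": "batch_size", "value": 64}', '{"name": "epochs", "value": 30}']
```

Each resulting string can then be passed as one `--params-override` value on the command line.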