diff --git a/docs/clearml_agent/clearml_agent_ref.md b/docs/clearml_agent/clearml_agent_ref.md
index 2b3b47ea..17791e9c 100644
--- a/docs/clearml_agent/clearml_agent_ref.md
+++ b/docs/clearml_agent/clearml_agent_ref.md
@@ -32,6 +32,8 @@ clearml-agent build [-h] --id TASK_ID [--target TARGET]
 
 ### Parameters
 
+
+
 |Name | Description| Mandatory |
 |---|----|---|
 |`--id`| Build a worker environment for this Task ID.|Yes|
@@ -49,6 +51,8 @@ clearml-agent build [-h] --id TASK_ID [--target TARGET]
 |`-O`| Compile optimized pyc code (see [python documentation](https://docs.python.org/3/using/cmdline.html#cmdoption-O)). Repeat for more optimization.|No|
 |`--target`| The target folder for the virtual environment and source code that will be used at launch.|No|
 
+
+
 ## config
 
 List your ClearML Agent configuration.
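As a point of reference for the `build` parameters table touched above, a minimal invocation using only the flags documented there might look like the following sketch (the task ID and target folder are placeholders):

```bash
# Build a self-contained execution environment for an existing task,
# placing the virtual environment and source code in ./task_env.
clearml-agent build --id <TASK_ID> --target ./task_env
```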
@@ -80,6 +84,8 @@ clearml-agent daemon [-h] [--foreground] [--queue QUEUES [QUEUES ...]] [--order-
 
 ### Parameters
 
+
+
 |Name | Description| Mandatory |
 |---|----|---|
 |`--child-report-tags`| List of tags to send with the status reports from the worker that executes a task.|No|
@@ -106,6 +112,8 @@ clearml-agent daemon [-h] [--foreground] [--queue QUEUES [QUEUES ...]] [--order-
 |`--uptime`| Specify uptime for clearml-agent in `<hours> <days>` format. For example, use `17-20 TUE` to set Tuesday's uptime to 17-20. <br/><br/> NOTES: <ul><li>This feature is available under the ClearML Enterprise plan</li><li>Make sure to configure only `--uptime` or `--downtime`, but not both.</li></ul>|No|
 |`--use-owner-token`| Generate and use the task owner's token for the execution of the task.|No|
 
+
+
 ## execute
 
 Use the `execute` command to set an agent to build and execute specific tasks directly without listening to a queue.
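As an illustration of the daemon flags above, a worker that serves a single queue within a limited uptime window might be started like this (the queue name is illustrative, and `--uptime` is an Enterprise-only option that must not be combined with `--downtime`):

```bash
# Start a worker that pulls tasks from the "default" queue,
# restricted to Tuesdays between 17:00 and 20:00.
clearml-agent daemon --queue default --uptime "17-20 TUE"
```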
@@ -123,6 +131,8 @@ clearml-agent execute [-h] --id TASK_ID [--log-file LOG_FILE] [--disable-monitor
 
 ### Parameters
 
+
+
 |Name | Description| Mandatory |
 |---|----|---|
 |`--id`| The ID of the Task to build|Yes|
@@ -141,6 +151,8 @@ clearml-agent execute [-h] --id TASK_ID [--log-file LOG_FILE] [--disable-monitor
 |`--require-queue`| If the specified task is not queued, the execution will fail (used for 3rd party scheduler integration, e.g. K8s, SLURM, etc.)|No|
 |`--standalone-mode`| Do not use any network connects, assume everything is pre-installed|No|
 
+
+
 ## list
 
 List information about all active workers.
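For completeness, a sketch of running one specific task directly with `execute`, using only flags shown in the usage line above (the task ID and log path are placeholders):

```bash
# Build and run a single task without listening to a queue,
# writing the execution log to a local file.
clearml-agent execute --id <TASK_ID> --log-file /tmp/task.log
```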
diff --git a/docs/webapp/applications/apps_llama_deployment.md b/docs/webapp/applications/apps_llama_deployment.md
index e44fb2fd..1f965d1e 100644
--- a/docs/webapp/applications/apps_llama_deployment.md
+++ b/docs/webapp/applications/apps_llama_deployment.md
@@ -64,7 +64,7 @@ to open the app's configuration form.
   values from the file, which can be modified before launching the app instance
 * **Project name** - ClearML Project where your llama.cpp Model Deployment app instance will be stored
 * **Task name** - Name of [ClearML Task](../../fundamentals/task.md) for your llama.cpp Model Deployment app instance
-* **Queue** - The [ClearML Queue](../../fundamentals/agents_and_queues.md#agent-and-queue-workflow) to which the
+* **Queue** - The [ClearML Queue](../../fundamentals/agents_and_queues.md#what-is-a-queue) to which the
   llama.cpp Model Deployment app instance task will be enqueued (make sure an agent is assigned to it)
 * **Model** - A ClearML Model ID or a Hugging Face model. The model must be in GGUF format. If you are using a
   HuggingFace model, make sure to pass the path to the GGUF file. For example: `provider/repo/path/to/model.gguf`