Mirror of https://github.com/clearml/clearml-docs, synced 2025-03-20 12:08:28 +00:00

Commit deba43312e: Merge branch 'main' of https://github.com/allegroai/clearml-docs
@@ -65,7 +65,7 @@ See the [HyperParameterOptimizer SDK reference page](../references/sdk/hpo_optim
 ### Pipeline
 
 ClearML's `automation` module includes classes that support creating pipelines:
-* [PipelineController](../pipelines/pipelines_sdk_tasks.md) - A pythonic interface for
+* [PipelineController](../pipelines/pipelines_sdk_tasks.md) - A Pythonic interface for
 defining and configuring a pipeline controller and its steps. The controller and steps can be functions in your
 python code, or ClearML [tasks](../fundamentals/task.md).
 * [PipelineDecorator](../pipelines/pipelines_sdk_function_decorators.md) - A set
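The function-based pipeline interface referenced in the hunk above is easiest to grasp from a small example. The following is a minimal sketch using `PipelineController` with function steps; the project, pipeline, and step names, values, and the toy functions are illustrative only and not taken from the docs being edited.

```python
# Minimal sketch of a function-based pipeline with PipelineController.
# All names and values here are illustrative.
from clearml import PipelineController

def step_one(x: int):
    # A plain Python function turned into a pipeline step.
    return x * 2

def step_two(doubled: int):
    # Consumes the value returned by step_one.
    print("result:", doubled + 1)

if __name__ == "__main__":
    pipe = PipelineController(name="example-pipeline", project="examples", version="1.0.0")
    pipe.add_function_step(
        name="step_one",
        function=step_one,
        function_kwargs={"x": 21},
        function_return=["doubled"],
    )
    pipe.add_function_step(
        name="step_two",
        function=step_two,
        function_kwargs={"doubled": "${step_one.doubled}"},
        parents=["step_one"],
    )
    # Run the controller and its steps on the local machine for quick testing.
    pipe.start_locally(run_pipeline_steps_locally=True)
```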
@@ -170,7 +170,7 @@ If the `secure.conf` file does not exist, create your own in ClearML Server's `/
 an alternate folder you configured), and input the modified configuration
 :::
 
-The default secret for the system's apiserver component can be overridden by setting the following environment variable:
+You can override the default secret for the system's `apiserver` component by setting the following environment variable:
 `CLEARML__SECURE__CREDENTIALS__APISERVER__USER_SECRET="my-new-secret"`
 
 :::note
docs/faq.md (18 changed lines)
@@ -734,27 +734,23 @@ To fix this, the registered URL of each debug image and/or artifact needs to be
 
 * For **artifacts**, you can do the following:
 
-1. Open bash in the mongo DB docker container:
+1. Run shell in the `apiserver` container:
 
 ```bash
-sudo docker exec -it clearml-mongo /bin/bash
+sudo docker exec -it clearml-apiserver /bin/bash
 ```
 
-1. Inside the docker shell, create the following script. Make sure to replace `<old-bucket-name>` and `<new-bucket-name>`,
-as well as the URL protocol prefixes if you aren't using `s3`.
+1. Navigate to the `apiserver` folder:
 
 ```bash
-cat <<EOT >> script.js
-db.model.find({uri:{$regex:/^s3/}}).forEach(function(e,i) {
-e.uri = e.uri.replace("s3://<old-bucket-name>/","s3://<new-bucket-name>/");
-db.model.save(e);});
-EOT
+cd /opt/clearml/apiserver
 ```
 
-1. Run the script against the backend DB:
+1. Run the `fix_mongo_urls.py` script for fixing the artifacts. Make sure to insert the old address and the new
+address that will replace it:
 
 ```bash
-mongo backend script.js
+python3 fix_mongo_urls.py --host-source http://old_address_and_port --host-target http://new_address_and_port
 ```
 
 
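For readers who want to see what the removed mongo shell script was doing, here is a hedged Python sketch of the same URL rewrite using pymongo. The connection string and bucket placeholders are assumptions for illustration; the supported route is the `fix_mongo_urls.py` script shown in the new text above.

```python
# Sketch of the URL rewrite performed by the removed mongo shell script,
# expressed with pymongo. Placeholders: connection string, bucket names.
from pymongo import MongoClient

OLD = "s3://<old-bucket-name>/"
NEW = "s3://<new-bucket-name>/"

client = MongoClient("mongodb://localhost:27017")  # assumed connection string
db = client["backend"]  # the backend DB referenced by the removed script

# Rewrite the registered URI of every model whose URI starts with "s3".
for doc in db.model.find({"uri": {"$regex": "^s3"}}):
    db.model.update_one(
        {"_id": doc["_id"]},
        {"$set": {"uri": doc["uri"].replace(OLD, NEW)}},
    )
```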
@@ -21,7 +21,7 @@ During early stages of model development, while code is still being modified hea
 - **Workstation with a GPU**, usually with a limited amount of memory for small batch-sizes. Use this workstation to train
 the model and ensure that you choose a model that makes sense, and the training procedure works. Can be used to provide initial models for testing.
 
-The abovementioned setups might be folded into each other and that's great! If you have a GPU machine for each researcher, that's awesome!
+These setups can be folded into each other and that's great! If you have a GPU machine for each researcher, that's awesome!
 The goal of this phase is to get a code, dataset, and environment set up, so you can start digging to find the best model!
 
 - [ClearML SDK](../../clearml_sdk/clearml_sdk.md) should be integrated into your code (check out [Getting Started](ds_first_steps.md)).
@@ -6,7 +6,7 @@ title: First Steps
 ## Install ClearML
 
 
-First, [sign up for free](https://app.clear.ml)
+First, [sign up for free](https://app.clear.ml).
 
 Install the `clearml` python package:
 ```bash
@@ -46,7 +46,7 @@ We can change the task’s name by clicking it here, and add a description or ge
 
 First of all, source code is captured. If you’re working in a git repository we’ll save your git information along with any uncommitted changes. If you’re running an unversioned script, `clearml` will save the script instead.
 
-Together with the python packages your coded uses, this’ll allow you to recreate your experiment on any machine.
+Together with the Python packages your code uses, this will allow you to recreate your experiment on any machine.
 
 Similarly, all of the output the code produces will also be captured.
 
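For reference, here is a minimal sketch of what the integration described in this hunk looks like in code. The project name, task name, and toy training loop are illustrative only.

```python
# Minimal sketch of integrating the ClearML SDK into a training script.
from clearml import Task

# Once Task.init runs, source code, git info, installed Python packages,
# and console output are captured and attached to the task.
task = Task.init(project_name="examples", task_name="my first experiment")

for epoch in range(3):
    loss = 1.0 / (epoch + 1)  # toy value standing in for a real training loss
    # Explicitly report a scalar in addition to what is captured automatically.
    task.get_logger().report_scalar(title="loss", series="train", value=loss, iteration=epoch)
```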
@@ -58,7 +58,7 @@ to open the app's instance launch form.
 * **Base Docker Image** (optional) - Available when `Use docker mode` is selected: Default Docker image in which the
 ClearML Agent will run. Provide an image stored in a Docker artifactory so instances can automatically fetch it
 * **Compute Resources**
-* Resource Name - Assign a name to the resource type. This name will appear in the Autoscaler dashboard
+* Resource Name - Assign a name to the resource type. This name will appear in the autoscaler dashboard
 * EC2 Instance Type - See [Instance Types](https://aws.amazon.com/ec2/instance-types) for full list of types
 * Run in CPU mode - Check box to run with CPU only
 * Use Spot Instance - Select to use a spot instance. Otherwise, a reserved instance is used.
@@ -98,7 +98,7 @@ to open the app's instance launch form.
 instance. Read more [here](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html)
 * VPC Subnet ID - The subnet ID for the created instance. If more than one ID is provided, the instance will be started in the first available subnet. For more information, see [What is Amazon VPC?](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html)
 * \+ Add Item - Define another resource type
-* **IAM Instance Profile** (optional) - Set an IAM instance profile for all instances spun by the Autoscaler
+* **IAM Instance Profile** (optional) - Set an IAM instance profile for all instances spun by the autoscaler
 * Arn - Amazon Resource Name specifying the instance profile
 * Name - Name identifying the instance profile
 * **Autoscaler Instance Name** (optional) - Name for the Autoscaler instance. This will appear in the instance list
@@ -129,7 +129,7 @@ The Configuration Vault is available under the ClearML Enterprise plan.
 
 You can utilize the [configuration vault](../settings/webapp_settings_profile.md#configuration-vault) to set the following:
 * `aws_region`
-* `aws_credentials_key_id` and `aws_secret_access_key` - AWS credentials for the Autoscaler
+* `aws_credentials_key_id` and `aws_secret_access_key` - AWS credentials for the autoscaler
 * `extra_vm_bash_script` - A bash script to execute after launching the EC2 instance. This script will be appended to
 the one set in the `Init script` field of the instance launch form
 * `extra_clearml_conf` - ClearML configuration to use by the ClearML Agent when executing your experiments. This
@@ -202,7 +202,7 @@ auto_scaler.v1.aws {
 #### Configure Instances Spawned by the Autoscaler
 To configure instances spawned by the autoscaler, do any of the following:
 * Add the configuration in the `auto_scaler.v1.aws.extra_clearml_conf` field of the configuration vault
-* Run the Autoscaler using a [ClearML Service Account](../settings/webapp_settings_users.md#service-accounts). Add the
+* Run the autoscaler using a [ClearML Service Account](../settings/webapp_settings_users.md#service-accounts). Add the
 configuration to the service account's configuration vault, and set the autoscaler to run under that account
 in the `Run with Service Account` field
 * Admins can add the configuration to a [ClearML Administrator Vault](../settings/webapp_settings_admin_vaults.md)
@@ -58,7 +58,7 @@ to open the app's instance launch form.
 * **Base Docker Image** (optional) - Available when `Use docker mode` is selected. Default Docker image in which the ClearML Agent will run. Provide an image stored in a
 Docker artifactory so VM instances can automatically fetch it
 * **Compute Resources**
-* Resource Name - Assign a name to the resource type. This name will appear in the Autoscaler dashboard
+* Resource Name - Assign a name to the resource type. This name will appear in the autoscaler dashboard
 * GCP Machine Type - See list of [machine types](https://cloud.google.com/compute/docs/machine-types)
 * Run in CPU mode - Select to have the autoscaler utilize only CPU VM instances
 * GPU Type - See list of [supported GPUs by instance](https://cloud.google.com/compute/docs/gpus)
@@ -106,7 +106,7 @@ to open the app's instance launch form.
 
 :::important Enterprise Feature
 You can utilize the [configuration vault](../settings/webapp_settings_profile.md#configuration-vault) to configure GCP
-credentials for the Autoscaler in the following format:
+credentials for the autoscaler in the following format:
 
 ```
 auto_scaler.v1 {