Small edits

pollfly 2024-07-01 10:07:19 +03:00 committed by GitHub
parent f4457456dd
commit d7a713d0be
16 changed files with 258 additions and 220 deletions


@@ -63,7 +63,9 @@ Use the following JSON format for each parameter:
}
```
The following are the parameter type options and their corresponding fields:
- `LogUniformParameterRange`
- `"min_value": float` - The minimum exponent sample to use for logarithmic uniform random sampling
- `"max_value": float` - The maximum exponent sample to use for logarithmic uniform random sampling
- `"base": Optional[float]` - The base used to raise the sampled exponent. Default: `10`
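The log-uniform fields above can be illustrated with a short sketch in plain Python (the bounds `-3`/`-1` are example choices, not taken from this page):

```python
import random

# Logarithmic uniform sampling as described above: draw a uniform
# exponent in [min_value, max_value], then raise `base` to it.
min_value, max_value, base = -3.0, -1.0, 10.0  # example bounds, default base

exponent = random.uniform(min_value, max_value)
sample = base ** exponent

# Every sample falls between base**min_value and base**max_value
low, high = base ** min_value, base ** max_value
assert low <= sample <= high
```

With base `10`, exponents drawn uniformly from `[-3, -1]` yield samples spread log-uniformly across `[0.001, 0.1]`.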


@@ -102,7 +102,7 @@ hyperparameters. Passing `alias=<dataset_alias_string>` stores the dataset's ID
`dataset_alias_string` parameter in the experiment's **CONFIGURATION > HYPERPARAMETERS > Datasets** section. This way
you can easily track which dataset the task is using.
[`Dataset.get_local_copy()`](../../references/sdk/dataset.md#get_local_copy) returns a path to the cached,
downloaded dataset. The dataset path is then passed to PyTorch's `datasets` object.
The script then trains a neural network to classify images using the dataset created above.


@@ -53,29 +53,28 @@ Modify the data folder:
1. Add a file to the data_samples folder.<br/>
Run `echo "data data data" > data_samples/new_data.txt` (this will create the file `new_data.txt` and put it in the `data_samples` folder)
Repeat the process of creating a new dataset with the previous one as its parent, and syncing the folder.
```bash
clearml-data sync --project datasets --name second_ds --parents a1ddc8b0711b4178828f6c6e6e994b7c --folder data_samples
```
Expected response:
```
clearml-data - Dataset Management & Versioning CLI
Creating a new dataset:
New dataset created id=0992dd6bae6144388e0f2ef131d9724a
Syncing dataset id 0992dd6bae6144388e0f2ef131d9724a to local folder data_samples
Generating SHA2 hash for 6 files
Hash generation completed
Sync completed: 0 files removed, 2 added / modified
Finalizing dataset
Pending uploads, starting dataset upload to https://files.community.clear.ml
Uploading compressed dataset changes (2 files, total 742 bytes) to https://files.community.clear.ml
Upload completed (742 bytes)
2021-05-04 10:05:42,353 - clearml.Task - INFO - Waiting to finish uploads
2021-05-04 10:05:43,106 - clearml.Task - INFO - Finished uploading
Dataset closed and finalized
```
See that 2 files were added or modified, just as expected!
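The log above mentions generating SHA2 hashes to detect which files changed. As a rough illustration of that idea (not ClearML's actual implementation), hashing every file under a folder might look like:

```python
import hashlib
from pathlib import Path

def hash_folder(folder: str) -> dict:
    """Map each file's path (relative to folder) to its SHA-256 hex digest."""
    return {
        str(p.relative_to(folder)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(folder).rglob("*"))
        if p.is_file()
    }
```

Comparing two such maps between dataset versions reveals which files were added, removed, or modified.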


@@ -107,12 +107,13 @@ Using ClearML Data, you can create child datasets that inherit the content of ot
```bash
clearml-data create --project datasets --name HelloDataset-improved --parents 24d05040f3e14fbfbed8edb1bf08a88c
```
:::note
You'll need to input the Dataset ID you received when you created the dataset above.
:::
1. Add a new file.
* Create a new file: `echo "data data data" > new_data.txt`
* Now add the file to the dataset:
```bash


@@ -46,4 +46,4 @@ title: Windows
docker-compose -f c:\opt\clearml\docker-compose-win10.yml up -d
```
If issues arise during your upgrade, see the FAQ page, [How do I fix Docker upgrade errors?](../faq.md#common-docker-upgrade-errors)


@@ -117,12 +117,15 @@ output to the console, when a Python experiment script is run.
For example, when a new ClearML Python Package version is available, the notification is:
```
CLEARML new package available: UPGRADE to vX.Y.Z is recommended!
```
When a new ClearML Server version is available, the notification is:
```
CLEARML-SERVER new version available: upgrade to vX.Y is recommended!
```
<br/>
@@ -183,8 +186,7 @@ For more information about `Task` class methods, see the [Task Class](fundamenta
#### Can I store the model configuration file as well? <a id="store-model-configuration"></a>
Yes! Use [`Task.connect_configuration()`](references/sdk/task.md#connect_configuration):
```python
Task.current_task().connect_configuration("a very long text with the configuration file's content")
@@ -240,6 +242,7 @@ To replace the URL of each model, execute the following commands:
```
1. Create the following script inside the Docker shell (change the URL protocol in the script as well if you aren't using `s3`):
```bash
cat <<EOT >> script.js
db.model.find({uri:{$regex:/^s3/}}).forEach(function(e,i) {
@@ -248,11 +251,13 @@ To replace the URL of each model, execute the following commands:
EOT
```
Make sure to replace `<old-bucket-name>` and `<new-bucket-name>`.
1. Run the script against the backend DB:
```bash
mongo backend script.js
```
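The substitution the script applies to each stored model URI can be sketched in Python (the URI below is hypothetical, and the bucket names are placeholders just as in the script):

```python
import re

# Hypothetical model URI as stored in the backend DB
uri = "s3://old-bucket-name/models/model_weights.pkl"

# Replace only the bucket prefix, keeping the rest of the path intact
new_uri = re.sub(r"^s3://old-bucket-name", "s3://new-bucket-name", uri)
```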
<br/>
#### Models are not accessible from the UI after I moved them (different bucket / server). How do I fix this? <a id="relocate_models"></a>
@@ -342,7 +347,9 @@ ClearML monitors your Python process. When the process exits properly, ClearML c
This issue was resolved in Trains v0.9.2. Upgrade to ClearML by executing the following command:
```
pip install -U clearml
```
<a id="ssl-connection-error"></a>
@@ -352,7 +359,7 @@ This issue was resolved in Trains v0.9.2. Upgrade to ClearML by executing the fo
Your firewall may be preventing the connection. Try one of the following solutions:
* Direct Python's `requests` to use the enterprise certificate file by setting the OS environment variables `CURL_CA_BUNDLE` or `REQUESTS_CA_BUNDLE`. For a detailed discussion of this topic, see [https://stackoverflow.com/questions/48391750/disable-python-requests-ssl-validation-for-an-imported-module](https://stackoverflow.com/questions/48391750/disable-python-requests-ssl-validation-for-an-imported-module).
* Disable certificate verification
:::warning
@@ -763,21 +770,27 @@ Yes! You can run ClearML in Jupyter Notebooks using either of the following:
**Option 1: Install ClearML on your Jupyter host machine** <a id="opt1"></a>
1. Connect to your Jupyter host machine.
1. Install the ClearML Python Package:
```
pip install clearml
```
1. Run the ClearML setup wizard:
```
clearml-init
```
1. In your Jupyter Notebook, you can now use ClearML.
**Option 2: Install ClearML in your Jupyter Notebook** <a id="opt2"></a>
1. Install the ClearML Python Package:
```
pip install clearml
```
1. Get ClearML credentials. Open the ClearML Web UI in a browser. On the **SETTINGS > WORKSPACE** page, click **Create new credentials**.
The **JUPYTER NOTEBOOK** tab shows the commands required to configure your notebook (a copy to clipboard action is available on hover).
@@ -822,7 +835,9 @@ To override the default configuration file location, set the `CLEARML_CONFIG_FILE
For example:
```
export CLEARML_CONFIG_FILE="/home/user/myclearml.conf"
```
<br/>
@@ -830,9 +845,11 @@ For example:
To override your configuration file / defaults, set the following OS environment variables:
```
export CLEARML_API_ACCESS_KEY="key_here"
export CLEARML_API_SECRET_KEY="secret_here"
export CLEARML_API_HOST="http://localhost:8008"
```
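As an illustration of how such overrides behave (the fallback values below are examples, not necessarily clearml's real defaults), a reader that prefers the environment over file-based settings can be sketched as:

```python
import os

# Set an override, mirroring the shell exports above
os.environ["CLEARML_API_HOST"] = "http://localhost:8008"

def setting(name: str, default: str) -> str:
    # Environment variables win; otherwise fall back to a default
    return os.environ.get(name, default)

api_host = setting("CLEARML_API_HOST", "https://api.example.com")  # example fallback
```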
<br/>
@@ -864,9 +881,11 @@ Set the OS environment variable `CLEARML_LOG_ENVIRONMENT` with the variables you
If you joined the ClearML Hosted Service and ran a script, but your experiment does not appear in the Web UI, you may not have configured ClearML for the hosted service. Run the ClearML setup wizard. It will request your hosted service ClearML credentials and create the ClearML configuration you need.
```
pip install clearml
clearml-init
```
## ClearML Server Deployment
@@ -913,7 +932,9 @@ see [Deploying ClearML Server: Kubernetes using Helm](deploying_clearml/clearml_
If you are using SELinux, run the following command (see this [discussion](https://stackoverflow.com/a/24334000)):
```
chcon -Rt svirt_sandbox_file_t /opt/clearml
```
## ClearML Server Configuration
@@ -958,11 +979,15 @@ For example:
To resolve the Docker error:
```
... The container name "/trains-???" is already in use by ...
```
try removing deprecated containers:
```
$ docker rm -f $(docker ps -a -q)
```
<br/>
@@ -1042,7 +1067,9 @@ Do the following:
1. Allow your proxy server to be bypassed for `localhost`
using a system environment variable, for example:
```
NO_PROXY=localhost
```
1. If a ClearML configuration file (`clearml.conf`) exists, delete it.
1. Open a terminal session.


@@ -95,13 +95,16 @@ Now, let's execute some code in the remote session!
1. In the first cell of the notebook, clone the [ClearML repository](https://github.com/allegroai/clearml):
```
!git clone https://github.com/allegroai/clearml.git
```
1. In the second cell of the notebook, run this [script](https://github.com/allegroai/clearml/blob/master/examples/frameworks/keras/keras_tensorboard.py)
from the cloned repository:
```
%run clearml/examples/frameworks/keras/keras_tensorboard.py
```
Look at the script and notice that it uses ClearML, Keras, and TensorFlow. You don't need to install these
packages in Jupyter, because you specified them in the `--packages` flag of `clearml-session`.


@@ -56,22 +56,22 @@ myDataset_2 = DatasetVersion.create_new_dataset(
To raise a `ValueError` exception if the Dataset exists, specify the `raise_if_exists` parameter as `True`.
* With `Dataset.create`:
```python
try:
    myDataset = Dataset.create(dataset_name='myDataset One', raise_if_exists=True)
except ValueError:
    print('Dataset exists.')
```
* Or with `DatasetVersion.create_new_dataset`:
```python
try:
    myDataset = DatasetVersion.create_new_dataset(dataset_name='myDataset Two', raise_if_exists=True)
except ValueError:
    print('Dataset exists.')
```
Additionally, create a Dataset with tags and a description.


@@ -324,7 +324,7 @@ myDatasetVersion.update_frames(frames)
### Deleting Frames
To delete a SingleFrame, use [`DatasetVersion.delete_frames()`](../references/hyperdataset/hyperdatasetversion.md#delete_frames):
```python
frames = []


@@ -77,7 +77,7 @@ Integrate ClearML with the following steps:
)
```
1. Attach the `ClearMLLogger` object to helper handlers to log experiment outputs. Ignite supports the following helper handlers for ClearML:
* **ClearMLSaver** - Saves input snapshots as ClearML artifacts.
* **GradsHistHandler** and **WeightsHistHandler** - Logs the model's gradients and weights respectively as histograms.


@@ -320,11 +320,15 @@ To create block code, use one of the following options:
* Surround code with "fences"--three backticks (<code>```</code>):
~~~
```
from clearml import Task
t = Task.init(project_name='My project', task_name='Base')
```
~~~
Both of these options will be rendered as:
@@ -338,11 +342,13 @@ t = Task.init(project_name='My project', task_name='Base')
To display syntax highlighting, specify the coding language after the first fence (e.g. <code>\```python</code>, <code>\```json</code>, <code>\```js</code>, etc.):
~~~
```python
from clearml import Task
t = Task.init(project_name='My project', task_name='Base')
```
~~~
The rendered output should look like this: