Small edits (#917)

pollfly 2024-09-04 12:07:22 +03:00 committed by GitHub
parent d5f94713aa
commit 19a4149f73
7 changed files with 32 additions and 30 deletions

@@ -30,7 +30,7 @@ Contributions come in many forms:
 * Reporting [issues](https://github.com/allegroai/clearml/issues) you've come upon
 * Participating in issue discussions in the [issue tracker](https://github.com/allegroai/clearml/issues) and the
-  [ClearML community slack space](https://joinslack.clear.ml)
+  [ClearML Community Slack space](https://joinslack.clear.ml)
 * Suggesting new features or enhancements
 * Implementing new features or fixing outstanding issues
@@ -86,11 +86,13 @@ Enhancement suggestions are tracked as GitHub issues. After you determine which
 Before you submit a new PR:
 * Verify that the work you plan to merge addresses an existing [issue](https://github.com/allegroai/clearml/issues) (if not, open a new one)
-* Check related discussions in the [ClearML slack community](https://joinslack.clear.ml)
+* Check related discussions in the [ClearML Slack community](https://joinslack.clear.ml)
   (or start your own discussion on the ``#clearml-dev`` channel)
 * Make sure your code conforms to the ClearML coding standards by running:
-  flake8 --max-line-length=120 --statistics --show-source --extend-ignore=E501 ./clearml*
+  ```
+  flake8 --max-line-length=120 --statistics --show-source --extend-ignore=E501 ./clearml*
+  ```
 In your PR include:

@@ -420,7 +420,7 @@ monitor. It's the ClearML monitor. It's essentially an object that you can imple
 allows you to take a look into the depths of the ClearML ecosystem, what happens there? So it can give you an idea of
 when tasks failed, when tasks succeeded, all of the types of events that ClearML can generate for you. So one of the
 things you can do with it, and this is part of the example, it's also in the example repository, is create a Slack bot
-for it. So essentially we've just used a bunch of slack APIs around this monitor, which is just a Slack monitor that we
+for it. So essentially we've just used a bunch of Slack APIs around this monitor, which is just a Slack monitor that we
 created ourselves and that will essentially just give you a message whenever a task succeeds, fails, whatever you want
 to do. So in this case, it's fully equipped. We added a lot of arguments there so that you can just use it as a
 command line tool, but you can create your own script based on your own requirements. Now what it will do is, let me
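
As a rough sketch of what such a monitor can look like, here is a minimal version built on clearml's `Monitor` base class and the `slack_sdk` client; the `SlackAlert` class, channel name, and token environment variable are illustrative assumptions, not the exact code from the example repository:

```python
# Minimal sketch of a Slack-alerting monitor; names here are illustrative.
import os

from clearml.automation.monitor import Monitor
from slack_sdk import WebClient


class SlackAlert(Monitor):
    def __init__(self, token, channel):
        super().__init__()
        self.slack = WebClient(token=token)
        self.channel = channel

    def process_task(self, task):
        # Called by the monitor loop for each task event it picks up;
        # here we just post the task's name and status to Slack.
        self.slack.chat_postMessage(
            channel=self.channel,
            text="Task '{}' is now {}".format(task.name, str(task.status)),
        )


if __name__ == "__main__":
    monitor = SlackAlert(token=os.environ["SLACK_API_TOKEN"], channel="#clearml-alerts")
    monitor.monitor(pool_period=60.0)  # poll the ClearML server every 60 seconds
```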

@@ -166,7 +166,7 @@ Filter by metadata using Lucene queries.
 Filter by sources using Lucene queries.
-* Add a source rule to filter for sources URIs with wildcards.
+* Add a source rule to filter for source URIs with wildcards.
 ![Filter by source](../../img/hyperdatasets/frame_filtering_10.png)

Binary file not shown (177 KiB → 177 KiB)

Binary file not shown (26 KiB → 25 KiB)

@@ -82,12 +82,12 @@ For example:
 ```
 sdk {
     aws {
-         s3 {
-             # default, used for any bucket not specified below
-             key: ${AWS_ACCESS_KEY_ID}
-             secret: ${AWS_SECRET_ACCESS_KEY}
-             region: ${AWS_DEFAULT_REGION}
-         }
+        s3 {
+            # default, used for any bucket not specified below
+            key: ${AWS_ACCESS_KEY_ID}
+            secret: ${AWS_SECRET_ACCESS_KEY}
+            region: ${AWS_DEFAULT_REGION}
+        }
     }
 }
 ```
@@ -99,24 +99,24 @@ cloud-based or locally deployed storage services. For non-AWS endpoints, use a c
 ```
 sdk {
     aws {
-         s3 {
-             # default, used for any bucket not specified below
-             key: ""
-             secret: ""
-             region: ""
-             credentials: [
-                 {
-                     # This will apply to all buckets in this host (unless key/value is specifically provided for a given bucket)
-                     host: "my-minio-host:9000"
-                     key: ""
-                     secret: ""
-                     multipart: false
-                     secure: false
-                     verify: true # OR "/path/to/ca/bundle.crt" OR "https://url/of/ca/bundle.crt" OR false to not verify
-                 }
-             ]
-         }
+        s3 {
+            # default, used for any bucket not specified below
+            key: ""
+            secret: ""
+            region: ""
+            credentials: [
+                {
+                    # This will apply to all buckets in this host (unless key/value is specifically provided for a given bucket)
+                    host: "my-minio-host:9000"
+                    key: ""
+                    secret: ""
+                    multipart: false
+                    secure: false
+                    verify: true # OR "/path/to/ca/bundle.crt" OR "https://url/of/ca/bundle.crt" OR false to not verify
+                }
+            ]
+        }
     }
 }
 ```
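
A quick way to exercise such a configuration is clearml's `StorageManager`. This is a minimal sketch; the bucket and file paths are illustrative, reusing the `my-minio-host:9000` endpoint from the example above:

```python
# Round-trip a file through the MinIO endpoint configured above.
# The bucket name and file paths are illustrative.
from clearml import StorageManager

# Upload a local file; returns the remote URL on success.
remote_url = StorageManager.upload_file(
    local_file="./report.csv",
    remote_url="s3://my-minio-host:9000/my-bucket/reports/report.csv",
)
print("uploaded to", remote_url)

# Fetch it back through the local download cache.
local_copy = StorageManager.get_local_copy(remote_url=remote_url)
print("cached at", local_copy)
```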
@@ -213,7 +213,7 @@ sdk {
 }
 ```
-GCP's storage access parameters can be specified by referencing the standard environment variables if already defined.
+GCP storage access parameters can be specified by referencing the standard environment variables if already defined.
 ```
 sdk {

@@ -68,7 +68,7 @@ section, like required packages and docker image)
 * The step input arguments are unchanged, including step arguments and parameters (anything logged to the task's [Configuration](../webapp/webapp_exp_track_visual.md#configuration)
   section)
-By default, pipeline steps are not cached. Enable caching when creating a pipeline step (for example, see [@PipelineDecorator.component](pipelines_sdk_function_decorators.md#pipelinedecoratorcomponent)).
+By default, pipeline steps are not cached. Enable caching when creating a pipeline step (for example, see [`@PipelineDecorator.component`](pipelines_sdk_function_decorators.md#pipelinedecoratorcomponent)).
 When a step is cached, the step code is hashed, alongside the step's parameters (as passed in runtime), into a single
 representing hash string. The pipeline first checks if a cached step exists in the system (archived Tasks will not be used
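
For instance, a minimal sketch of a cached step using the decorator interface (the step, pipeline, and project names are illustrative):

```python
# A pipeline with one cached step; re-runs with the same argument and
# unchanged step code reuse the stored result instead of re-executing.
from clearml import PipelineDecorator


@PipelineDecorator.component(return_values=["total"], cache=True)
def expensive_step(n):
    # The step code and the value of `n` are hashed to look up a cached run.
    return sum(i * i for i in range(n))


@PipelineDecorator.pipeline(name="caching demo", project="examples", version="1.0")
def run_pipeline(n=1_000_000):
    print(expensive_step(n))


if __name__ == "__main__":
    PipelineDecorator.run_locally()  # execute steps locally for quick testing
    run_pipeline()
```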