diff --git a/docs/apps/clearml_task.md b/docs/apps/clearml_task.md index 647b7540..0311aebd 100644 --- a/docs/apps/clearml_task.md +++ b/docs/apps/clearml_task.md @@ -45,7 +45,7 @@ To specify your code's branch and commit ID, pass the `--branch -- If unspecified, `clearml-task` will use the latest commit from the 'master' branch. :::note Github Default Branch -For Github repositories, it is recommended to explicitly specify your default branch (e.g. `--branch main`) to avoid +For GitHub repositories, it is recommended to explicitly specify your default branch (e.g. `--branch main`) to avoid errors in identifying the correct default branch. ::: diff --git a/docs/clearml_agent.md b/docs/clearml_agent.md index e82a301b..a76472af 100644 --- a/docs/clearml_agent.md +++ b/docs/clearml_agent.md @@ -462,7 +462,7 @@ Do not enqueue training or inference tasks into the services queue. They will pu ### Setting Server Credentials -Self hosted [ClearML Server](deploying_clearml/clearml_server.md) comes by default with a services queue. +Self-hosted [ClearML Server](deploying_clearml/clearml_server.md) comes by default with a services queue. By default, the server is open and does not require username and password, but it can be [password-protected](deploying_clearml/clearml_server_security.md#user-access-security). In case it is password-protected, the services agent will need to be configured with server credentials (associated with a user). diff --git a/docs/clearml_sdk/clearml_sdk.md b/docs/clearml_sdk/clearml_sdk.md index 44c4af11..d67d539d 100644 --- a/docs/clearml_sdk/clearml_sdk.md +++ b/docs/clearml_sdk/clearml_sdk.md @@ -114,5 +114,5 @@ and [usage example](https://github.com/allegroai/clearml/blob/master/examples/sc The `clearml` GitHub repository includes an [examples folder](https://github.com/allegroai/clearml/tree/master/examples) with example scripts demonstrating how to use the various functionalities of the ClearML SDK. -These examples are pre-loaded in the [ClearML Hosted Service](https://app.clear.ml), and can be viewed, cloned, +These examples are preloaded in the [ClearML Hosted Service](https://app.clear.ml), and can be viewed, cloned, and edited in the ClearML Web UI's `ClearML Examples` project. The examples are each explained in the [examples section](../guides/main.md). diff --git a/docs/clearml_sdk/task_sdk.md b/docs/clearml_sdk/task_sdk.md index 768bc977..fca9fe50 100644 --- a/docs/clearml_sdk/task_sdk.md +++ b/docs/clearml_sdk/task_sdk.md @@ -126,7 +126,7 @@ auto_connect_frameworks={'tensorboard': {'report_hparams': False}} Every `Task.init` call will create a new task for the current execution. In order to mitigate the clutter that a multitude of debugging tasks might create, a task will be reused if: * The last time it was executed (on this machine) was under 72 hours ago (configurable, see - `sdk.development.task_reuse_time_window_in_hours` in the [`sdk.development` section](../configs/clearml_conf.md#sdkdevelopment) of + [`sdk.development.task_reuse_time_window_in_hours`](../configs/clearml_conf.md#task_reuse) of the ClearML configuration reference) * The previous task execution did not have any artifacts / models @@ -463,7 +463,7 @@ class method and provide the new seed value, **before initializing the task**. You can disable the deterministic behavior entirely by passing `Task.set_random_seed(None)`.
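For context, the seed control described in the `task_sdk.md` hunk above boils down to a single call made before `Task.init`; a minimal sketch, with placeholder project and task names:

```python
from clearml import Task

# Override the default random seed before the task is initialized,
# or pass None to disable the deterministic behavior entirely.
Task.set_random_seed(1234)

task = Task.init(project_name="examples", task_name="custom seed run")
```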
## Artifacts -Artifacts are the output files created by a task. ClearML uploads and logs these products so they can later be easily +Artifacts are the output files created by a task. ClearML uploads and logs these products, so they can later be easily accessed, modified, and used. ### Logging Artifacts @@ -713,7 +713,7 @@ config_file_yaml = task.connect_configuration( ![Task configuration objects](../img/fundamentals_task_config_object.png) ### User Properties -A task’s user properties do not impact task execution so you can add / modify the properties at any stage. Add user +A task’s user properties do not impact task execution, so you can add / modify the properties at any stage. Add user properties to a task with the [Task.set_user_properties](../references/sdk/task.md#set_user_properties) method. ```python diff --git a/docs/clearml_serving/clearml_serving_tutorial.md b/docs/clearml_serving/clearml_serving_tutorial.md index 72200a4d..984dd301 100644 --- a/docs/clearml_serving/clearml_serving_tutorial.md +++ b/docs/clearml_serving/clearml_serving_tutorial.md @@ -127,7 +127,7 @@ deployment process, as a single API automatically deploy (or remove) a model fro ``` 1. Deploy the Inference container (if not already deployed) 1. Publish a new model to the model repository in one of the following ways: - - Go to the "serving examples" project in the ClearML web UI, click on the Models Tab, search for "train sklearn model" right click and select "Publish" + - Go to the "serving examples" project in the ClearML web UI, click on the Models Tab, search for "train sklearn model", right-click, and select "Publish" - Use the RestAPI (see [details](https://clear.ml/docs/latest/docs/references/api/models#post-modelspublish_many)) - Use Python interface: diff --git a/docs/community.md b/docs/community.md index 8bc7ac80..fffcd5c1 100644 --- a/docs/community.md +++ b/docs/community.md @@ -16,7 +16,7 @@ You can always find us at [clearml@allegro.ai](mailto:clearml@allegro.ai?subject Read the [ClearML Blog](https://clear.ml/blog/). -Subscribe to the **ClearML** [Youtube Channel](https://www.youtube.com/c/ClearML) and view the tutorials, presentations, and discussions. +Subscribe to the **ClearML** [YouTube Channel](https://www.youtube.com/c/ClearML) and view the tutorials, presentations, and discussions. Join us on Twitter [@clearmlapp](https://twitter.com/clearmlapp) for **ClearML** announcements and community discussions. diff --git a/docs/configs/clearml_conf.md b/docs/configs/clearml_conf.md index 39c94886..6efeb5fc 100644 --- a/docs/configs/clearml_conf.md +++ b/docs/configs/clearml_conf.md @@ -973,6 +973,8 @@ and limitations on bucket naming. --- + + **`sdk.development.task_reuse_time_window_in_hours`** (*float*) * For development mode, the number of hours after which an experiment with the same project name and experiment name is reused. @@ -1061,7 +1063,7 @@ and limitations on bucket naming. **`sdk.google.storage.credentials`** (*[dict]*) -* A list of dictionaries, with specific credentials per bucket and sub-directory +* A list of dictionaries, with specific credentials per bucket and subdirectory ---
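As a usage note for the `sdk.development.task_reuse_time_window_in_hours` entry documented above, task reuse can also be bypassed for a single run from code; a minimal sketch (project and task names are placeholders):

```python
from clearml import Task

# Force a brand-new task on every execution, regardless of the
# reuse time window configured in clearml.conf.
task = Task.init(
    project_name="examples",
    task_name="debug run",
    reuse_last_task_id=False,
)
```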
diff --git a/docs/deploying_clearml/clearml_config_for_clearml_server.md b/docs/deploying_clearml/clearml_config_for_clearml_server.md index 5bcb2e2c..4badbd17 100644 --- a/docs/deploying_clearml/clearml_config_for_clearml_server.md +++ b/docs/deploying_clearml/clearml_config_for_clearml_server.md @@ -32,7 +32,7 @@ by setting [configuration options](../configs/clearml_conf.md). CLEARML_CONFIG_FILE = MyOtherClearML.conf - For more information about running experiments inside Docker containers, see [ClearML Agent Execution](../clearml_agent.md#execution) + For more information about running experiments inside Docker containers, see [ClearML Agent Deployment](../clearml_agent.md#deployment) and [ClearML Agent Reference](../clearml_agent/clearml_agent_ref.md). diff --git a/docs/deploying_clearml/clearml_server_win.md b/docs/deploying_clearml/clearml_server_win.md index 67d75b40..64942c24 100644 --- a/docs/deploying_clearml/clearml_server_win.md +++ b/docs/deploying_clearml/clearml_server_win.md @@ -32,7 +32,7 @@ Deploying the server requires a minimum of 4 GB of memory, 8 GB is recommended. 1. Increase the memory allocation in Docker Desktop to `4GB`. - 1. In the Windows notification area (system tray), right click the Docker icon. + 1. In the Windows notification area (system tray), right-click the Docker icon. 1. Click **Settings** **>** **Advanced**, and then set the memory to at least `4096`. diff --git a/docs/faq.md b/docs/faq.md index ac6aa69c..5e9a7871 100644 --- a/docs/faq.md +++ b/docs/faq.md @@ -321,7 +321,7 @@ task = Task.init(project_name, task_name, Task.TaskTypes.testing) **Sometimes I see experiments as running when in fact they are not. What's going on?** -ClearML monitors your Python process. When the process exits properly, ClearML closes the experiment. When the process crashes and terminates abnormally, it sometimes misses the stop signal. In this case, you can safely right click the experiment in the Web-App and abort it. +ClearML monitors your Python process. When the process exits properly, ClearML closes the experiment. When the process crashes and terminates abnormally, it sometimes misses the stop signal. In this case, you can safely right-click the experiment in the Web-App and abort it.
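Complementing the Web-App flow in the FAQ answer above, an experiment stuck in the running state can also be closed from code; a minimal sketch, assuming the task ID is known (the ID below is a placeholder):

```python
from clearml import Task

# Fetch the task that missed its stop signal...
task = Task.get_task(task_id="<stuck-task-id>")
# ...and mark it as stopped on the server
task.mark_stopped()
```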
diff --git a/docs/getting_started/architecture.md b/docs/getting_started/architecture.md index 685ebc9c..52647cd7 100644 --- a/docs/getting_started/architecture.md +++ b/docs/getting_started/architecture.md @@ -12,8 +12,8 @@ Solutions combined with the clearml-server control plane. ![clearml architecture](../img/clearml_architecture.png) -## Youtube Playlist +## YouTube Playlist -The first video in our Youtube Getting Started playlist covers these modules in more detail, feel free to check out the video below. +The first video in our YouTube Getting Started playlist covers these modules in more detail, feel free to check out the video below. [![Watch the video](https://img.youtube.com/vi/s3k9ntmQmD4/hqdefault.jpg)](https://www.youtube.com/watch?v=s3k9ntmQmD4&list=PLMdIlCuMqSTnoC45ME5_JnsJX0zWqDdlO&index=1) \ No newline at end of file diff --git a/docs/getting_started/video_tutorials/agent_remote_execution_and_automation.md b/docs/getting_started/video_tutorials/agent_remote_execution_and_automation.md index 8440e38b..7e4661f5 100644 --- a/docs/getting_started/video_tutorials/agent_remote_execution_and_automation.md +++ b/docs/getting_started/video_tutorials/agent_remote_execution_and_automation.md @@ -47,17 +47,17 @@ However, there’s also docker mode. In this case the agent will run every incom Now that our configuration is ready, we can start our agent in docker mode by running the command `clearml-agent daemon --docker` -After running the command, we can see it pop up in our workers table. Now the agent will start listening for tasks in the `default` queue and it’s ready to go! +After running the command, we can see it pop up in our workers table. Now the agent will start listening for tasks in the `default` queue, and it’s ready to go! -Let's give our workers something to do. Say you have a task that you already ran on your local machine and you tracked it using the 2 magic lines that we saw before. Just like in the last video, we can right click it and clone it, so it’s now in draft mode. We can easily change some of the hyperparameters on-the-fly and *enqueue* the task. +Let's give our workers something to do. Say you have a task that you already ran on your local machine, and you tracked it using the 2 magic lines that we saw before. Just like in the last video, we can right-click it and clone it, so it’s now in draft mode. We can easily change some of the hyperparameters on-the-fly and *enqueue* the task. The agent will immediately detect that we enqueued a task and start working on it. Like we saw before, it will spin up a docker container, install the required packages and dependencies and run the code. The task itself is reported to the experiment manager just like any other task, and you can browse its outputs like normal, albeit with the changed parameters we edited earlier during draft mode. -On the left we can see a button labeled “Workers and Queues”. Under the workers tab we can see that our worker is indeed busy with our task and we can see its resource utilization as well. If we click on the current experiment, we end up in our experiment view again. Now, imagine we see in the scalar output that our model isn’t training the way we want it to, we can abort the task here and the agent will start working on the next task in the queue. +On the left we can see a button labeled “Workers and Queues”. Under the workers tab we can see that our worker is indeed busy with our task, and we can see its resource utilization as well. If we click on the current experiment, we end up in our experiment view again. Now, imagine we see in the scalar output that our model isn’t training the way we want it to, we can abort the task here and the agent will start working on the next task in the queue.
-Back to our workers overview. Over in the Queues tab, we get some extra information about which experiments are currently in the queue and we can even change their order by dragging them in the correct position like so. Finally, we have graphs of the overall waiting time and overall amount of enqueued tasks over time. +Back to our workers overview. Over in the Queues tab, we get some extra information about which experiments are currently in the queue, and we can even change their order by dragging them in the correct position like so. Finally, we have graphs of the overall waiting time and overall amount of enqueued tasks over time. Talking of which, let’s say your wait times are very long because all data scientists have collectively decided that now is a perfect time to train their models and your on-premise servers are at capacity. We have built-in autoscalers for AWS and GCP (in the works) which will automatically spin up new `clearml-agent` VMs when the queue wait time becomes too long. If you go for the premium tiers of ClearML, you’ll even get a really nice dashboard to go along with it.
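The clone-edit-enqueue flow demonstrated in the transcript above can also be scripted; a minimal sketch (the task ID, parameter key, and queue name are placeholders):

```python
from clearml import Task

# Clone an existing task, tweak a hyperparameter while the clone
# is in draft mode, then hand it to an agent via a queue.
template = Task.get_task(task_id="<existing-task-id>")
cloned = Task.clone(source_task=template, name="cloned experiment")
cloned.set_parameter("General/batch_size", 64)
Task.enqueue(cloned, queue_name="default")
```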
diff --git a/docs/getting_started/video_tutorials/core_component_overview.md b/docs/getting_started/video_tutorials/core_component_overview.md index 73ead144..c2f2a210 100644 --- a/docs/getting_started/video_tutorials/core_component_overview.md +++ b/docs/getting_started/video_tutorials/core_component_overview.md @@ -30,7 +30,7 @@ After running `pip install clearml` we can add 2 simple lines of python code to The pip package also includes `clearml-data`. It can help you keep track of your ever-changing datasets and provides an easy way to store, track and version control your data. It’s also an easy way to share your dataset with colleagues over multiple machines while keeping track of who has which version. ClearML Data can even keep track of your data’s ancestry, making sure you can always figure out where specific parts of your data came from. -Both the 2 magic lines and the data tool will send all of their information to a ClearML server. This server then keeps an overview of your experiment runs and data sets over time, so you can always go back to a previous experiment, see how it was created and even recreate it exactly. Keep track of your best models by creating leaderboards based on your own metrics and you can even directly compare multiple experiment runs, helping you to figure out the best way forward for your models. +Both the 2 magic lines and the data tool will send all of their information to a ClearML server. This server then keeps an overview of your experiment runs and data sets over time, so you can always go back to a previous experiment, see how it was created and even recreate it exactly. Keep track of your best models by creating leaderboards based on your own metrics, and you can even directly compare multiple experiment runs, helping you to figure out the best way forward for your models. To get started with a server right away, you can make use of the free tier. And when your needs grow, we’ve got you covered too! Just check out our website to find a tier that fits your organisation best. But, because we’re open source, you can also host your own completely for free. We have AWS images, Google Cloud images, you can run it on docker-compose locally or even, if you really hate yourself, run it on a self-hosted kubernetes cluster using our helm charts. @@ -40,7 +40,7 @@ The `clearml-agent` is a daemon that you can run on 1 or multiple machines and t Now that we have this remote execution capability, the possibilities are near endless. -For example, It’s easy to set up an agent on a either a CPU or a GPU machine, so you can easily run all of your experiments on any compute resource you have available. And if you spin up your agents in the cloud, they’ll even support auto scaling out of the box. +For example, it’s easy to set up an agent on either a CPU or a GPU machine, so you can easily run all of your experiments on any compute resource you have available. And if you spin up your agents in the cloud, they’ll even support auto scaling out of the box. You can set up multiple machines as agents to support large teams with their complex projects and easily configure a queuing system to get the most out of your available hardware. @@ -48,7 +48,7 @@ Talking about using multiple machines, say you have an experiment and want to op You can even use a Google Colab instance as a ClearML Agent to get free GPU power, just sayin! -As a final example of how you could use the agent's functionality, ClearML provides a `PipelineController`, which allows you to chain together tasks by plugging the output of one task as the input of another. Each of the tasks are of course run on your army of agents for full automation. +As a final example of how you could use the agent's functionality, ClearML provides a `PipelineController`, which allows you to chain together tasks by plugging the output of one task as the input of another. Each of the tasks is of course run on your army of agents for full automation. As you can see ClearML is a large toolbox, stuffed with the most useful components for both data scientists and MLOps engineers. We’re diving deeper into each component in the following videos if you need more details, but feel free to get started now at clear.ml. diff --git a/docs/getting_started/video_tutorials/experiment_manager_hands-on.md b/docs/getting_started/video_tutorials/experiment_manager_hands-on.md index 291a66c4..ae42f849 100644 --- a/docs/getting_started/video_tutorials/experiment_manager_hands-on.md +++ b/docs/getting_started/video_tutorials/experiment_manager_hands-on.md @@ -60,7 +60,7 @@ Next to automatic logging, it is super easy to manually add anything you want to Just take a look at our documentation for more info. -If you want to show colleagues or friends how well your models are performing, you can easily share a task by right clicking it and choosing share to make it accessible with a link. Anyone visiting that link will get the detail view in fullscreen mode and the task itself will get a tag showing that it’s now shared. +If you want to show colleagues or friends how well your models are performing, you can easily share a task by right-clicking it and choosing share to make it accessible with a link. Anyone visiting that link will get the detail view in fullscreen mode and the task itself will get a tag showing that it’s now shared. In many cases, we also want to compare multiple versions of our experiments directly, this is easily done by selecting the tasks you’re interested in and clicking on compare in the bottom ribbon.
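The manual logging that the hunk context above alludes to ("it is super easy to manually add anything you want") looks roughly like this; a minimal sketch with placeholder names and values:

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="manual logging")
logger = task.get_logger()

# Report a scalar series on top of whatever is captured automatically
for iteration in range(10):
    logger.report_scalar(title="accuracy", series="validation",
                         value=0.9, iteration=iteration)
```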
diff --git a/docs/getting_started/video_tutorials/hands-on_mlops_tutorials/how_clearml_is_used_by_a_data_scientist.md b/docs/getting_started/video_tutorials/hands-on_mlops_tutorials/how_clearml_is_used_by_a_data_scientist.md index 05258274..7375e68f 100644 --- a/docs/getting_started/video_tutorials/hands-on_mlops_tutorials/how_clearml_is_used_by_a_data_scientist.md +++ b/docs/getting_started/video_tutorials/hands-on_mlops_tutorials/how_clearml_is_used_by_a_data_scientist.md @@ -48,7 +48,7 @@ I've collapsed a lot of the functions here so that it's a lot easier to take a l I'm going through these files is the `Task.init` command and essentially this is what ClearML uses to keep track of every time you run this specific script. So you'll see it in `get_data.py`, you'll see it in `preprocessing.py`, and you'll see it in `training.py` as well. And so this line is all you need to get started. It will already start capturing -everything that you'll need and that the program produces like plots or hyper parameters, you name it. +everything that you'll need and that the program produces like plots or hyperparameters, you name it. So let's take a look in depth first at what `get_data.py` does for me. So getting data is very simple, but what I used to do is I would get the data from like a remote location, You download a zip file or whatever, and then you extract it @@ -69,8 +69,8 @@ don't change the name, you overwrite it. so that's all the thing of the past. No it to you later in the UI, we have a nice and clear overview of all of the different versions. I'll add some dataset statistics that's also something you can do and ClearML is just add some, for example, class -distribution or other kind of plots that could be interesting and then I'm actually building the ClearML dataset here. -Also, an an extra thing that is really really useful if you use ClearML datasets is you can actually share it as well. +distribution or other kind of plots that could be interesting, and then I'm actually building the ClearML dataset here. +Also, an extra thing that is really, really useful if you use ClearML datasets is you can actually share it as well. So not only with colleagues and friends, for example. You can share the data with them, and they can add to the data, and always you will always have the latest version, you will always know what happened before that. @@ -199,7 +199,7 @@ learning something, it's doing something so that actually is very interesting. And then you have debug samples as well, which you can use to show actually whatever kind of media you need. So these are for example, the images that I generated that are the mel spectrograms so that the preprocessing outputs uh, and you can just show them here with the name of what the label was and what to predict it was. So I can just have a very quick -overview of how this is working and then I can actually even do it with audio samples as well. So I can for example here +overview of how this is working, and then I can actually even do it with audio samples as well. So I can for example here say this is labeled "dog", and it is predicted as "children playing". So then I can listen to it and get an idea on, is this correct? Is it not correct? In this case, obviously it's not correct, but then I can go further into the iterations and then hopefully it will get better and better over time. But this is a quick way that I can just validate that what @@ -253,7 +253,7 @@ also use these differences to then go back to the original code.
Of course, hyperparameters. There weren't any differences. We didn't actually change any of the hyperparameters here, but if we did, that would also be highlighted in red in this section. So if we're going to look at the scalars, this is -where it gets really interesting because now the plots are overlaid on top of each other and you can change the color +where it gets really interesting because now the plots are overlaid on top of each other, and you can change the color if you don't like the color. I think green is a bit ugly. So let's take red for example. We can just change that here. And then we have a quick overview of two different compared experiments and then how their scalars did over time. And because they have the same X-axis the iterations, we can actually compare them immediately to each other, @@ -311,7 +311,7 @@ us the full range of experiments that we trained this way on the full dataset, a it got the most or the highest F1 score on the subset, we don't actually have the highest score on the full dataset yet. However, even though it is not the best model, it might be interesting to get a colleague or a friend to take a look at it and see what we could do better or just show off the new model that you made. So the last thing I want to show you is -that you can now easily click it, right click, and then go to share, and you can share it publicly. If you create a +that you can now easily click it, right-click, and then go to share, and you can share it publicly. If you create a link, you can send this link to your friend, colleague, whatever, and they will be able to see the complete details of the whole experiment, of everything you did, you can see the graphs, they can see the hyperparameters, and I can help you find the best ways forward for your own models.
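The dataset-building step this transcript walks through maps onto a handful of SDK calls; a minimal sketch (the project, dataset name, and path are placeholders):

```python
from clearml import Dataset

# Create a new dataset version, register local files, and publish it
dataset = Dataset.create(dataset_project="examples", dataset_name="urbansounds")
dataset.add_files(path="./data")
dataset.upload()    # push the files to storage
dataset.finalize()  # freeze this version so colleagues can build on it
```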
diff --git a/docs/getting_started/video_tutorials/hyperparameter_optimization.md b/docs/getting_started/video_tutorials/hyperparameter_optimization.md index 135f7f07..5c8dc2f8 100644 --- a/docs/getting_started/video_tutorials/hyperparameter_optimization.md +++ b/docs/getting_started/video_tutorials/hyperparameter_optimization.md @@ -22,9 +22,9 @@ keywords: [mlops, components, hyperparameter optimization, hyperparameter] Hello and welcome to ClearML. In this video we’ll take a look at one cool way of using the agent other than rerunning a task remotely: hyperparameter optimization. -By now, we know that ClearML can easily capture our hyperparameters and scalars as part of the experiment tracking. We also know we can clone any task and change its hyperparameters so they’ll be injected into the original code at runtime. In the last video, we learnt how to make a remote machine execute this task automatically by using the agent. +By now, we know that ClearML can easily capture our hyperparameters and scalars as part of the experiment tracking. We also know we can clone any task and change its hyperparameters, so they’ll be injected into the original code at runtime. In the last video, we learnt how to make a remote machine execute this task automatically by using the agent. -Soooo… Can we just clone a task like a 100 times, inject different hyperparameters in every clone, run the clones on 10 agents and then sort the results based on a specific scalar? +Soooo… Can we just clone a task like 100 times, inject different hyperparameters in every clone, run the clones on 10 agents and then sort the results based on a specific scalar? Yeah, yeah we can, it's called hyperparameter optimization. And we can do all of this automatically too! No way you were going to clone and edit those 100 tasks yourself, right?
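The clone-inject-sort loop described above is exactly what `HyperParameterOptimizer` automates; a minimal sketch (the base task ID, parameter range, metric names, and queue are placeholders):

```python
from clearml import Task
from clearml.automation import (HyperParameterOptimizer, RandomSearch,
                                UniformIntegerParameterRange)

task = Task.init(project_name="examples", task_name="HPO controller",
                 task_type=Task.TaskTypes.optimizer)

optimizer = HyperParameterOptimizer(
    base_task_id="<template-task-id>",  # the task to clone
    hyper_parameters=[
        UniformIntegerParameterRange("General/batch_size",
                                     min_value=16, max_value=128, step_size=16),
    ],
    objective_metric_title="accuracy",  # the scalar used to rank the clones
    objective_metric_series="validation",
    objective_metric_sign="max",
    optimizer_class=RandomSearch,
    execution_queue="default",          # clones are enqueued for the agents
    max_number_of_concurrent_tasks=10,
)
optimizer.start()
optimizer.wait()
optimizer.stop()
```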
diff --git a/docs/getting_started/video_tutorials/pipelines_from_tasks.md b/docs/getting_started/video_tutorials/pipelines_from_tasks.md index 4464ef30..17dbacdf 100644 --- a/docs/getting_started/video_tutorials/pipelines_from_tasks.md +++ b/docs/getting_started/video_tutorials/pipelines_from_tasks.md @@ -47,7 +47,7 @@ The structure of your pipeline will be derived from looking at this `parents` ar Now we do the same for the final step. However, remember the empty hyperparameters we saw before? We still have to overwrite these. We can use the `parameter_override` argument to do just that. -For example we can tell the first step to use the global pipeline parameter raw data url like so. But we can also reference output artifacts from a previous step by using its name and we can of course also just overwrite a parameter with a normal value. Finally, we can even pass along the unique task ID of a previous step, so you can get the task object based on that ID and access anything and everything within that task. +For example, we can tell the first step to use the global pipeline parameter raw data url like so. But we can also reference output artifacts from a previous step by using its name, and we can of course also just overwrite a parameter with a normal value. Finally, we can even pass along the unique task ID of a previous step, so you can get the task object based on that ID and access anything and everything within that task. And that’s it! We now have our first pipeline!
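For reference, the `parameter_override` patterns the transcript describes look roughly like this; a minimal sketch (projects, task names, parameter keys, and the artifact name are placeholders):

```python
from clearml import PipelineController

pipe = PipelineController(name="my pipeline", project="examples", version="1.0")
pipe.add_parameter(name="raw_data_url", default="https://example.com/data.csv")

pipe.add_step(
    name="get_data",
    base_task_project="examples",
    base_task_name="get data",
    # inject the global pipeline parameter
    parameter_override={"General/url": "${pipeline.raw_data_url}"},
)
pipe.add_step(
    name="train",
    parents=["get_data"],
    base_task_project="examples",
    base_task_name="train model",
    parameter_override={
        # reference a previous step's output artifact by name
        "General/dataset_path": "${get_data.artifacts.dataset.url}",
        # overwrite a parameter with a plain value
        "General/num_epochs": 10,
        # pass along the unique task ID of a previous step
        "General/producer_task_id": "${get_data.id}",
    },
)
pipe.start()
```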
diff --git a/docs/getting_started/video_tutorials/the_clearml_autoscaler.md b/docs/getting_started/video_tutorials/the_clearml_autoscaler.md index c54cc446..2d36cd85 100644 --- a/docs/getting_started/video_tutorials/the_clearml_autoscaler.md +++ b/docs/getting_started/video_tutorials/the_clearml_autoscaler.md @@ -57,7 +57,7 @@ After filling in all these settings, let’s launch the autoscaler now, so we ca We immediately start in the autoscaler dashboard, and we can see the amount of machines that are running, the amount that are doing nothing, how many machines we have available per queue and all the autoscaler logs. Right now we have no machines running at all because our queues are empty. -So if we go to one of our projects, clone these tasks here, and then enqueue them in the CPU queue and clone this task here as well. We can edit the parameters like we saw before and even change which container it should be run in. We then enqueue it in the GPU queue and we should now see the autoscaler kicking into action. +So if we go to one of our projects, clone these tasks here, and then enqueue them in the CPU queue and clone this task here as well. We can edit the parameters like we saw before and even change which container it should be run in. We then enqueue it in the GPU queue, and we should now see the autoscaler kicking into action. The autoscaler has detected the tasks in the queue and has started booting up remote machines to process them. We can follow along with the process in our autoscaler dashboard. diff --git a/docs/guides/frameworks/pytorch/notebooks/audio/audio_classification_UrbanSound8K.md b/docs/guides/frameworks/pytorch/notebooks/audio/audio_classification_UrbanSound8K.md index eef101e3..e42580a3 100644 --- a/docs/guides/frameworks/pytorch/notebooks/audio/audio_classification_UrbanSound8K.md +++ b/docs/guides/frameworks/pytorch/notebooks/audio/audio_classification_UrbanSound8K.md @@ -31,7 +31,7 @@ By doubling clicking a thumbnail, you can view a spectrogram plot in the image v ## Hyperparameters ClearML automatically logs TensorFlow Definitions. A parameter dictionary is logged by connecting it to the Task using -a call to the [Task.connect](../../../../../references/sdk/task.md#connect) method. +a call to the [`Task.connect`](../../../../../references/sdk/task.md#connect) method. configuration_dict = {'number_of_epochs': 10, 'batch_size': 4, 'dropout': 0.25, 'base_lr': 0.001} configuration_dict = task.connect(configuration_dict) # enabling configuration override by clearml diff --git a/docs/guides/frameworks/pytorch/notebooks/audio/audio_preprocessing_example.md b/docs/guides/frameworks/pytorch/notebooks/audio/audio_preprocessing_example.md index c4b034e2..fbd046ff 100644 --- a/docs/guides/frameworks/pytorch/notebooks/audio/audio_preprocessing_example.md +++ b/docs/guides/frameworks/pytorch/notebooks/audio/audio_preprocessing_example.md @@ -17,7 +17,7 @@ ClearML automatically logs the audio samples which the example reports by callin ### Audio Samples -You can play the audio samples by double clicking the audio thumbnail. +You can play the audio samples by double-clicking the audio thumbnail. ![image](../../../../../img/examples_audio_preprocessing_example_03.png) diff --git a/docs/guides/optimization/hyper-parameter-optimization/examples_hyperparam_opt.md b/docs/guides/optimization/hyper-parameter-optimization/examples_hyperparam_opt.md index 5df31757..7e269201 100644 --- a/docs/guides/optimization/hyper-parameter-optimization/examples_hyperparam_opt.md +++ b/docs/guides/optimization/hyper-parameter-optimization/examples_hyperparam_opt.md @@ -98,7 +98,7 @@ Since the arguments dictionary is connected to the Task, after the code runs onc to optimize a different experiment. ```python -# experiment template to optimize in the hyper-parameter optimization +# experiment template to optimize in the hyperparameter optimization args = { 'template_task_id': None, 'run_as_service': False, diff --git a/docs/guides/reporting/artifacts.md b/docs/guides/reporting/artifacts.md index ca6c055e..943214c8 100644 --- a/docs/guides/reporting/artifacts.md +++ b/docs/guides/reporting/artifacts.md @@ -18,7 +18,7 @@ Configure ClearML for uploading artifacts to any of the supported types of stora S3 buckets, Google Cloud Storage, and Azure Storage ([debug sample storage](../../references/sdk/logger.md#set_default_upload_destination) is different). Configure ClearML in any of the following ways: -* In the configuration file, set [default_output_uri](../../configs/clearml_conf.md#sdkdevelopment). +* In the configuration file, set [default_output_uri](../../configs/clearml_conf.md#config_default_output_uri). * In code, when [initializing a Task](../../references/sdk/task.md#taskinit), use the `output_uri` parameter. * In the **ClearML Web UI**, when [modifying an experiment](../../webapp/webapp_exp_tuning.md#output-destination). @@ -96,7 +96,7 @@ task.upload_artifact( ### Dictionaries ```python -# add and upload dictionary stored as JSON) +# add and upload dictionary stored as JSON task.upload_artifact('dictionary', df.to_dict()) ```
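To round out the upload snippets above, retrieving an artifact from another script is symmetric; a minimal sketch (the task ID is a placeholder, the artifact name matches the dictionary example):

```python
from clearml import Task

task = Task.get_task(task_id="<source-task-id>")
# Deserialize the artifact back into a Python object...
dictionary = task.artifacts["dictionary"].get()
# ...or just download the underlying file
local_path = task.artifacts["dictionary"].get_local_copy()
```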
diff --git a/docs/guides/ui/tuning_exp.md b/docs/guides/ui/tuning_exp.md index c921e0ac..44a738b6 100644 --- a/docs/guides/ui/tuning_exp.md +++ b/docs/guides/ui/tuning_exp.md @@ -24,7 +24,7 @@ Clone the experiment to create an editable copy for tuning. 1. In the **ClearML Web-App (UI)**, on the Projects page, click the `examples` project card. -1. In the experiments table, right click the experiment `pytorch mnist train`. +1. In the experiments table, right-click the experiment `pytorch mnist train`. 1. In the context menu, click **Clone** **>** **CLONE**. The newly cloned experiment appears and its info panel slides open. @@ -82,7 +82,7 @@ Run the worker daemon on the local development machine. Enqueue the tuned experiment. -1. In the **ClearML Web-App (UI)**, experiments table, right click the experiment `Clone Of pytorch mnist train`. +1. In the **ClearML Web-App (UI)**, experiments table, right-click the experiment `Clone Of pytorch mnist train`. 1. In the context menu, click **Enqueue**. diff --git a/docs/hyperdatasets/webapp/webapp_datasets_versioning.md b/docs/hyperdatasets/webapp/webapp_datasets_versioning.md index 1a236c95..1aaaafbc 100644 --- a/docs/hyperdatasets/webapp/webapp_datasets_versioning.md +++ b/docs/hyperdatasets/webapp/webapp_datasets_versioning.md @@ -28,7 +28,7 @@ In tree view, parent versions that do not match the query where a child version ### Version Actions -Access dataset version actions, by right clicking a version, or through the menu button Dot menu (available on hover). +Access dataset version actions by right-clicking a version, or through the menu button Dot menu (available on hover). * **Rename** - Change the version’s name * **Create New Version** - Creates a child version of a *Published* dataset version. The new version is created in a *draft* diff --git a/docs/hyperdatasets/webapp/webapp_dataviews.md b/docs/hyperdatasets/webapp/webapp_dataviews.md index 0026eb1c..e38f49fb 100644 --- a/docs/hyperdatasets/webapp/webapp_dataviews.md +++ b/docs/hyperdatasets/webapp/webapp_dataviews.md @@ -12,7 +12,7 @@ View the Dataviews table in table view Details view, using the buttons on the top left of the page. Use the table view for a comparative view of your Dataviews according to columns of interest. Use the details view to access a selected Dataview’s details, while keeping the Dataview list in view. -Details view can also be accessed by double clicking a specific Dataview in the table view to open its details view. +Details view can also be accessed by double-clicking a specific Dataview in the table view to open its details view. ![Dataviews table](../../img/hyperdatasets/webapp_dataviews_table.png) @@ -38,7 +38,7 @@ Save customized settings in a browser bookmark, and share the URL with teammates Customize the table using any of the following: * Dynamic column order - Drag a column title to a different position. -* Resize columns - Drag the column separator to change the width of that column. Double click the column separator for automatic fit. +* Resize columns - Drag the column separator to change the width of that column. Double-click the column separator for automatic fit. * Filter by user and / or status - When a filter is applied to a column, its filter icon will appear with a highlighted dot on its top right (Filter on ). To clear all active filters, click Clear filters @@ -46,7 +46,7 @@ Customize the table using any of the following: * Sort columns - By experiment name and / or elapsed time since creation. :::note -The following Dataviews-table customizations are saved on a **per project** basis: +The following Dataviews-table customizations are saved on a **per-project** basis: * Column order * Column width * Active sort order @@ -62,7 +62,7 @@ all the Dataviews in the project. The customizations of these two views are save The following table describes the actions that can be performed from the Dataviews table. Access these actions with the context menu in any of the following ways: -* In the Dataviews table, right click a Dataview, or hover over a Dataview and click Dot menu +* In the Dataviews table, right-click a Dataview, or hover over a Dataview and click Dot menu * In a Dataview info panel, click the menu button Bar menu | ClearML Action | Description | diff --git a/docs/release_notes/ver_0_12.md b/docs/release_notes/ver_0_12.md index 359cef4d..e64e0ab1 100644 --- a/docs/release_notes/ver_0_12.md +++ b/docs/release_notes/ver_0_12.md @@ -56,7 +56,7 @@ Do not reuse an experiment with artifacts. Minor bug fixes and improvements * Add resource monitoring. -* Fix Web UI compare plots ([Github Issue #55](https://github.com/allegroai/clearml/issues/55)). +* Fix Web UI compare plots ([GitHub Issue #55](https://github.com/allegroai/clearml/issues/55)). * Improve server upgrade checks/messages.
### Trains Agent diff --git a/docs/release_notes/ver_1_1.md b/docs/release_notes/ver_1_1.md index 25d26945..81b7e5bc 100644 --- a/docs/release_notes/ver_1_1.md +++ b/docs/release_notes/ver_1_1.md @@ -31,7 +31,7 @@ This release is not backwards compatible - see notes below on upgrading **Features** -- Add `Task.force_store_standalone_script()` to force storing standalone script instead of a Git repository reference [ClearML Github issue #340](https://github.com/allegroai/clearml/issues/340) +- Add `Task.force_store_standalone_script()` to force storing standalone script instead of a Git repository reference [ClearML GitHub issue #340](https://github.com/allegroai/clearml/issues/340) - Add `Logger.set_default_debug_sample_history()` and `Logger.get_default_debug_sample_history()` to allow controlling maximum debug samples programmatically - Add populate now stores function arg types as part of the hyperparameters @@ -40,8 +40,8 @@ This release is not backwards compatible - see notes below on upgrading **Bug Fixes** -- Fix and upgrade the SlackMonitor [ClearML Github issue #533](https://github.com/allegroai/clearml/issues/533) -- Fix network issues causing Task to stop on status change when no status change has occurred [ClearML Github issue #535](https://github.com/allegroai/clearml/issues/535) +- Fix and upgrade the SlackMonitor [ClearML GitHub issue #533](https://github.com/allegroai/clearml/issues/533) +- Fix network issues causing Task to stop on status change when no status change has occurred [ClearML GitHub issue #535](https://github.com/allegroai/clearml/issues/535) - Fix Pipeline controller function support for dict as input argument - Fix uploading the same metric/variant from multiple processes in threading mode should create a unique file per process (since global counter is not passed between the subprocesses) - Fix resource monitoring should only run in the main process when using threaded logging mode diff --git a/docs/release_notes/ver_1_5.md b/docs/release_notes/ver_1_5.md index abc66c0c..000b5b64 100644 --- a/docs/release_notes/ver_1_5.md +++ b/docs/release_notes/ver_1_5.md @@ -48,7 +48,7 @@ for user/password when cloning/fetching repositories) ### ClearML SDK 1.5.0 **New Features and Improvements** -* Add support for single value metric reporting ClearML GitHub issue [ClearML Github issue #400](https://github.com/allegroai/clearml/issues/400) +* Add support for single value metric reporting [ClearML GitHub issue #400](https://github.com/allegroai/clearml/issues/400) * Add support for specifying parameter sections in `PipelineDecorator` [ClearML GitHub issue #629](https://github.com/allegroai/clearml/issues/629) * Add support for parallel uploads and downloads (upload / download and zip / unzip of artifacts) * Add support for specifying execution details (repository, branch, commit, packages, image) in `PipelineDecorator`
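The single-value metric reporting added in SDK 1.5.0 (first bullet above) is exposed on the logger; a minimal sketch (project, task, and metric names are placeholders):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="single value demo")
# Report one standalone number instead of a per-iteration series
task.get_logger().report_single_value(name="test accuracy", value=0.912)
```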
diff --git a/docs/webapp/applications/apps_overview.md b/docs/webapp/applications/apps_overview.md index e4749a3d..b1cb8256 100644 --- a/docs/webapp/applications/apps_overview.md +++ b/docs/webapp/applications/apps_overview.md @@ -47,7 +47,7 @@ The prefilled configuration wizard can be edited before launching the new app in ::: ## App Instance Actions -Access app instance actions, by right clicking an instance, or through the menu button Dot menu (available on hover). +Access app instance actions by right-clicking an instance, or through the menu button Dot menu (available on hover). ![App context menu](../../img/app_context_menu.png) diff --git a/docs/webapp/pipelines/webapp_pipeline_table.md b/docs/webapp/pipelines/webapp_pipeline_table.md index 85d06f90..d15e2921 100644 --- a/docs/webapp/pipelines/webapp_pipeline_table.md +++ b/docs/webapp/pipelines/webapp_pipeline_table.md @@ -10,7 +10,7 @@ View the runs table in table view Details view, using the buttons on the top left of the page. Use the table view for a comparative view of your runs according to columns of interest. Use the details view to access a selected run’s details, while keeping the pipeline runs list in view. -Details view can also be accessed by double clicking a specific pipeline run in the table view to open its details view. +Details view can also be accessed by double-clicking a specific pipeline run in the table view to open its details view. ![Pipeline runs table](../../img/webapp_pipeline_runs_table.png) @@ -39,7 +39,7 @@ Customize the table using any of the following: to view and select columns to show. Click **Metric** and **Hyper Parameter** to add the respective custom columns * [Filter columns](#filtering-columns) * Sort columns -* Resize columns - Drag the column separator to change the width of that column. Double click the column separator for +* Resize columns - Drag the column separator to change the width of that column. Double-click the column separator for automatic fit. Changes are persistent (cached in the browser) and represented in the URL, so customized settings can be saved in a @@ -70,7 +70,7 @@ To clear all active filters, click -* In the pipeline runs table, right click a run, or hover over a pipeline and click Dot menu +* In the pipeline runs table, right-click a run, or hover over a pipeline and click Dot menu * In a pipeline info panel, click the menu button Bar menu | Action | Description | States Valid for the Action | State Transition | diff --git a/docs/webapp/webapp_archiving.md b/docs/webapp/webapp_archiving.md index 02a92dab..32e547d9 100644 --- a/docs/webapp/webapp_archiving.md +++ b/docs/webapp/webapp_archiving.md @@ -16,7 +16,7 @@ When archiving an experiment: * Archive an experiment or model from either the: - * Experiments or models table - Right click the experiment or model **>** **Archive**. + * Experiments or models table - Right-click the experiment or model **>** **Archive**. * Info panel or full screen details view - Click Bars menu (menu) **>** **Archive**. * Archive multiple experiments or models from the: diff --git a/docs/webapp/webapp_exp_reproducing.md b/docs/webapp/webapp_exp_reproducing.md index 9e94d165..d6f7fef2 100644 --- a/docs/webapp/webapp_exp_reproducing.md +++ b/docs/webapp/webapp_exp_reproducing.md @@ -18,7 +18,7 @@ Experiments can also be modified and then executed remotely, see [Tuning Experim * On the Dashboard, click a recent experiment, project card, or **VIEW ALL** and then click a project card. * On the Projects page, click project card, or the **All projects** card. -1. Reproduce the experiment. In the experiments table, right click and then either: +1. Reproduce the experiment. In the experiments table, right-click and then either: * Clone (make an exact copy) diff --git a/docs/webapp/webapp_exp_sharing.md b/docs/webapp/webapp_exp_sharing.md index 13ebb524..d794f0fb 100644 --- a/docs/webapp/webapp_exp_sharing.md +++ b/docs/webapp/webapp_exp_sharing.md @@ -22,7 +22,7 @@ Share experiments from the experiments table, the info panel menu, and/or the fu 1. Click **Share** in one of these ways:
- * The experiment table - Right click the experiment **>** **Share** + * The experiment table - Right-click the experiment **>** **Share** * The info panel or full screen details view - Click the experiment **>** Menu (menu) **>** **Share**. diff --git a/docs/webapp/webapp_exp_table.md b/docs/webapp/webapp_exp_table.md index b68651fc..55bde3fc 100644 --- a/docs/webapp/webapp_exp_table.md +++ b/docs/webapp/webapp_exp_table.md @@ -10,7 +10,7 @@ View the experiments table in table view Details view, using the buttons on the top left of the page. Use the table view for a comparative view of your experiments according to columns of interest. Use the details view to access a selected experiment’s details, while keeping the experiment list -in view. Details view can also be accessed by double clicking a specific experiment in the table view to open its details view. +in view. Details view can also be accessed by double-clicking a specific experiment in the table view to open its details view. :::info To assist in focusing on active experimentation, experiments and models can be archived, so they will not appear @@ -45,7 +45,7 @@ The experiments table default and customizable columns are described in the foll Customize the table using any of the following: * Dynamic column order - Drag a column title to a different position. -* Resize columns - Drag the column separator to change the width of that column. Double click the column separator for +* Resize columns - Drag the column separator to change the width of that column. Double-click the column separator for automatic fit. * Changing table columns * Show / hide columns - Click Setting Gear @@ -68,7 +68,7 @@ Changes are persistent (cached in the browser), and represented in the URL so cu bookmark and shared with other ClearML users to collaborate. :::note -The following experiments-table customizations are saved on a **per project** basis: +The following experiments-table customizations are saved on a **per-project** basis: * Columns order * Column width * Active sort order @@ -132,7 +132,7 @@ The following table describes the actions that can be done from the experiments that allow each operation. Access these actions with the context menu in any of the following ways: -* In the experiments table,right click an experiment or hover over an experiment and click Dot menu +* In the experiments table, right-click an experiment or hover over an experiment and click Dot menu * In an experiment info panel, click the menu button Bar menu | Action | Description | States Valid for the Action | State Transition | diff --git a/docs/webapp/webapp_model_table.md b/docs/webapp/webapp_model_table.md index f9e60873..1fc589b5 100644 --- a/docs/webapp/webapp_model_table.md +++ b/docs/webapp/webapp_model_table.md @@ -9,7 +9,7 @@ View the models table in table view Details view, using the buttons on the top left of the page. Use the table view for a comparative view of your models according to columns of interest. Use the details view to access a selected model’s details, while keeping the model list in view. -Details view can also be accessed by double clicking a specific model in the table view to open its details view. +Details view can also be accessed by double-clicking a specific model in the table view to open its details view.
![Models table](../img/webapp_models_01.png) @@ -39,7 +39,7 @@ can be saved in a browser bookmark and shared with other ClearML users to collab Customize the table using any of the following: * Dynamic column order - Drag a column title to a different position. -* Resize columns - Drag the column separator to change the width of that column. Double click the column separator for +* Resize columns - Drag the column separator to change the width of that column. Double-click the column separator for automatic fit. * Changing table columns * Show / hide columns - Click Setting Gear @@ -51,7 +51,7 @@ Customize the table using any of the following: * Sort columns - By metadata, ML framework, description, and last update elapsed time. :::note -The following models-table customizations are saved on a **per project** basis: +The following models-table customizations are saved on a **per-project** basis: * Columns order * Column width * Active sort order @@ -68,12 +68,12 @@ The following table describes the actions that can be done from the models table allow each feature. Model states are *Draft* (editable) and *Published* (read-only). Access these actions with the context menu in any of the following ways: -* In the models table, right click a model, or hover over a model and click Dot menu +* In the models table, right-click a model, or hover over a model and click Dot menu * In a model's info panel, click the menu button Bar menu | ClearML Action | Description | States Valid for the Action | |---|---|--| -| Details | View model details, which include general information, the model configuration, and label enumeration. Can also be accessed by double clicking a model in the models table | Any state | +| Details | View model details, which include general information, the model configuration, and label enumeration. Can also be accessed by double-clicking a model in the models table | Any state | | Publish | Publish a model to prevent changes to it. *Published* models are read-only. If a model is Published, its experiment also becomes Published (read-only). | *Draft* | | Archive | To more easily work with active models, move a model to the archive. See [Archiving](webapp_archiving.md). | Any state | | Restore | Action available in the archive. Restore a model to the active model table. | Any state | diff --git a/docs/webapp/webapp_model_viewing.md b/docs/webapp/webapp_model_viewing.md index c9c8e39e..5f9b5b2c 100644 --- a/docs/webapp/webapp_model_viewing.md +++ b/docs/webapp/webapp_model_viewing.md @@ -2,7 +2,7 @@ title: Model Details --- -In the models table, double click on a model to view and / or modify the following: +In the models table, double-click on a model to view and / or modify the following: * General model information * Model configuration * Model label enumeration diff --git a/docs/webapp/webapp_profile.md b/docs/webapp/webapp_profile.md index 4a0f1239..a4481737 100644 --- a/docs/webapp/webapp_profile.md +++ b/docs/webapp/webapp_profile.md @@ -144,7 +144,7 @@ file entries will be overridden by the vault values. Fill in values using any of ClearML supported configuration formats: HOCON / JSON / YAML. **To edit vault contents:** -1. Click **EDIT** or double click the vault box +1. Click **EDIT** or double-click the vault box 1. Insert / edit the configurations in the vault 1. Press **OK**