Mirror of https://github.com/clearml/clearml-docs, synced 2025-06-26 18:17:44 +00:00
Small edits (#595)
@@ -17,7 +17,7 @@ The example script does the following:
1. Builds a sequential model using a categorical cross entropy loss objective function.
1. Specifies accuracy as the metric, and uses two callbacks: a TensorBoard callback and a model checkpoint callback.
1. During script execution, creates an experiment named `Keras with TensorBoard example`, which is associated with the
-`examples` project (in script) or the `Colab notebooks` project (in Jupyter Notebook) .
+`examples` project (in script) or the `Colab notebooks` project (in Jupyter Notebook).
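The Keras setup that the list items above describe (sequential model, categorical cross entropy loss, accuracy metric, TensorBoard and model-checkpoint callbacks) can be sketched roughly as follows. This is a minimal illustration assuming TensorFlow/Keras; the layer sizes, `log_dir`, and checkpoint path are placeholders, not values from the example script.

```python
# Hedged sketch of the setup described above; layer sizes and paths are
# illustrative placeholders, not taken from the example script.
from tensorflow import keras

# Sequential model compiled with a categorical cross entropy loss objective.
model = keras.Sequential([
    keras.layers.Input(shape=(784,)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="categorical_crossentropy",  # loss objective function
    metrics=["accuracy"],             # accuracy as the metric
)

# Two callbacks: a TensorBoard callback and a model checkpoint callback.
callbacks = [
    keras.callbacks.TensorBoard(log_dir="./logs"),
    keras.callbacks.ModelCheckpoint(
        filepath="./ckpt.weights.h5", save_weights_only=True
    ),
]
# The callbacks would then be passed to model.fit(..., callbacks=callbacks).
```

With ClearML integrated (via `Task.init`), the TensorBoard output and checkpoints produced by these callbacks are what get captured by the experiment.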
## Scalars
@@ -205,7 +205,7 @@ The logs show the Task ID and accuracy for the best model in **CONSOLE**.

-The link to the model details is in **ARTIFACTS** **>** **Output Model** .
+The link to the model details is in **ARTIFACTS** **>** **Output Model**.

@@ -103,7 +103,7 @@ In `slack_alerts.py`, the class `SlackMonitor` inherits from the `Monitor` class
* Builds the Slack message which includes the most recent output to the console (retrieved by calling [`Task.get_reported_console_output`](../../references/sdk/task.md#get_reported_console_output)),
and the URL of the Task's output log in the ClearML Web UI (retrieved by calling [`Task.get_output_log_web_page`](../../references/sdk/task.md#get_output_log_web_page)).
-The example provides the option to run locally or execute remotely by calling the [`Task.execute_remotely`](../../references/sdk/task.md#execute_remotely)
+You can run the example remotely by calling the [`Task.execute_remotely`](../../references/sdk/task.md#execute_remotely)
method.
To interface to Slack, the example uses `slack_sdk.WebClient` and `slack_sdk.errors.SlackApiError`.
@@ -22,7 +22,7 @@ In the `examples/frameworks/pytorch` directory, run the experiment script:
Clone the experiment to create an editable copy for tuning.
-1. In the **ClearML Web-App (UI)**, on the Projects page, click the `examples` project card.
+1. In the ClearML WebApp (UI), on the Projects page, click the `examples` project card.
1. In the experiments table, right-click the experiment `pytorch mnist train`.
@@ -82,7 +82,7 @@ Run the worker daemon on the local development machine.
Enqueue the tuned experiment.
-1. In the **ClearML Web-App (UI)**, experiments table, right-click the experiment `Clone Of pytorch mnist train`.
+1. In the ClearML WebApp > experiments table, right-click the experiment `Clone Of pytorch mnist train`.
1. In the context menu, click **Enqueue**.
@@ -95,7 +95,7 @@ Enqueue the tuned experiment.
## Step 6: Compare the Experiments
To compare the original and tuned experiments:
-1. In the **ClearML Web-App (UI)**, on the Projects page, click the `examples` project.
+1. In the ClearML WebApp (UI), on the Projects page, click the `examples` project.
1. In the experiments table, select the checkboxes for the two experiments: `pytorch mnist train` and `Clone Of pytorch mnist train`.
1. On the menu bar at the bottom of the experiments table, click **COMPARE**. The experiment comparison window appears.
All differences appear with a different background color to highlight them.