From 48e0a1d4532a1f8731580c32a82a4719c0ecc784 Mon Sep 17 00:00:00 2001
From: pollfly <75068813+pollfly@users.noreply.github.com>
Date: Sun, 16 Apr 2023 10:13:04 +0300
Subject: [PATCH] Small edits (#533)
---
docs/clearml_data/clearml_data_sdk.md | 4 ++--
.../data_man_simple.md | 2 +-
docs/clearml_sdk/task_sdk.md | 16 ++++++-------
docs/faq.md | 24 +++++++++----------
docs/fundamentals/hyperparameters.md | 2 +-
.../docker/extra_docker_shell_script.md | 2 +-
docs/guides/ide/integration_pycharm.md | 2 +-
.../examples_hyperparam_opt.md | 4 ++--
docs/guides/pipeline/pipeline_controller.md | 4 ++--
.../reporting/manual_matplotlib_reporting.md | 2 +-
docs/guides/services/aws_autoscaler.md | 4 ++--
docs/guides/set_offline.md | 4 ++--
docs/pipelines/pipelines_sdk_tasks.md | 2 +-
.../applications/apps_aws_autoscaler.md | 2 +-
.../applications/apps_gcp_autoscaler.md | 2 +-
docs/webapp/webapp_project_overview.md | 2 +-
16 files changed, 39 insertions(+), 39 deletions(-)
diff --git a/docs/clearml_data/clearml_data_sdk.md b/docs/clearml_data/clearml_data_sdk.md
index 82d49692..c7d8e8e4 100644
--- a/docs/clearml_data/clearml_data_sdk.md
+++ b/docs/clearml_data/clearml_data_sdk.md
@@ -8,7 +8,7 @@ See [Hyper-Datasets](../hyperdatasets/overview.md) for ClearML's advanced querya
:::
Datasets can be created, modified, and managed with ClearML Data's python interface. You can upload your dataset to any
-storage service of your choice (S3 / GS / Azure / Network Storage) by setting the dataset’s upload destination (see
+storage service of your choice (S3 / GS / Azure / Network Storage) by setting the dataset’s upload destination (see
[`output_url`](#uploading-files) parameter of `Dataset.upload` method). Once you have uploaded your dataset, you can access
it from any machine.
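+
+A minimal sketch of creating a dataset and setting its upload destination (the bucket URL is a placeholder):
+
+```python
+from clearml import Dataset
+
+# create a new dataset
+dataset = Dataset.create(dataset_name="my dataset", dataset_project="examples")
+# add local files to the dataset
+dataset.add_files(path="path/to/data")
+# upload the files to the specified storage (placeholder bucket URL)
+dataset.upload(output_url="s3://my-bucket/datasets")
+# finalize the dataset, making it immutable and ready to be consumed
+dataset.finalize()
+```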
@@ -26,7 +26,7 @@ from clearml import Dataset
ClearML Data supports multiple ways to create datasets programmatically, which provides for a variety of use-cases:
* [`Dataset.create()`](#datasetcreate) - Create a new dataset. Parent datasets can be specified, from which the new dataset
will inherit its data
-* [`Dataset.squash()`](#datasetsquash) - Generate a new dataset from by squashing together a set of related datasets
+* [`Dataset.squash()`](#datasetsquash) - Generate a new dataset by squashing together a set of related datasets
You can add metadata to your datasets using the `Dataset.set_metadata` method, and access the metadata using the
`Dataset.get_metadata` method. See [`set_metadata`](../references/sdk/dataset.md#set_metadata) and [`get_metadata`](../references/sdk/dataset.md#get_metadata).
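+
+A minimal sketch of attaching and reading back metadata (keys are illustrative; `dataset` as created above):
+
+```python
+# attach a metadata dictionary to the dataset
+dataset.set_metadata(metadata={"source": "sensor-a", "rows": 1000})
+# retrieve the stored metadata
+metadata = dataset.get_metadata()
+```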
diff --git a/docs/clearml_data/data_management_examples/data_man_simple.md b/docs/clearml_data/data_management_examples/data_man_simple.md
index 28ffc1cd..acad2637 100644
--- a/docs/clearml_data/data_management_examples/data_man_simple.md
+++ b/docs/clearml_data/data_management_examples/data_man_simple.md
@@ -51,7 +51,7 @@ to capture all files and sub-folders:
After creating a dataset, its ID doesn't need to be specified when running commands, such as `add`, `remove`, or `list`
:::
-3. Close the dataset - this command uploads the files. By default, the files are uploaded to the file server, but
+3. Close the dataset - this command uploads the files. By default, the files are uploaded to the file server, but
this can be configured with the `--storage` flag to any of ClearML's supported storage mediums (see [storage](../../integrations/storage.md)).
The command also finalizes the dataset, making it immutable and ready to be consumed.
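+
+For example, a sketch of closing the dataset while uploading to an S3 bucket (the bucket URL is a placeholder):
+
+```bash
+clearml-data close --storage s3://my-bucket/datasets
+```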
diff --git a/docs/clearml_sdk/task_sdk.md b/docs/clearml_sdk/task_sdk.md
index 28cf123c..2b36b27b 100644
--- a/docs/clearml_sdk/task_sdk.md
+++ b/docs/clearml_sdk/task_sdk.md
@@ -63,7 +63,7 @@ After invoking `Task.init` in a script, ClearML starts its automagical logging,
* Command Line Parsing - ClearML captures any command line parameters passed when invoking code that uses standard python packages, including:
* [click](https://click.palletsprojects.com) (see code example [here](https://github.com/allegroai/clearml/blob/master/examples/frameworks/click/click_multi_cmd.py)).
* argparse (see argparse logging example [here](../guides/reporting/hyper_parameters.md)).
- * [Python Fire](https://github.com/google/python-fire) - see code examples [here](https://github.com/allegroai/clearml/tree/master/examples/frameworks/fire).
+ * [Python Fire](https://github.com/google/python-fire) - see code examples [here](https://github.com/allegroai/clearml/tree/master/examples/frameworks/fire).
* [LightningCLI](https://lightning.ai/docs/pytorch/stable/cli/lightning_cli.html#lightning-cli) - see code example [here](https://github.com/allegroai/clearml/blob/master/examples/frameworks/jsonargparse/pytorch_lightning_cli.py).
* TensorFlow Definitions (`absl-py`)
* [Hydra](https://github.com/facebookresearch/hydra) - the OmegaConf object, which holds all the configuration files, as well as overridden values.
@@ -151,7 +151,7 @@ Pass one of the following in the `continue_last_task` parameter:
[Task Reuse](#task-reuse)).
* `True` - Continue the previously run Task.
* Task ID (string) - The ID of the task to be continued.
-* Initial iteration offset (Integer) - Specify the initial iteration offset. By default, the task will continue one
+* Initial iteration offset (Integer) - Specify the initial iteration offset. By default, the task will continue one
iteration after the last reported one. Pass `0` to disable the automatic last iteration offset. To also specify a
task ID, use the `reuse_last_task_id` parameter.
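+
+A minimal sketch of continuing a previously run task (project and task names are placeholders):
+
+```python
+from clearml import Task
+
+# continue the previously run task, resuming from one iteration after the last reported one
+task = Task.init(
+    project_name="examples",
+    task_name="continued experiment",
+    continue_last_task=True,
+)
+```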
@@ -337,7 +337,7 @@ cloned = Task.clone(
A newly cloned task has a [draft](../fundamentals/task.md#task-states) status, so it's modifiable.
-Once a task is modified, launch it by pushing it into an execution queue with the [Task.enqueue](../references/sdk/task.md#taskenqueue)
+Once a task is modified, launch it by pushing it into an execution queue with the [`Task.enqueue`](../references/sdk/task.md#taskenqueue)
class method. Then a [ClearML Agent](../clearml_agent.md) assigned to the queue will pull the task from the queue and execute
it.
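+
+A minimal sketch of enqueuing the modified clone (the queue name is a placeholder):
+
+```python
+# push the modified clone into an execution queue for an agent to pick up
+Task.enqueue(task=cloned, queue_name="default")
+```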
@@ -359,7 +359,7 @@ A compelling workflow is:
1. Run code on a development machine for a few iterations, or just set up the environment.
1. Move the execution to a beefier remote machine for the actual training.
-Use the [Task.execute_remotely](../references/sdk/task.md#execute_remotely) method to implement this workflow. This method
+Use the [`Task.execute_remotely`](../references/sdk/task.md#execute_remotely) method to implement this workflow. This method
stops the current manual execution, and then re-runs it on a remote machine.
For example:
@@ -406,7 +406,7 @@ Function tasks must be created from within a regular task, created by calling `T
You can work with tasks in Offline Mode, in which all the data and logs that the Task captures are stored in a local
folder, which can later be uploaded to the [ClearML Server](../deploying_clearml/clearml_server.md).
-Before initializing a Task, use the [Task.set_offline](../references/sdk/task.md#taskset_offline) class method and set
+Before initializing a Task, use the [`Task.set_offline`](../references/sdk/task.md#taskset_offline) class method and set
the `offline_mode` argument to `True`. The method returns the Task ID and a path to the session folder.
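+
+A minimal sketch of initializing a task in offline mode (project and task names are placeholders):
+
+```python
+from clearml import Task
+
+# all data and logs captured from here on are stored in a local session folder
+Task.set_offline(offline_mode=True)
+task = Task.init(project_name="examples", task_name="offline experiment")
+```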
:::caution
@@ -433,7 +433,7 @@ Upload the execution data that the Task captured offline to the ClearML Server u
```
Pass the path to the zip folder containing the session with the `--import-offline-session` parameter.
-* [Task.import_offline_session](../references/sdk/task.md#taskimport_offline_session) class method
+* [`Task.import_offline_session`](../references/sdk/task.md#taskimport_offline_session) class method
```python
from clearml import Task
Task.import_offline_session(session_folder_zip="path/to/session/.clearml/cache/offline/b786845decb14eecadf2be24affc7418.zip")
@@ -560,7 +560,7 @@ output_model = OutputModel(task=task, framework="PyTorch")
### Updating Models Manually
The snapshots of manually uploaded models aren't automatically captured. To update a task's model, use the
-[Task.update_output_model](../references/sdk/task.md#update_output_model) method:
+[`Task.update_output_model`](../references/sdk/task.md#update_output_model) method:
```python
task.update_output_model(model_path='path/to/model')
@@ -728,7 +728,7 @@ config_file_yaml = task.connect_configuration(
### User Properties
A task’s user properties do not impact task execution, so you can add / modify the properties at any stage. Add user
-properties to a task with the [Task.set_user_properties](../references/sdk/task.md#set_user_properties) method.
+properties to a task with the [`Task.set_user_properties`](../references/sdk/task.md#set_user_properties) method.
```python
task.set_user_properties(
diff --git a/docs/faq.md b/docs/faq.md
index 459eac82..da82b670 100644
--- a/docs/faq.md
+++ b/docs/faq.md
@@ -162,7 +162,7 @@ that metric column.
**Can I store more information on the models?**
-Yes! For example, you can use the [Task.set_model_label_enumeration](references/sdk/task.md#set_model_label_enumeration)
+Yes! For example, you can use the [`Task.set_model_label_enumeration`](references/sdk/task.md#set_model_label_enumeration)
method to store label enumeration:
```python
@@ -176,7 +176,7 @@ For more information about `Task` class methods, see the [Task Class](fundamenta
**Can I store the model configuration file as well?**
-Yes! Use the [Task.set_model_config](references/sdk/task.md#set_model_config)
+Yes! Use the [`Task.set_model_config`](references/sdk/task.md#set_model_config)
method:
```python
@@ -196,9 +196,9 @@ This will be improved in a future version.
**Can I log input and output models manually?**
-Yes! Use the [InputModel.import_model](references/sdk/model_inputmodel.md#inputmodelimport_model)
-and [Task.connect](references/sdk/task.md#connect) methods to manually connect an input model. Use the
-[OutputModel.update_weights](references/sdk/model_outputmodel.md#update_weights)
+Yes! Use the [`InputModel.import_model`](references/sdk/model_inputmodel.md#inputmodelimport_model)
+and [`Task.connect`](references/sdk/task.md#connect) methods to manually connect an input model. Use the
+[`OutputModel.update_weights`](references/sdk/model_outputmodel.md#update_weights)
method to manually connect a model weights file.
```python
@@ -288,7 +288,7 @@ Yes! ClearML provides multiple ways to configure your task and track your parame
In addition to argparse, ClearML also automatically captures and tracks command line parameters created using [click](https://click.palletsprojects.com/),
[Python Fire](https://github.com/google/python-fire), and/or [LightningCLI](https://lightning.ai/docs/pytorch/stable/cli/lightning_cli.html#lightning-cli).
-ClearML also supports tracking code-level configuration dictionaries using the [Task.connect](references/sdk/task.md#connect) method.
+ClearML also supports tracking code-level configuration dictionaries using the [`Task.connect`](references/sdk/task.md#connect) method.
For example, the code below connects hyperparameters (`learning_rate`, `batch_size`, `display_step`,
`model_path`, `n_hidden_1`, and `n_hidden_2`) to a task:
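+
+A minimal sketch of such a call (values are illustrative; `task` is assumed to come from a prior `Task.init`):
+
+```python
+parameters = {
+    "learning_rate": 0.001,
+    "batch_size": 100,
+    "display_step": 10,
+    "model_path": "/tmp/model.ckpt",
+    "n_hidden_1": 256,
+    "n_hidden_2": 256,
+}
+# connect the dictionary to the task; values may be overridden when running remotely
+parameters = task.connect(parameters)
+```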
@@ -309,7 +309,7 @@ See more task configuration options [here](fundamentals/hyperparameters.md).
**I noticed that all of my experiments appear as "Training." Are there other options?**
-Yes! When creating experiments and calling [Task.init](references/sdk/task.md#taskinit),
+Yes! When creating experiments and calling [`Task.init`](references/sdk/task.md#taskinit),
you can provide an experiment type. ClearML supports [multiple experiment types](fundamentals/task.md#task-types). For example:
```python
@@ -503,12 +503,12 @@ See [`Task.init`](references/sdk/task.md#taskinit).
Yes! You can use ClearML's Offline Mode, in which all the data and logs that a task captures from the code are stored in a
local folder.
-Before initializing a task, use the [Task.set_offline](references/sdk/task.md#taskset_offline)
+Before initializing a task, use the [`Task.set_offline`](references/sdk/task.md#taskset_offline)
class method and set the `offline_mode` argument to `True`. When executed, this returns the Task ID and a path to the
session folder. To upload the execution data that the Task captured offline to the ClearML Server, do one of the
following:
* Use the `--import-offline-session` option of the [clearml-task](apps/clearml_task.md) CLI
-* Use the [Task.import_offline_session](references/sdk/task.md#taskimport_offline_session) method.
+* Use the [`Task.import_offline_session`](references/sdk/task.md#taskimport_offline_session) method.
See [Storing Task Data Offline](guides/set_offline.md).
@@ -589,7 +589,7 @@ tutorial, which includes a list of methods for explicit reporting.
**How can I report more than one scatter 2D series on the same plot?**
-The [`Logger.report_scatter2d()`](references/sdk/logger.md#report_scatter2dtitle-series-scatter-iteration-xaxisnone-yaxisnone-labelsnone-modelines-commentnone-extra_layoutnone)
+The [`Logger.report_scatter2d`](references/sdk/logger.md#report_scatter2d)
method reports all series with the same `title` and `iteration` parameter values on the same plot.
For example, the following two scatter2D series are reported on the same plot, because both have a `title` of `example_scatter` and an `iteration` of `1`:
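+
+A minimal sketch of such a report (the scatter data is illustrative):
+
+```python
+import numpy as np
+from clearml import Task
+
+task = Task.init(project_name="examples", task_name="scatter demo")
+logger = task.get_logger()
+
+# both calls share the same title and iteration, so the series appear on the same plot
+scatter_1 = np.random.randint(0, 10, size=(6, 2))
+scatter_2 = np.random.randint(0, 10, size=(6, 2))
+logger.report_scatter2d(title="example_scatter", series="series_1", iteration=1, scatter=scatter_1)
+logger.report_scatter2d(title="example_scatter", series="series_2", iteration=1, scatter=scatter_2)
+```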
@@ -628,7 +628,7 @@ experiment info panel > EXECUTION tab.
**I read there is a feature for centralized model storage. How do I use it?**
-When calling [Task.init](references/sdk/task.md#taskinit),
+When calling [`Task.init`](references/sdk/task.md#taskinit),
providing the `output_uri` parameter lets you specify the location in which model checkpoints (snapshots) will be stored.
For example, to store model checkpoints (snapshots) in `/mnt/shared/folder`:
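+
+A minimal sketch of such a call (project and task names are placeholders):
+
+```python
+from clearml import Task
+
+# model checkpoints (snapshots) will be stored under /mnt/shared/folder
+task = Task.init(
+    project_name="examples",
+    task_name="central model storage",
+    output_uri="/mnt/shared/folder",
+)
+```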
@@ -744,7 +744,7 @@ Yes! You can run ClearML in Jupyter Notebooks using either of the following:
pip install clearml
-1. Use the [Task.set_credentials](references/sdk/task.md#taskset_credentials)
+1. Use the [`Task.set_credentials`](references/sdk/task.md#taskset_credentials)
method to specify the host, port, access key and secret key (see step 1).
```python
# Set your credentials using the clearml apiserver URI and port, access_key, and secret_key.
diff --git a/docs/fundamentals/hyperparameters.md b/docs/fundamentals/hyperparameters.md
index 1fb8a077..c499f69f 100644
--- a/docs/fundamentals/hyperparameters.md
+++ b/docs/fundamentals/hyperparameters.md
@@ -24,7 +24,7 @@ the following types of parameters:
* Command line parsing - command line parameters passed when invoking code that uses standard python packages, including:
* [click](https://click.palletsprojects.com) - see code example [here](https://github.com/allegroai/clearml/blob/master/examples/frameworks/click/click_multi_cmd.py).
* [argparse](https://docs.python.org/3/library/argparse.html) - see code example [here](../guides/frameworks/pytorch/pytorch_tensorboardx.md).
- * [Python Fire](https://github.com/google/python-fire) - see code examples [here](https://github.com/allegroai/clearml/tree/master/examples/frameworks/fire).
+ * [Python Fire](https://github.com/google/python-fire) - see code examples [here](https://github.com/allegroai/clearml/tree/master/examples/frameworks/fire).
* [LightningCLI](https://lightning.ai/docs/pytorch/stable/cli/lightning_cli.html#lightning-cli) - see code example [here](https://github.com/allegroai/clearml/blob/master/examples/frameworks/jsonargparse/pytorch_lightning_cli.py).
* TensorFlow Definitions (`absl-py`). See examples of ClearML's automatic logging of TF Defines:
* [TensorFlow MNIST](../guides/frameworks/tensorflow/tensorflow_mnist.md)
diff --git a/docs/guides/docker/extra_docker_shell_script.md b/docs/guides/docker/extra_docker_shell_script.md
index ace9c0dd..d77a042a 100644
--- a/docs/guides/docker/extra_docker_shell_script.md
+++ b/docs/guides/docker/extra_docker_shell_script.md
@@ -36,5 +36,5 @@ it is commented out, make sure to uncomment the line. We will use the example sc
and now it will execute the `extra_docker_shell_script` that was put in the configuration file. Then the code will be
executed in the updated docker container. If we look at the console output in the web UI, the third entry should start
with `Executing: ['docker', 'run', '-t', '--gpus...'`, and towards the end of the entry, where the downloaded packages are
- mentioned, we can see the additional shell-script `apt-get install -y bindfs`.
+ mentioned, we can see the additional shell script command `apt-get install -y bindfs`.
diff --git a/docs/guides/ide/integration_pycharm.md b/docs/guides/ide/integration_pycharm.md
index d949082a..d412439c 100644
--- a/docs/guides/ide/integration_pycharm.md
+++ b/docs/guides/ide/integration_pycharm.md
@@ -37,7 +37,7 @@ the settings in the ClearML configuration file.
1. Configure your ClearML server information:
1. API server (for example: ``http://localhost:8008``)
1. Web server (for example: ``http://localhost:8080``)
- 1. File server (for example: ``http://localhost:8081``)
+ 1. File server (for example: ``http://localhost:8081``)
1. Add ClearML user credentials key/secret.
diff --git a/docs/guides/optimization/hyper-parameter-optimization/examples_hyperparam_opt.md b/docs/guides/optimization/hyper-parameter-optimization/examples_hyperparam_opt.md
index 85856044..aeb9411d 100644
--- a/docs/guides/optimization/hyper-parameter-optimization/examples_hyperparam_opt.md
+++ b/docs/guides/optimization/hyper-parameter-optimization/examples_hyperparam_opt.md
@@ -157,7 +157,7 @@ Specify the queue to use for remote execution. This is overridden if the optimiz
Specify the remaining parameters, including the time limit per Task (minutes), period for checking the optimization (minutes), maximum number of jobs to launch, minimum and maximum number of iterations for each Task.
```python
# Optional: Limit the execution time of a single experiment, in minutes.
- # (this is optional, and if using OptimizerBOHB, it is ignored)
+ # (this is optional, and if using OptimizerBOHB, it is ignored)
time_limit_per_job=10.,
# Checking the experiments every 6 seconds is way too often; we should probably set it to 5 min,
# assuming a single experiment is usually hours...
@@ -179,7 +179,7 @@ Specify the remaining parameters, including the time limit per Task (minutes), p
## Running as a Service
-The optimization can run as a service, if the `run_as_service` argument is set to `true`. For more information about
+The optimization can run as a service if the `run_as_service` argument is set to `true`. For more information about
running as a service, see [Services Mode](../../../clearml_agent.md#services-mode).
```python
diff --git a/docs/guides/pipeline/pipeline_controller.md b/docs/guides/pipeline/pipeline_controller.md
index 4810faad..aeee8a87 100644
--- a/docs/guides/pipeline/pipeline_controller.md
+++ b/docs/guides/pipeline/pipeline_controller.md
@@ -69,7 +69,7 @@ The sections below describe in more detail what happens in the controller task a
where the first step’s artifact is fed into the second step.
Special pre-execution and post-execution logic is added for this step through the use of `pre_execute_callback`
- and `post_execute_callback` respectively.
+ and `post_execute_callback`, respectively.
```python
pipe.add_step(
@@ -110,7 +110,7 @@ does the following:
remote_url='https://github.com/allegroai/events/raw/master/odsc20-east/generic/iris_dataset.pkl'
)
```
-1. Store the data as an artifact named `dataset` using [`Task.upload_artifact`](../../references/sdk/task.md#upload_artifact)
+1. Store the data as an artifact named `dataset` using [`Task.upload_artifact`](../../references/sdk/task.md#upload_artifact)
```python
# add and upload local file containing our toy dataset
task.upload_artifact('dataset', artifact_object=local_iris_pkl)
diff --git a/docs/guides/reporting/manual_matplotlib_reporting.md b/docs/guides/reporting/manual_matplotlib_reporting.md
index 4fc07c28..cc2e8034 100644
--- a/docs/guides/reporting/manual_matplotlib_reporting.md
+++ b/docs/guides/reporting/manual_matplotlib_reporting.md
@@ -7,7 +7,7 @@ example demonstrates using ClearML to log plots and images generated by Matplotl
## Plots
-The Matplotlib and Seaborn plots that are reported using the [Logger.report_matplotlib_figure](../../references/sdk/logger.md#report_matplotlib_figure)
+The Matplotlib and Seaborn plots that are reported using the [`Logger.report_matplotlib_figure`](../../references/sdk/logger.md#report_matplotlib_figure)
method appear in the experiment’s **PLOTS**.

diff --git a/docs/guides/services/aws_autoscaler.md b/docs/guides/services/aws_autoscaler.md
index 2433cc56..0715eada 100644
--- a/docs/guides/services/aws_autoscaler.md
+++ b/docs/guides/services/aws_autoscaler.md
@@ -23,7 +23,7 @@ The autoscaler service uses by default the `NVIDIA Deep Learning AMI v20.11.0-4
### Running the Script
:::info Self deployed ClearML server
-A template `AWS Auto-Scaler` task is available in the `DevOps Services` project.
+A template `AWS Auto-Scaler` task is available in the `DevOps Services` project.
You can clone it, adapt its [configuration](#configuration) to your needs, and enqueue it for execution directly from the ClearML UI.
:::
@@ -140,7 +140,7 @@ Execution log https://app.clear.ml/projects/142a598b5d234bebb37a57d692f5689f/exp
```
### Remote Execution
-Using the `--remote` command line option will enqueue the autoscaler to your [`services` queue](../../clearml_agent.md#services-mode)
+Using the `--remote` command line option will enqueue the autoscaler to your [`services` queue](../../clearml_agent.md#services-mode)
once the configuration wizard is complete:
```bash
diff --git a/docs/guides/set_offline.md b/docs/guides/set_offline.md
index 2bece98b..c7bdd565 100644
--- a/docs/guides/set_offline.md
+++ b/docs/guides/set_offline.md
@@ -8,7 +8,7 @@ local folder, which can be later uploaded to the [ClearML Server](../deploying_c
## Setting Task to Offline Mode
-Before initializing a Task, use the [Task.set_offline](../references/sdk/task.md#taskset_offline) class method and set the
+Before initializing a Task, use the [`Task.set_offline`](../references/sdk/task.md#taskset_offline) class method and set the
`offline_mode` argument to `True`.
:::caution
@@ -52,7 +52,7 @@ Upload the session's execution data that the Task captured offline to the ClearM
Pass the path to the zip folder containing the session with the `--import-offline-session` parameter.
-* [Task.import_offline_session](../references/sdk/task.md#taskimport_offline_session) method.
+* [`Task.import_offline_session`](../references/sdk/task.md#taskimport_offline_session) method.
```python
from clearml import Task
diff --git a/docs/pipelines/pipelines_sdk_tasks.md b/docs/pipelines/pipelines_sdk_tasks.md
index 54fb6698..47bba11c 100644
--- a/docs/pipelines/pipelines_sdk_tasks.md
+++ b/docs/pipelines/pipelines_sdk_tasks.md
@@ -132,7 +132,7 @@ As each function is transformed into an independently executed step, it needs to
all package imports inside the function are automatically logged as required packages for the pipeline step.
:::
-Function steps are added using the [`PipelineController.add_function_step`](../references/sdk/automation_controller_pipelinecontroller.md#add_function_step)
+Function steps are added using the [`PipelineController.add_function_step`](../references/sdk/automation_controller_pipelinecontroller.md#add_function_step)
method:
```python
diff --git a/docs/webapp/applications/apps_aws_autoscaler.md b/docs/webapp/applications/apps_aws_autoscaler.md
index bba8952c..7c0c47ab 100644
--- a/docs/webapp/applications/apps_aws_autoscaler.md
+++ b/docs/webapp/applications/apps_aws_autoscaler.md
@@ -52,7 +52,7 @@ each instance is spun up.
* EC2 Tags (Optional) - AWS instance tags to attach to launched EC2 instances. Insert key=value pairs, separated by
commas
* EBS Device (Optional) - Disk mount point
- * EBS Volume Size (Optional) - Disk size (GB)
+ * EBS Volume Size (Optional) - Disk size (GB)
* EBS Volume Type (Optional) - See [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html)
for full list of types
* Instance Key Pair (Optional) - AWS key pair that is provided to the spun-up EC2 instances for connecting to them via
diff --git a/docs/webapp/applications/apps_gcp_autoscaler.md b/docs/webapp/applications/apps_gcp_autoscaler.md
index 4016ccfa..72a48851 100644
--- a/docs/webapp/applications/apps_gcp_autoscaler.md
+++ b/docs/webapp/applications/apps_gcp_autoscaler.md
@@ -40,7 +40,7 @@ when each VM instance is spun up.
* Use Preemptible Instance - Choose whether VM instances of this type will be [preemptible](https://cloud.google.com/compute/docs/instances/preemptible)
* Max Number of Instances - Maximum number of concurrent running VM instances of this type allowed
* Monitored Queue - Queue associated with this VM instance type. The tasks enqueued to this queue will be executed on VM instances of this type
- * Machine Image (Optional) - The GCP machine image to launch
+ * Machine Image (Optional) - The GCP machine image to launch
* Disc Size (in GB) (Optional)
* \+ Add Item - Define another resource type
* **Autoscaler Instance Name** (Optional) - Name for the Autoscaler instance. This will appear in the instance list
diff --git a/docs/webapp/webapp_project_overview.md b/docs/webapp/webapp_project_overview.md
index b9fb1a4b..4f2d959f 100644
--- a/docs/webapp/webapp_project_overview.md
+++ b/docs/webapp/webapp_project_overview.md
@@ -29,7 +29,7 @@ or any network resource such as issue tracker, web repository, etc.
### Editing the Description
-To edit the description in the **OVERVIEW** tab, hover over the description section, and press the **EDIT** button that
+To edit the description in the **OVERVIEW** tab, hover over the description section, and press the **EDIT** button that
appears on the top right of the window.
When using the Markdown editor, you can make use of features such as bullets,