A Task is also automatically assigned a unique identifier (a UUID string) that cannot be changed and will always locate the same Task in the system.
ClearML supports logging `argparse` module arguments out of the box, so once it is integrated into the code, it automatically logs all parameters provided to the argument parser.
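For example, a minimal sketch of that integration (the project and task names, and the arguments themselves, are illustrative):

```python
import argparse

from clearml import Task

# Initializing a Task enables automatic argparse logging
task = Task.init(project_name='examples', task_name='argparse logging')

parser = argparse.ArgumentParser()
parser.add_argument('--epochs', type=int, default=3)
parser.add_argument('--lr', type=float, default=0.4)
args = parser.parse_args()  # parsed values are logged automatically
```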
It's also possible to log parameter dictionaries (very useful when parsing an external config file and storing it as a dict object),
whole configuration files, and even custom objects or [Hydra](https://hydra.cc/docs/intro/) configurations!
```python
params_dictionary = {'epochs': 3, 'lr': 0.4}
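# Connected values are logged and become editable in the web UI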
task.connect(params_dictionary)
```
Check out [Hyperparameters](../../fundamentals/hyperparameters.md) for all hyperparameter logging options.
## Log Artifacts
ClearML allows you to easily store the output products of an experiment: model snapshots (weights files), preprocessed data, feature representations of the data, and more!
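For example, a minimal sketch of manually uploading an artifact (the artifact name and file path below are placeholders):

```python
from clearml import Task

task = Task.init(project_name='examples', task_name='artifacts example')

# Upload a local file as an artifact of this Task;
# the name and path are placeholders
task.upload_artifact(name='preprocessed data', artifact_object='data/preprocessed.csv')
```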
From now on, whenever the framework (TF/Keras/PyTorch, etc.) stores a snapshot, the model file is automatically uploaded to the configured storage bucket, under a folder specific to the experiment.
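A minimal sketch of enabling that automatic upload, assuming a storage target is passed to `Task.init` (the bucket URI is a placeholder):

```python
from clearml import Task

# output_uri sets the destination for automatically uploaded model snapshots;
# the S3 bucket below is a placeholder
task = Task.init(
    project_name='examples',
    task_name='model snapshots',
    output_uri='s3://my-bucket/models',
)
```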
As before, we first get the instance of the Task that trained the original weights file; then we can query that Task for its output models (a list of snapshots) and grab the latest snapshot.
As with artifacts, all models are cached, meaning the next time we run this code, no model will need to be downloaded.
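A minimal sketch of that flow (the Task ID is a placeholder):

```python
from clearml import Task

# Get the Task that trained the original weights (the ID is a placeholder)
prev_task = Task.get_task(task_id='aabbccddee')

# Query the Task for its output models and take the latest snapshot
last_snapshot = prev_task.models['output'][-1]

# Download the weights file; the local copy is cached for future runs
local_weights_path = last_snapshot.get_local_copy()
```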
Once one of the frameworks loads the weights file, the running Task is automatically updated, with its “Input Model” pointing directly to the original training Task's Model.
This feature lets you easily trace the full genealogy of every model your system trains and uses!
## Log Metrics
Full metrics logging is the key to finding the best-performing model!
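For explicit reporting, here is a minimal sketch using the Task's logger (the titles and values are illustrative):

```python
from clearml import Task

task = Task.init(project_name='examples', task_name='metrics example')
logger = task.get_logger()

# Report a scalar series; each series appears as a plot in the web UI
for iteration in range(10):
    loss = 1.0 / (iteration + 1)  # placeholder value
    logger.report_scalar(title='loss', series='train', value=loss, iteration=iteration)
```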
Once everything is neatly logged and displayed, using the [comparison tool](../../webapp/webapp_exp_comparing.md) makes it easy to find the best configuration!
## Track Experiments
The experiment table is a powerful tool for creating dashboards and views of your own projects, your team's projects, or the entire development effort.
![image](../../img/webapp_exp_table_01.png)
### Creating Leaderboards
The [experiments table](../../webapp/webapp_exp_table.md) can be customized to your needs, adding views of the parameters, metrics, and tags you care about.
It's possible to filter and sort based on parameters and metrics, so creating custom views is simple and flexible.
Create a dashboard for a project that presents the latest models and their accuracy scores, for immediate insight.
It can also be used as a live leaderboard, showing the status of the best-performing experiments, updated in real time.
This is helpful for monitoring your projects' progress and sharing it across the organization.
Any page is shareable by copying the URL from the address bar, allowing you to bookmark leaderboards or send an exact view of a specific experiment or a comparison.