> [!NOTE]
>
> You must execute the container with `--pid=host`!
>
> **`--pid=host`** is required to allow the driver to differentiate between the container's
> processes and other host processes when limiting memory / utilization usage.
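For example, a minimal sketch of launching one of the containers (the image tag here is illustrative; pick one of the published fractional GPU images):

```bash
# --gpus exposes the GPU(s); --pid=host lets the driver tell
# container processes apart from other host processes
docker run -it --gpus all --pid=host clearml/fractional-gpu:u22-cu12.3-8gb bash
```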
> [!TIP]
>
> **[ClearML-Agent](https://clear.ml/docs/latest/docs/clearml_agent/) users: add `--pid=host` to the `agent.extra_docker_arguments` section in your [config file](https://github.com/allegroai/clearml-agent/blob/c9fc092f4eea9c3890d582aa2a098c3c2f39ce72/docs/clearml.conf#L190)** (see the snippet below).
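A sketch of the relevant `clearml.conf` entry (the value is a list of strings, following the default config file linked above):

```
agent {
    # extra arguments appended to the "docker run" command
    extra_docker_arguments: ["--pid=host"]
}
```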
## 🔩 Customization
Build your own containers that inherit from the original containers.
You can find a few examples [here](https://github.com/allegroai/clearml-fractional-gpu/examples).
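For instance, a minimal sketch of a derived image (the base tag is an assumption; see the examples linked above for the real ones):

```Dockerfile
# inherit from one of the fractional GPU containers (tag is illustrative)
FROM clearml/fractional-gpu:u22-cu12.3-8gb

# layer your own dependencies on top
# (assuming pip3 is available in the base image; otherwise install it first)
RUN pip3 install torch torchvision
```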
## 🌸 Implications
Our fractional GPU containers can be used on bare-metal executions as well as Kubernetes PODs.
Yes! By using one of our fractional GPU containers you can limit the memory consumption of your Job/Pod and
easily share GPUs without fearing they will crash one another by exhausting GPU memory!

> [!NOTE]
>
> **`hostPID: true`** is required to allow the driver to differentiate between the pod's
> processes and other host processes when limiting memory / utilization usage.
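A minimal sketch of a pod spec with `hostPID` enabled (pod name, image tag, and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fractional-gpu-job          # example name
spec:
  hostPID: true                     # lets the driver tell pod processes from host processes
  containers:
    - name: train
      image: clearml/fractional-gpu:u22-cu12.3-8gb   # example tag
      command: ["python3", "train.py"]               # your workload
      resources:
        limits:
          nvidia.com/gpu: 1
```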
## 🔌 Support & Limitations
The containers support Nvidia drivers up to and including `545.x.x`.
We will keep updating & supporting new drivers as they are released.

**Supported GPUs**: GeForce GTX 10 series, GeForce RTX 20 / 30 / 40 series, RTX A series, and data-center P100, A100, A10/A40, L40/L40S, H100.
## ❓ FAQ
- **Q**: Will running `nvidia-smi` inside the container report the local processes' GPU consumption? <br>
**A**: Yes, `nvidia-smi` communicates directly with the low-level drivers and reports both the accurate container GPU memory usage and the container-local memory limitation. <br>
Note that GPU utilization will be the global (i.e. host-side) GPU utilization and not the specific local container's GPU utilization.
- **Q**: Is there a way for my Python / PyTorch / TensorFlow process to bypass the memory limitation? <br>
**A**: We are sure a malicious user will find a way. It was never our intention to protect against malicious users;
if you have a malicious user with access to your machines, fractional GPUs are not your number 1 problem 😃
- **Q**: How can I programmatically detect the memory limitation? <br>
**A**: You can check the OS environment variable `GPU_MEM_LIMIT_GB` (see the sketch after this list).
Note that changing it will not remove or modify the limitation.
- **Q**: Is running the container **with** `--pid=host` secure / safe? <br>
**A**: It should be both secure and safe. The main caveat from a security perspective is that
a container process can see any command line running on the host system.
If a process command line contains a "secret", then yes, this could become a potential data leak.
Note that passing "secrets" on the command line is ill-advised, and hence we do not consider it a security risk.
That said, if security is key, the enterprise edition (see below) eliminates the need to run with `--pid=host` and is thus fully secure.
- **Q**: Can you run the container **without** `--pid=host`? <br>
**A**: You can! But you will have to use the enterprise version of the clearml-fractional-gpu container
(otherwise the memory limit is applied system-wide instead of container-wide). If this feature is important to you, please contact [ClearML sales & support](https://clear.ml/contact-us).
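A minimal sketch of reading the limit from inside the container (variable name as documented above):

```bash
# prints the enforced GPU memory limit, in GB;
# modifying the variable does NOT change the actual enforced limit
echo "GPU memory limit: ${GPU_MEM_LIMIT_GB} GB"
```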
## 📄 License
Usage license is granted for **personal**, **research**, **development**, or **educational** purposes only.
A commercial license is available as part of the [ClearML commercial solution](https://clear.ml).
## 🤖 Commercial & Enterprise version
ClearML offers enterprise and commercial licenses adding many additional features on top of fractional GPUs;
these include orchestration, priority queues, quota management, a compute cluster dashboard,
dataset management & experiment management, as well as enterprise-grade security and support.
Learn more about [ClearML Orchestration](https://clear.ml) or talk to us directly at [ClearML sales](https://clear.ml/contact-us).