Update README

This commit is contained in:
alnoam 2024-03-11 13:53:54 +02:00
parent d7e34002ce
commit 195658af0e


@@ -1,6 +1,12 @@
<div align="center">
# 🚀 🔥 Fractional GPU! ⚡ 📣
## Run multiple containers on the same GPU with driver level memory limitation ✨ and compute time-slicing 🎊
`🌟 Leave a star to support the project! 🌟`
</div>
## 🔰 Introduction
Sharing high-end GPUs or even prosumer & consumer GPUs between multiple users is the most cost-effective
@@ -45,6 +51,7 @@ Here is an example output from A100 GPU:
+---------------------------------------------------------------------------------------+
```
### 🐳 Containers
| Memory Limit | CUDA Ver | Ubuntu Ver | Docker Image |
|:-------------:|:--------:|:----------:|:----------------------------------------:|
@@ -84,12 +91,12 @@ processes and other host processes when limiting memory / utilization usage
Build your own containers and inherit from the original containers.
You can find a few examples [here](https://github.com/allegroai/clearml-fractional-gpu/docker-examples).
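As a sketch, a derived container simply starts `FROM` one of the fractional-GPU base images and layers project dependencies on top. The base image tag and file names below are illustrative; substitute a real tag from the containers table above:

```dockerfile
# Illustrative base tag -- pick a real one from the containers table above
FROM clearml/fractional-gpu:u22-cu12.3-8gb

# Layer your own dependencies on top of the memory-limited base image
RUN pip install torch torchvision

# Hypothetical entrypoint for this sketch
COPY train.py /workspace/train.py
WORKDIR /workspace
CMD ["python", "train.py"]
```

Because the memory limit is enforced by the base image, the derived container inherits it without any extra configuration.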
## ☸ Kubernetes
Fractional GPU containers can be used on bare-metal executions as well as Kubernetes PODs.
Yes! By using one of the Fractional GPU containers you can limit the memory consumption of your Job/Pod and
easily share GPUs without fear that they will crash one another with out-of-memory errors!
Here's a simple Kubernetes POD template:
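A minimal sketch of such a POD spec follows. The image tag is illustrative (pick one from the containers table above), and cluster-specific GPU scheduling details (device plugin, shared-GPU configuration) are omitted:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fractional-gpu-example   # hypothetical name for this sketch
spec:
  containers:
  - name: trainer
    # Illustrative tag: substitute a real one from the containers table above
    image: clearml/fractional-gpu:u22-cu12.3-8gb
    resources:
      limits:
        nvidia.com/gpu: 1
```

The memory limit is baked into the container image itself, so the POD spec stays a plain GPU request; the driver-level limit keeps co-located PODs from exhausting each other's GPU memory.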
@@ -123,7 +130,9 @@ processes and other host processes when limiting memory / utilization usage
The containers support Nvidia drivers <= `545.x.x`
We will keep updating & supporting new drivers as they continue to be released
**Supported GPUs**: RTX series 10, 20, 30, 40, A series, and Data-Center P100, A100, A10/A40, L40/s, H100
**Limitations**: Windows host machines are currently not supported; if this is important to you, leave a request in the [Issues](/issues) section
## ❓ FAQ