# FlashMLA
FlashMLA is an efficient MLA decoding kernel for Hopper GPUs, optimized for variable-length sequence serving.

Currently released:
- BF16, FP16, E4M3
- Paged kvcache with a block size of 64 (see the sketch below)
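
As a quick illustration of the paged kvcache, here is a minimal sketch of how a cache with 64-token blocks can be indexed through a block table. The variable names and the contiguous block assignment are illustrative assumptions, not part of the FlashMLA API.

```python
import torch

block_size = 64  # FlashMLA's paged kvcache block size

# Current KV lengths of three sequences (illustrative values).
cache_seqlens = torch.tensor([130, 65, 200], dtype=torch.int32)

# block_table[i, j] names the physical cache block that holds tokens
# [j * 64, (j + 1) * 64) of sequence i; blocks are handed out contiguously here.
num_seqs = cache_seqlens.numel()
blocks_per_seq = (int(cache_seqlens.max()) + block_size - 1) // block_size
block_table = torch.arange(num_seqs * blocks_per_seq, dtype=torch.int32).view(num_seqs, blocks_per_seq)

# Token t of sequence i lives in block block_table[i, t // 64] at slot t % 64.
i, t = 0, 129
print(int(block_table[i, t // block_size]), t % block_size)  # -> block 2, slot 1
```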
## Quick start
### Install
```bash
python setup.py install
```
### Benchmark
```bash
python tests/test_flash_mla.py
```
FlashMLA achieves up to 3000 GB/s in memory-bound configurations and 580 TFLOPS in compute-bound configurations on an H800 SXM5 with CUDA 12.8.
### Usage
```python
from flash_mla import get_mla_metadata, flash_mla_with_kvcache

tile_scheduler_metadata, num_splits = get_mla_metadata(cache_seqlens, s_q * h_q // h_kv, h_kv)

for i in range(num_layers):
    ...
    o_i, lse_i = flash_mla_with_kvcache(
        q_i, kvcache_i, block_table, cache_seqlens, dv,
        tile_scheduler_metadata, num_splits, causal=True,
    )
    ...
```
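For context, the following is a more self-contained sketch of a single decoding step. The tensor shapes, DeepSeek-style head dimensions (d = 576, dv = 512), single KV head, and contiguous block-table layout are assumptions for illustration only; see tests/test_flash_mla.py for the authoritative shapes and dtypes.

```python
import torch
from flash_mla import get_mla_metadata, flash_mla_with_kvcache

# Illustrative sizes (assumptions, not requirements of the API).
b, s_q = 4, 1            # batch size, query tokens per request (one decode step)
h_q, h_kv = 128, 1       # query heads, KV heads
d, dv = 576, 512         # assumed QK head dim and value head dim
block_size = 64          # paged kvcache block size
max_seqlen = 1024

# Per-request KV lengths and a block table mapping each request to its cache blocks.
cache_seqlens = torch.full((b,), max_seqlen, dtype=torch.int32, device="cuda")
blocks_per_seq = (max_seqlen + block_size - 1) // block_size
block_table = torch.arange(b * blocks_per_seq, dtype=torch.int32, device="cuda").view(b, blocks_per_seq)

# BF16 query and paged KV cache, assumed laid out as (num_blocks, block_size, h_kv, d).
q = torch.randn(b, s_q, h_q, d, dtype=torch.bfloat16, device="cuda")
kvcache = torch.randn(b * blocks_per_seq, block_size, h_kv, d, dtype=torch.bfloat16, device="cuda")

# Scheduling metadata is computed once per batch and reused across all layers.
tile_scheduler_metadata, num_splits = get_mla_metadata(cache_seqlens, s_q * h_q // h_kv, h_kv)

o, lse = flash_mla_with_kvcache(
    q, kvcache, block_table, cache_seqlens, dv,
    tile_scheduler_metadata, num_splits, causal=True,
)
```

In a real server, `kvcache` would be filled by prefill and previous decode steps rather than with random values.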
## Requirements
- Hopper GPUs
- CUDA 12.3 and above
  - **But we highly recommend 12.8 or above for the best performance**
- PyTorch 2.0 and above
## Acknowledgement
FlashMLA is inspired by the [FlashAttention 2&3](https://github.com/dao-AILab/flash-attention/) and [CUTLASS](https://github.com/nvidia/cutlass) projects.
## Community Support
### MetaX
For MetaX GPUs, visit the official website: [MetaX](https://www.metax-tech.com).

The corresponding FlashMLA version can be found at: [MetaX-MACA/FlashMLA](https://github.com/MetaX-MACA/FlashMLA).
### Moore Threads
For the Moore Threads GPU, visit the official website: [Moore Threads](https://www.mthreads.com/).

The corresponding FlashMLA version is available on GitHub: [MooreThreads/MT-flashMLA](https://github.com/MooreThreads/MT-flashMLA).
### Hygon DCU
For the Hygon DCU, visit the official website: [Hygon Developer](https://developer.sourcefind.cn/).

The corresponding FlashMLA version is available here: [OpenDAS/MLAttention](https://developer.sourcefind.cn/codes/OpenDAS/MLAttention).
### Intellifusion
For the Intellifusion NNP, visit the official website: [Intellifusion](https://www.intellif.com).

The corresponding FlashMLA version is available on Gitee: [Intellifusion/tyllm](https://gitee.com/Intellifusion_2025/tyllm/blob/master/python/tylang/flash_mla.py).
### Iluvatar Corex
For Iluvatar Corex GPUs, visit the official website: [Iluvatar Corex](https://www.iluvatar.com).

The corresponding FlashMLA version is available on GitHub: [Deep-Spark/FlashMLA](https://github.com/Deep-Spark/FlashMLA/tree/iluvatar_flashmla).
## Citation
```bibtex
@misc{flashmla2025,
      title={FlashMLA: Efficient MLA decoding kernels},
      author={Jiashi Li},
      year={2025},
      publisher = {GitHub},
      howpublished = {\url{https://github.com/deepseek-ai/FlashMLA}},
}
```