# FlashMLA
FlashMLA is an efficient MLA decoding kernel for Hopper GPUs, optimized for serving variable-length sequences.
Currently released:
- BF16
- Paged KV cache with a block size of 64 (a layout sketch follows this list)
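
A paged KV cache keeps each sequence's cached keys and values in fixed-size blocks drawn from a shared pool and addressed through a per-sequence block table. The sketch below shows one way such a layout could look; every tensor name, shape, and size in it is an illustrative assumption (only the block size of 64 comes from the feature list above), not part of FlashMLA's documented API.

```python
import torch

# Illustrative paged-KV-cache layout. All shapes and sizes are assumptions,
# except the block size of 64 used by the released kernel.
block_size = 64                      # tokens per cache block
num_blocks = 4096                    # blocks in the shared pool (assumption)
h_kv, d = 1, 576                     # KV heads and head dim (assumption)
batch_size, max_blocks_per_seq = 8, 128

# Shared pool of cache blocks, BF16 to match the released kernel.
kvcache = torch.zeros(num_blocks, block_size, h_kv, d,
                      dtype=torch.bfloat16, device="cuda")

# block_table[i, j] = pool index of the j-th block owned by sequence i.
block_table = torch.zeros(batch_size, max_blocks_per_seq,
                          dtype=torch.int32, device="cuda")

# cache_seqlens[i] = number of tokens currently cached for sequence i.
cache_seqlens = torch.zeros(batch_size, dtype=torch.int32, device="cuda")
```

Because blocks have a fixed size, appending a token only touches a sequence's last (possibly partially filled) block, and sequences of very different lengths can share one pool.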
## Quick start
### Install
```bash
python setup.py install
```
### Benchmark
```bash
python tests/test_flash_mla.py
```
The benchmark reaches up to 3000 GB/s in memory-bound configurations and 580 TFLOPS in compute-bound configurations on an H800 SXM5 with CUDA 12.6.
### Usage
```python
from flash_mla import get_mla_metadata, flash_mla_with_kvcache

# Scheduling metadata depends only on sequence lengths and head counts,
# so it is computed once per batch and reused for every layer.
tile_scheduler_metadata, num_splits = get_mla_metadata(cache_seqlens, s_q * h_q // h_kv, h_kv)

for i in range(num_layers):
    ...
    # One paged-attention decode call per layer.
    o_i, lse_i = flash_mla_with_kvcache(
        q_i, kvcache_i, block_table, cache_seqlens, dv,
        tile_scheduler_metadata, num_splits, causal=True,
    )
    ...
```
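
For orientation, the sketch below spells out inputs that could drive the call above for a single decode step. The shapes and sizes are assumptions inferred from the argument names, not documented requirements; `kvcache_i`, `block_table`, and `cache_seqlens` would come from a paged KV cache such as the layout sketched in the introduction.

```python
import torch

# Illustrative sizes for one decode step (all values are assumptions).
batch_size = 8        # requests decoded together
s_q = 1               # query tokens per request at decode time
h_q, h_kv = 128, 1    # query heads and latent KV heads (assumption)
d, dv = 576, 512      # query/key head dim and value head dim (assumption)

# Per-layer query tensor, BF16 to match the released kernel (assumed layout).
q_i = torch.randn(batch_size, s_q, h_q, d, dtype=torch.bfloat16, device="cuda")
```

`o_i` then holds the attention output for layer `i`, and `lse_i` is, judging by its name, the softmax log-sum-exp of the attention scores.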
## Requirements
- Hopper GPUs
- CUDA 12.3 and above
- PyTorch 2.0 and above
## Acknowledgement
FlashMLA is inspired by the [FlashAttention 2&3](https://github.com/dao-AILab/flash-attention/) and [CUTLASS](https://github.com/nvidia/cutlass) projects.
## Citation
```bibtex
@misc{flashmla2025,
  title={FlashMLA: Efficient MLA decoding kernel},
  author={Jiashi Li},
  year={2025},
  publisher={GitHub},
  howpublished={\url{https://github.com/deepseek-ai/FlashMLA}},
}
```