# Mini-Omni
<p align="center"><strong style="font-size: 18px;">
Mini-Omni: Language Models Can Hear, Talk While Thinking in Streaming
</strong>
</p>

<p align="center">
🤗 <a href="https://huggingface.co/gpt-omni/mini-omni">Hugging Face</a> | 📖 <a href="https://github.com/gpt-omni/mini-omni">Github</a>
| 📑 <a href="https://arxiv.org/abs/2408.16725">Technical report</a>
</p>
Mini-Omni is an open-source multimodal large language model that can **hear and talk while thinking**, featuring real-time end-to-end speech input and **streaming audio output** conversational capabilities.
<p align="center">
<img src="data/figures/frameworkv3.jpg" width="100%" />
</p>
## Features
✅ **Real-time speech-to-speech** conversational capabilities. No extra ASR or TTS models required.
✅ **Talking while thinking**, with the ability to generate text and audio at the same time (see the conceptual sketch after this list).
✅ **Streaming audio output** capabilities.
✅ **Batch inference** in both "audio-to-text" and "audio-to-audio" modes to further boost performance.
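
Conceptually, "talking while thinking" is parallel decoding: at each step the model emits one text token plus a set of audio-codec (SNAC) tokens, and the audio tokens can be decoded to waveform chunks and played as they arrive. The sketch below only illustrates this idea; the names `model.prefill`, `model.step`, and `model.eos_id` are hypothetical, so see `inference.py` for the real decoding loop.

```python
# Conceptual sketch of talk-while-thinking parallel decoding.
# `model.prefill` / `model.step` / `model.eos_id` are illustrative names,
# not the repo's actual API.
def generate_streaming(model, prompt_tokens, max_steps=500):
    state = model.prefill(prompt_tokens)  # encode the speech prompt
    for _ in range(max_steps):
        # One forward pass yields a text token AND one token per audio codebook.
        text_token, audio_tokens, state = model.step(state)
        # The audio tokens can be decoded to a waveform chunk immediately,
        # which is what makes the output streamable.
        yield text_token, audio_tokens
        if text_token == model.eos_id:
            break
```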
## Demo
NOTE: you need to unmute the video first.
https://github.com/user-attachments/assets/03bdde05-9514-4748-b527-003bea57f118
## Install
Create a new conda environment and install the required packages:
```sh
conda create -n omni python=3.10
conda activate omni
git clone https://github.com/gpt-omni/mini-omni.git
cd mini-omni
pip install -r requirements.txt
```
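
After installing, a quick sanity check can confirm the environment is usable; this assumes `requirements.txt` pulls in PyTorch (the whisper/litGPT/snac stack is PyTorch-based):

```python
# Quick environment sanity check (run inside the `omni` conda env).
import torch

print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())  # a GPU is strongly recommended
```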
## Quick start
**Interactive demo**
- start server
```sh
sudo apt-get install ffmpeg
conda activate omni
cd mini-omni
python3 server.py --ip '0.0.0.0' --port 60808
```
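
If you would rather script against the server than use the web UIs, a streaming HTTP client along these lines should work. The `/chat` route comes from the `API_URL` used by the demos below, but the request payload shown here (a base64-encoded WAV under an `audio` key) is an assumption; check `server.py` for the actual schema.

```python
# Minimal client sketch for the Mini-Omni server.
# The payload format is an assumption; see server.py for the real schema.
import base64
import requests

API_URL = "http://0.0.0.0:60808/chat"

with open("question.wav", "rb") as f:  # any recorded question as a WAV file
    payload = {"audio": base64.b64encode(f.read()).decode("utf-8")}

# stream=True lets us consume the audio reply chunk by chunk as it is generated.
with requests.post(API_URL, json=payload, stream=True) as resp:
    resp.raise_for_status()
    with open("reply.wav", "wb") as out:
        for chunk in resp.iter_content(chunk_size=4096):
            out.write(chunk)
```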
- run streamlit demo
NOTE: you need to run Streamlit locally with PyAudio installed.

NOTE: for the error `ModuleNotFoundError: No module named 'utils.vad'`, run `export PYTHONPATH=./` first.
```sh
pip install PyAudio==0.2.14
API_URL=http://0.0.0.0:60808/chat streamlit run webui/omni_streamlit.py
```
- run gradio demo
```sh
API_URL=http://0.0.0.0:60808/chat python3 webui/omni_gradio.py
```
Example:
NOTE: you need to unmute first. Gradio does not seem to play audio streams instantly, so the latency feels a bit longer.
https://github.com/user-attachments/assets/29187680-4c42-47ff-b352-f0ea333496d9
**Local test**
```sh
conda activate omni
cd mini-omni
# test run the preset audio samples and questions
python inference.py
```
## Common issues
- Error: `ModuleNotFoundError: No module named 'utils.xxxx'`
Answer: run `export PYTHONPATH=./` first.
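
If exporting the variable every session is a nuisance, the same effect can be achieved from inside a script; a small convenience sketch (not part of the repo):

```python
# Equivalent of `export PYTHONPATH=./` when launching from the repo root.
import os
import sys

sys.path.insert(0, os.getcwd())  # makes `utils` importable as a top-level package
```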
## Acknowledgements
- [Qwen2](https://github.com/QwenLM/Qwen2/) as the LLM backbone.
- [litGPT](https://github.com/Lightning-AI/litgpt/) for training and inference.
- [whisper](https://github.com/openai/whisper/) for audio encoding.
- [snac](https://github.com/hubertsiuzdak/snac/) for audio decoding.
- [CosyVoice](https://github.com/FunAudioLLM/CosyVoice) for generating synthetic speech.
- [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) and [MOSS](https://github.com/OpenMOSS/MOSS/tree/main) for alignment.
## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=gpt-omni/mini-omni&type=Date)](https://star-history.com/#gpt-omni/mini-omni&Date)