mirror of https://github.com/gpt-omni/mini-omni
synced 2024-11-25 05:21:39 +00:00
Update README.md
This commit is contained in:
parent
40ba0c7c3a
commit
6f06a106d7
README.md · 23 changed lines
@@ -1,20 +1,21 @@
 # Mini-Omni
 
-<p align="center">
+<p align="center"><strong style="font-size: 18px;">
 Mini-Omni: Language Models Can Hear, Talk While Thinking in Streaming
-<p>
+</strong>
+</p>
 
 <p align="center">
 🤗 <a href="">Hugging Face</a> | 📖 <a href="https://github.com/gpt-omni/mini-omni">Github</a>
-| 📑 <a href="">Technical report (coming soon)</a>
-<p>
+| 📑 <a href="https://arxiv.org/abs/2408.16725">Technical report</a>
+</p>
 
 Mini-Omni is an open-source multimodal large language model that can **hear, talk while thinking**, featuring real-time end-to-end speech input and **streaming audio output** conversational capabilities.
 
 <p align="center">
 <img src="data/figures/frameworkv3.jpg" width="100%"/>
-<p>
+</p>
 
 
 ## Features
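
The description in this hunk is the project's core claim: speech in, streaming speech out, over a single chat endpoint. Below is a minimal sketch of what consuming such a stream could look like, assuming the `/chat` endpoint that the server command further down exposes (`API_URL=http://0.0.0.0:60808/chat`) accepts base64-encoded audio and streams raw audio bytes back; the payload shape and response framing here are assumptions for illustration, not the project's documented contract.

```python
# Hypothetical streaming client for the mini-omni server.
# Assumptions (not from the README): POST body carries base64 audio,
# and the reply is streamed back as raw audio bytes in chunks.
import base64

import requests

API_URL = "http://0.0.0.0:60808/chat"  # endpoint from the README's server command


def stream_chat(wav_path: str, out_path: str = "reply.pcm") -> None:
    # Encode the recorded question as base64 (payload shape is hypothetical).
    with open(wav_path, "rb") as f:
        payload = {"audio": base64.b64encode(f.read()).decode()}

    # stream=True lets us consume the reply chunk by chunk, mirroring the
    # "streaming audio output" the README describes.
    with requests.post(API_URL, json=payload, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        with open(out_path, "wb") as out:
            for chunk in resp.iter_content(chunk_size=4096):
                if chunk:  # skip keep-alive chunks
                    out.write(chunk)


if __name__ == "__main__":
    stream_chat("question.wav")
```

Writing each chunk out as it arrives, rather than buffering the whole reply, is what keeps the perceived latency low; the Gradio note later in this diff describes a frontend that cannot quite do that.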
@@ -29,7 +30,10 @@ Mini-Omni is an open-source multimodal large language model that can **hear, tal
 
 ## Demo
 
-![](./data/demo_streamlit.mov)
+NOTE: you need to unmute the player first.
+
+https://github.com/user-attachments/assets/03bdde05-9514-4748-b527-003bea57f118
 
 
 ## Install
 
@@ -71,7 +75,10 @@ API_URL=http://0.0.0.0:60808/chat python3 webui/omni_gradio.py
 
 example:
 
-![](./data/demo_gradio.mov)
+NOTE: you need to unmute the player first. Gradio does not seem to play the audio stream as soon as it arrives, so the latency feels a bit longer.
+
+https://github.com/user-attachments/assets/29187680-4c42-47ff-b352-f0ea333496d9
 
 
 **Local test**
 
@@ -89,4 +96,4 @@ python inference.py
 - [whisper](https://github.com/openai/whisper/) for audio encoding.
 - [snac](https://github.com/hubertsiuzdak/snac/) for audio decoding.
 - [CosyVoice](https://github.com/FunAudioLLM/CosyVoice) for generating synthetic speech.
 - [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) and [MOSS](https://github.com/OpenMOSS/MOSS/tree/main) for alignment.
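
The first two acknowledgements describe the model's audio path: whisper on the input side, snac on the output side. The sketch below exercises both libraries' public APIs; the model size, file path, and dummy waveform are placeholders, and Mini-Omni itself consumes whisper's encoder features rather than finished transcripts, so this only shows the building blocks.

```python
# Sketch of the two audio components credited above (placeholders marked).
import torch
import whisper  # pip install openai-whisper
from snac import SNAC  # pip install snac

# Input side: whisper handles incoming speech. Mini-Omni uses whisper's
# audio encoder; plain transcription is shown here only because it is the
# simplest way to exercise the library.
asr = whisper.load_model("base")  # model size is a placeholder
print(asr.transcribe("question.wav")["text"])  # file path is a placeholder

# Output side: SNAC decodes discrete audio codes back into a waveform.
codec = SNAC.from_pretrained("hubertsiuzdak/snac_24khz").eval()
with torch.inference_mode():
    # Round-trip one second of dummy 24 kHz audio (batch, channel, samples)
    # just to show the encode/decode API surface.
    codes = codec.encode(torch.randn(1, 1, 24000))
    waveform = codec.decode(codes)
print(waveform.shape)
```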