mirror of https://github.com/gpt-omni/mini-omni
synced 2024-11-21 23:37:38 +00:00
Update README.md
parent cd7ec672eb
commit 5f798575b6
@@ -11,7 +11,7 @@ Mini-Omni: Language Models Can Hear, Talk While Thinking in Streaming
 | 📑 <a href="https://arxiv.org/abs/2408.16725">Technical report</a>
 </p>
 
-Mini-Omni is an open-source multimodel large language model that can **hear, talk while thinking**. Featuring real-time end-to-end speech input and **streaming audio output** conversational capabilities.
+Mini-Omni is an open-source multimodal large language model that can **hear, talk while thinking**. Featuring real-time end-to-end speech input and **streaming audio output** conversational capabilities.
 
 <p align="center">
     <img src="data/figures/frameworkv3.jpg" width="100%"/>