mirror of https://github.com/open-webui/docs (synced 2025-06-16 11:28:36 +00:00)
Update openedai-speech-integration.md
commit bea96c094e (parent 61fe80f8bb)
@@ -110,6 +110,8 @@ Under `TTS Voice` within the same audio settings menu in the admin panel, you ca
* `tts-1-hd` via [Coqui AI/TTS](https://github.com/coqui-ai/TTS) XTTS v2 voice cloning (fast, but requires around 4 GB of GPU VRAM and an Nvidia GPU with CUDA): custom cloned voices can be used for `tts-1-hd`. See the [Custom Voices Howto](https://github.com/matatonic/openedai-speech/blob/main/docs/custom_voices.md).
+ * [Multilingual Support](https://github.com/matatonic/openedai-speech#multilingual) with XTTS voices
* Beta [parler-tts](https://huggingface.co/parler-tts/parler_tts_mini_v0.1) support (you can describe very basic features of the speaker's voice). See [text-description-to-speech.com](https://www.text-description-to-speech.com/) for examples of how to describe voices. Voices can be defined in `voice_to_speaker.default.yaml`; two example parler-tts voices are included in that file. `parler-tts` is experimental software and is on the slower side. The exact voice will be slightly different with each generation, but should be similar to the basic description.
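
As an illustrative sketch only (the key names here are assumptions, not the project's confirmed schema; the Custom Voices Howto linked above is authoritative), a `voice_to_speaker.default.yaml` entry mapping one cloned XTTS voice and one described parler-tts voice might look roughly like:

```yaml
# Hypothetical sketch -- field names and layout are assumed, verify against
# the shipped voice_to_speaker.default.yaml before use.
tts-1-hd:
  alloy:
    model: xtts                        # XTTS v2 voice cloning
    speaker: voices/alloy.wav          # reference audio clip to clone
  parler:
    model: parler-tts/parler_tts_mini_v0.1
    speaker: A female speaker with a slightly low-pitched, expressive voice.
```

The idea is that each named voice under a model tier (`tts-1`, `tts-1-hd`) points at a backend model plus either a reference clip (XTTS cloning) or a free-text voice description (parler-tts).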
**Step 7: Press `Save` to apply the changes**