# LiteLLM Config

**LiteLLM** supports a variety of APIs, both OpenAI-compatible and otherwise. To integrate a new API model, follow these steps:

1. Go to the **Settings > Models > LiteLLM** model management interface.

2. In 'Simple' mode, you will only see the option to enter a **Model**.

3. For additional configuration options, click on the 'Simple' toggle to switch to 'Advanced' mode. Here you can enter:

   - **Model Name**: The name of the model as you want it to appear in the models list.
   - **API Base URL**: The base URL for your API provider. This field can usually be left blank unless your provider specifies a custom endpoint URL.
   - **API Key**: Your unique API key. Replace with the key provided by your API provider.
   - **API RPM**: The allowed requests per minute for your API. Replace with the appropriate value for your API plan.

4. After entering all the required information, click the '+' button to add the new model to LiteLLM.
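
Behind the scenes, each model added this way corresponds to an entry in a LiteLLM `config.yaml` `model_list`. A minimal sketch of how the four fields above might map onto such an entry (the model identifier, endpoint, and key below are placeholders, not defaults):

```yaml
model_list:
  - model_name: my-model                    # Model Name: how it appears in the models list
    litellm_params:
      model: openai/my-model                # provider/model identifier understood by LiteLLM
      api_base: https://api.example.com/v1  # API Base URL (often optional)
      api_key: "sk-..."                     # API Key from your provider
      rpm: 60                               # API RPM: requests per minute allowed by your plan
```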

## Examples

*Ollama API (from inside Docker):*

![LiteLLM Config Ollama](/img/tutorial_litellm_ollama.png)

*Gemini API (MakerSuite/AI Studio):*

![LiteLLM Config Gemini](/img/tutorial_litellm_gemini.png)

For more information on the specific providers and advanced settings, consult the [LiteLLM Providers Documentation](https://litellm.vercel.app/docs/providers).