diff --git a/docs/tutorial/litellm.md b/docs/tutorial/litellm.md
index f6c28c0..60a972e 100644
--- a/docs/tutorial/litellm.md
+++ b/docs/tutorial/litellm.md
@@ -2,12 +2,6 @@
 
 **LiteLLM** supports a variety of APIs, both OpenAI-compatible and others. To integrate a new API model, follow these instructions:
 
-*Ollama API (from inside Docker):*
-![LiteLLM Config Ollama](/img/tutorial_litellm_ollama.png)
-
-*Gemini API (MakerSuite/AI Studio):*
-![LiteLLM Config Gemini](/img/tutorial_litellm_gemini.png)
-
 1. Go to the Settings > Models > LiteLLM model management interface.
 2. In 'Simple' mode, you will only see the option to enter a **Model**.
 3. For additional configuration options, click on the 'Simple' toggle to switch to 'Advanced' mode. Here you can enter:
@@ -18,4 +12,12 @@
 
 4. After entering all the required information, click the '+' button to add the new model to LiteLLM.
 
+## Examples
+
+*Ollama API (from inside Docker):*
+![LiteLLM Config Ollama](/img/tutorial_litellm_ollama.png)
+
+*Gemini API (MakerSuite/AI Studio):*
+![LiteLLM Config Gemini](/img/tutorial_litellm_gemini.png)
+
 For more information on the specific providers and advanced settings, consult the [LiteLLM Providers Documentation](https://litellm.vercel.app/docs/providers).
\ No newline at end of file