diff --git a/docs/features/chat-features/conversation-organization.md b/docs/features/chat-features/conversation-organization.md index 11c9d21..b372eca 100644 --- a/docs/features/chat-features/conversation-organization.md +++ b/docs/features/chat-features/conversation-organization.md @@ -12,7 +12,7 @@ Folders allow you to group related conversations together for quick access and b - **Creating a Folder**: You can create a new folder to store specific conversations. This is useful if you want to keep conversations of a similar topic or purpose together. - **Moving Conversations into Folders**: Conversations can be moved into folders by dragging and dropping them. This allows you to structure your workspace in a way that suits your workflow. - + ### Example Use Case @@ -25,7 +25,7 @@ If you are managing multiple projects, you can create separate folders for each Tags provide an additional layer of organization by allowing you to label conversations with keywords or phrases. - **Adding Tags to Conversations**: Tags can be applied to conversations based on their content or purpose. Tags are flexible and can be added or removed as needed. - + - **Using Tags for Searching**: Tags make it easy to locate specific conversations by using the search feature. You can filter conversations by tags to quickly find those related to specific topics. ### Example Use Case diff --git a/docs/features/evaluation/index.mdx b/docs/features/evaluation/index.mdx index b59ffe0..fbcc990 100644 --- a/docs/features/evaluation/index.mdx +++ b/docs/features/evaluation/index.mdx @@ -59,11 +59,11 @@ For your feedback to affect the leaderboard, you need what’s called a **siblin Here’s a sneak peek at how the Arena Model interface works: - + Need more depth? You can even replicate a [**Chatbot Arena**](https://lmarena.ai/)-style setup! - + ### **2. Normal Interaction** @@ -71,11 +71,11 @@ No need to switch to “arena mode” if you don't want to. 
You can use Open Web For instance, this is how you can rate during a normal interaction: - + And here's an example of setting up a multi-model comparison, similar to an arena: - + --- @@ -85,7 +85,7 @@ After rating, check out the **Leaderboard** under the Admin Panel. This is where This is a sample leaderboard layout: - + ### Topic-Based Reranking @@ -100,7 +100,7 @@ Don't skip this! Tagging is super powerful because it allows you to **re-rank mo Here’s an example of how re-ranking looks: - + --- diff --git a/docs/features/plugin/functions/action.mdx b/docs/features/plugin/functions/action.mdx index 969eeed..e276763 100644 --- a/docs/features/plugin/functions/action.mdx +++ b/docs/features/plugin/functions/action.mdx @@ -14,7 +14,7 @@ An example of a graph visualization Action can be seen in the video below.
diff --git a/docs/getting-started/quick-start/starting-with-ollama.mdx b/docs/getting-started/quick-start/starting-with-ollama.mdx index 88a3619..f1c5faf 100644 --- a/docs/getting-started/quick-start/starting-with-ollama.mdx +++ b/docs/getting-started/quick-start/starting-with-ollama.mdx @@ -27,9 +27,9 @@ To manage your Ollama instance in Open WebUI, follow these steps: Here’s what the management screen looks like: - + - + ## A Quick and Efficient Way to Download Models @@ -38,7 +38,7 @@ If you’re looking for a faster option to get started, you can download models Here’s an example of how it works: - + This method is perfect if you want to skip navigating through the Admin Settings menu and get right to using your models. diff --git a/docs/intro.mdx b/docs/intro.mdx index 72f3b0b..ec3dd99 100644 --- a/docs/intro.mdx +++ b/docs/intro.mdx @@ -24,7 +24,7 @@ import { SponsorList } from "@site/src/components/SponsorList"; [](https://discord.gg/5rJgQTnV4s) [](https://github.com/sponsors/tjbck) - + ## Quick Start with Docker 🐳 diff --git a/docs/pipelines/filters.md b/docs/pipelines/filters.md index c4a98e5..490901a 100644 --- a/docs/pipelines/filters.md +++ b/docs/pipelines/filters.md @@ -9,7 +9,7 @@ Filters are used to perform actions against incoming user messages and outgoing diff --git a/docs/pipelines/index.mdx b/docs/pipelines/index.mdx index 9cfda58..bd99661 100644 --- a/docs/pipelines/index.mdx +++ b/docs/pipelines/index.mdx @@ -5,7 +5,7 @@ title: "⚡ Pipelines" @@ -37,7 +37,7 @@ Welcome to **Pipelines**, an [Open WebUI](https://github.com/open-webui) initiat diff --git a/docs/pipelines/pipes.md b/docs/pipelines/pipes.md index 3c75b75..aa31e7c 100644 --- a/docs/pipelines/pipes.md +++ b/docs/pipelines/pipes.md @@ -8,7 +8,7 @@ Pipes are functions that can be used to perform actions prior to returning LLM m @@ -16,6 +16,6 @@ Pipes that are defined in your WebUI show up as a new model with an "External" d diff --git a/docs/tutorials/images.md b/docs/tutorials/images.md 
index 88e346c..2b71a78 100644 --- a/docs/tutorials/images.md +++ b/docs/tutorials/images.md @@ -160,7 +160,7 @@ Using Azure OpenAI Dall-E directly is unsupported, but you can [set up a LiteLLM ## Using Image Generation - + 1. First, use a text generation model to write a prompt for image generation. 2. After the response has finished, you can click the Picture icon to generate an image. diff --git a/docs/tutorials/integrations/deepseekr1-dynamic.md b/docs/tutorials/integrations/deepseekr1-dynamic.md index e69de29..a94c78b 100644 --- a/docs/tutorials/integrations/deepseekr1-dynamic.md +++ b/docs/tutorials/integrations/deepseekr1-dynamic.md @@ -0,0 +1,169 @@ +--- +sidebar_position: 1 +title: "🦥 Run DeepSeek R1 Dynamic 1.58-bit with Llama.cpp" +--- + +A huge shoutout to **UnslothAI** for their incredible efforts! Thanks to their hard work, we can now run the **full DeepSeek-R1** 671B parameter model in its dynamic 1.58-bit quantized form (compressed to just 131GB) on **Llama.cpp**! And the best part? You no longer have to despair about needing massive enterprise-class GPUs or servers — it’s possible to run this model on your personal machine (albeit slowly for most consumer hardware). + +:::note +The only true **DeepSeek-R1** model on Ollama is the **671B version** available here: [https://ollama.com/library/deepseek-r1:671b](https://ollama.com/library/deepseek-r1:671b). Other versions are **distilled** models. +::: + +This guide focuses on running the **full DeepSeek-R1 Dynamic 1.58-bit quantized model** using **Llama.cpp** integrated with **Open WebUI**. For this tutorial, we’ll demonstrate the steps with an **M4 Max + 128GB RAM** machine. You can adapt the settings to your own configuration. 
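As a rough sanity check on the headline numbers in this tutorial, 671B parameters at an average of about 1.58 bits each lands right around the quoted 131GB. This is only an estimate (the dynamic quantization mixes bit-widths, so 1.58 is an average, and the GGUF files carry some metadata overhead):

```python
params = 671e9          # DeepSeek-R1 parameter count
bits_per_param = 1.58   # average bits per weight in the dynamic quant
size_bytes = params * bits_per_param / 8

print(f"{size_bytes / 1e9:.0f} GB")  # ≈ 133 GB, in line with the quoted 131GB on disk
```

The small gap between this estimate and the actual download size comes from the quantization not being uniformly 1.58-bit across all layers.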
+ +--- + +## Step 1: Install Llama.cpp + +You can either: +- [Download the prebuilt binaries](https://github.com/ggerganov/llama.cpp/releases) +- **Or build it yourself**: Follow the instructions here: [Llama.cpp Build Guide](https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md) + +## Step 2: Download the Model Provided by UnslothAI + +Head over to [Unsloth’s Hugging Face page](https://huggingface.co/unsloth/DeepSeek-R1-GGUF) and download the appropriate **dynamic quantized version** of DeepSeek-R1. For this tutorial, we’ll use the **1.58-bit (131GB)** version, which is highly optimized yet remains surprisingly functional. + + +:::tip +Know your "working directory" — where your Python script or terminal session is running. The model files will download to a subfolder of that directory by default, so be sure you know its path! For example, if you're running the command below in `/Users/yourname/Documents/projects`, your downloaded model will be saved under `/Users/yourname/Documents/projects/DeepSeek-R1-GGUF`. +::: + +To understand more about UnslothAI’s development process and why these dynamic quantized versions are so efficient, check out their blog post: [UnslothAI DeepSeek R1 Dynamic Quantization](https://unsloth.ai/blog/deepseekr1-dynamic). 
+ +Here’s how to download the model programmatically: +```python +# Install Hugging Face dependencies before running this: +# pip install huggingface_hub hf_transfer + +from huggingface_hub import snapshot_download + +snapshot_download( + repo_id = "unsloth/DeepSeek-R1-GGUF", # Specify the Hugging Face repo + local_dir = "DeepSeek-R1-GGUF", # Model will download into this directory + allow_patterns = ["*UD-IQ1_S*"], # Only download the 1.58-bit version +) +``` + +Once the download completes, you’ll find the model files in a directory structure like this: +``` +DeepSeek-R1-GGUF/ +├── DeepSeek-R1-UD-IQ1_S/ +│ ├── DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf +│ ├── DeepSeek-R1-UD-IQ1_S-00002-of-00003.gguf +│ ├── DeepSeek-R1-UD-IQ1_S-00003-of-00003.gguf +``` + +:::info +🛠️ Update paths in the later steps to **match your specific directory structure**. For example, if your script was in `/Users/tim/Downloads`, the full path to the GGUF file would be: +`/Users/tim/Downloads/DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf`. +::: + +## Step 3: Make Sure Open WebUI is Installed and Running + +If you don’t already have **Open WebUI** installed, no worries! It’s a simple setup. Just follow the [Open WebUI documentation here](https://docs.openwebui.com/). Once installed, start the application — we’ll connect it in a later step to interact with the DeepSeek-R1 model. + + +## Step 4: Serve the Model Using Llama.cpp + +Now that the model is downloaded, the next step is to run it using **Llama.cpp’s server mode**. Before you begin: + +1. **Locate the `llama-server` binary.** + If you built from source (as outlined in Step 1), the `llama-server` executable will be located in `llama.cpp/build/bin`. Navigate to this directory by using the `cd` command: + ```bash + cd [path-to-llama-cpp]/llama.cpp/build/bin + ``` + + Replace `[path-to-llama-cpp]` with the location where you cloned or built Llama.cpp. 
For example: + ```bash + cd ~/Documents/workspace/llama.cpp/build/bin + ``` + +2. **Point to your model folder.** + Use the full path to the downloaded GGUF files created in Step 2. When serving the model, specify the first part of the split GGUF files (e.g., `DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf`). + +Here’s the command to start the server: +```bash +./llama-server \ + --model /[your-directory]/DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \ + --port 10000 \ + --ctx-size 1024 \ + --n-gpu-layers 40 +``` + +> 🔑 **Parameters to Customize Based on Your Machine:** +> - **`--model`:** Replace `/[your-directory]/` with the path where the GGUF files were downloaded in Step 2. +> - **`--port`:** The server default is `8080`, but feel free to change it based on your port availability. +> - **`--ctx-size`:** Determines context length (number of tokens). You can increase it if your hardware allows, but be cautious of rising RAM/VRAM usage. +> - **`--n-gpu-layers`:** Set the number of layers you want to offload to your GPU for faster inference. The exact number depends on your GPU’s memory capacity — reference Unsloth’s table for specific recommendations. For CPU-only setups, set it to `0`. + +For example, if your model was downloaded to `/Users/tim/Documents/workspace` and you have an RTX 4090 GPU with 24GB VRAM, your command would look like this: +```bash +./llama-server \ + --model /Users/tim/Documents/workspace/DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \ + --port 10000 \ + --ctx-size 1024 \ + --n-gpu-layers 40 +``` + +Once the server starts, it will host a **local OpenAI-compatible API** endpoint at: +``` +http://127.0.0.1:10000 +``` + +:::info +🖥️ **Llama.cpp Server Running** + + + +After running the command, you should see a message confirming the server is active and listening on port 10000. +::: + +Be sure to **keep this terminal session running**, as it serves the model for all subsequent steps. 
+ +## Step 5: Connect Llama.cpp to Open WebUI + +1. Go to **Admin Settings** in Open WebUI. +2. Navigate to **Connections > OpenAI Connections**. +3. Add the following details for the new connection: + - URL: `http://127.0.0.1:10000/v1` + - API Key: `none` + +:::info +🖥️ **Adding Connection in Open WebUI** + + + +After entering the connection details shown above and saving, Open WebUI will list your Llama.cpp server as an OpenAI-compatible endpoint. +::: + +Once the connection is saved, you can start querying **DeepSeek-R1** directly from Open WebUI! 🎉 + +--- + +## Example: Generating Responses + +You can now use Open WebUI’s chat interface to interact with the **DeepSeek-R1 Dynamic 1.58-bit model**. + +:::info +🖥️ **DeepSeek-R1 Response in Open WebUI** + + +::: + +--- + +## Notes and Considerations + +- **Performance:** + Running a massive 131GB model like DeepSeek-R1 on personal hardware will be **slow**. Even with our M4 Max (128GB RAM), inference speeds were modest. But the fact that it works at all is a testament to UnslothAI’s optimizations. + +- **VRAM/Memory Requirements:** + Ensure sufficient VRAM and system RAM for optimal performance. With low-end GPUs or CPU-only setups, expect slower speeds (but it’s still doable!). + +--- + +Thanks to **UnslothAI** and **Llama.cpp**, running one of the largest open-source reasoning models, **DeepSeek-R1** (1.58-bit version), is finally accessible to individuals. While it’s challenging to run such models on consumer hardware, the ability to do so without massive computational infrastructure is a significant technological milestone. + +⭐ Big thanks to the community for pushing the boundaries of open AI research. + +Happy experimenting! 🚀 diff --git a/docs/tutorials/tips/contributing-tutorial.md b/docs/tutorials/tips/contributing-tutorial.md index b620411..7a169cd 100644 --- a/docs/tutorials/tips/contributing-tutorial.md +++ b/docs/tutorials/tips/contributing-tutorial.md @@ -60,7 +60,7 @@ b. 
**Modify `docusaurus.config.ts` to Use Environment Variables** const config: Config = { title: "Open WebUI", tagline: "ChatGPT-Style WebUI for LLMs (Formerly Ollama WebUI)", - favicon: "img/favicon.png", + favicon: "images/favicon.png", url: process.env.SITE_URL || "https://openwebui.com", baseUrl: process.env.BASE_URL || "/", ... diff --git a/docs/tutorials/web_search.md b/docs/tutorials/web_search.md index 931c8b4..32ac31a 100644 --- a/docs/tutorials/web_search.md +++ b/docs/tutorials/web_search.md @@ -348,7 +348,7 @@ docker exec -it open-webui curl http://host.docker.internal:8080/search?q=this+i 5. Adjust the `Search Result Count` and `Concurrent Requests` values accordingly 6. Save changes - + ## 5. Using Web Search in a Chat @@ -356,7 +356,7 @@ To access Web Search, Click on the + next to the message input field. Here you can toggle Web Search On/Off. - + By following these steps, you will have successfully set up SearXNG with Open WebUI, enabling you to perform web searches using the SearXNG engine. @@ -379,14 +379,14 @@ This is enabled on a per session basis eg. reloading the page, changing to anoth 7. Fill `Google PSE API Key` with the `API key` and `Google PSE Engine Id` (# 4) 8. Click `Save` - + #### Note You have to enable `Web search` in the prompt field, using plus (`+`) button. Search the web ;-) - + ## Brave API @@ -444,13 +444,13 @@ services: 6. [Optional] Enter the `SearchApi engine` name you want to query. Example, `google`, `bing`, `baidu`, `google_news`, `bing_news`, `google_videos`, `google_scholar` and `google_patents.` By default, it is set to `google`. 7. Click `Save`. - + #### Note You have to enable `Web search` in the prompt field, using plus (`+`) button to search the web using [SearchApi](https://www.searchapi.io/) engines. 
- + ## Kagi API diff --git a/docusaurus.config.ts b/docusaurus.config.ts index ad581fe..8efa918 100644 --- a/docusaurus.config.ts +++ b/docusaurus.config.ts @@ -6,7 +6,7 @@ import { themes as prismThemes } from "prism-react-renderer"; const config: Config = { title: "Open WebUI", tagline: "ChatGPT-Style WebUI for LLMs (Formerly Ollama WebUI)", - favicon: "img/favicon.png", + favicon: "images/favicon.png", // Set the production url of your site here url: "https://openwebui.com", @@ -65,12 +65,12 @@ const config: Config = { themeConfig: { // Replace with your project's social card - // image: "img/docusaurus-social-card.jpg", + // image: "images/docusaurus-social-card.jpg", navbar: { title: "Open WebUI", logo: { - src: "img/logo.png", - srcDark: "img/logo-dark.png", + src: "images/logo.png", + srcDark: "images/logo-dark.png", }, items: [ // { @@ -108,7 +108,7 @@ const config: Config = { }, footer: { logo: { - src: "img/logo-dark.png", + src: "images/logo-dark.png", height: 100, }, style: "light", diff --git a/static/img/demo.gif b/static/images/demo.gif similarity index 100% rename from static/img/demo.gif rename to static/images/demo.gif diff --git a/static/img/docusaurus-social-card.jpg b/static/images/docusaurus-social-card.jpg similarity index 100% rename from static/img/docusaurus-social-card.jpg rename to static/images/docusaurus-social-card.jpg diff --git a/static/img/docusaurus.png b/static/images/docusaurus.png similarity index 100% rename from static/img/docusaurus.png rename to static/images/docusaurus.png diff --git a/static/img/enable_web_search.png b/static/images/enable_web_search.png similarity index 100% rename from static/img/enable_web_search.png rename to static/images/enable_web_search.png diff --git a/static/img/evaluation/arena-many.png b/static/images/evaluation/arena-many.png similarity index 100% rename from static/img/evaluation/arena-many.png rename to static/images/evaluation/arena-many.png diff --git a/static/img/evaluation/arena.png 
b/static/images/evaluation/arena.png similarity index 100% rename from static/img/evaluation/arena.png rename to static/images/evaluation/arena.png diff --git a/static/img/evaluation/leaderboard-reranked.png b/static/images/evaluation/leaderboard-reranked.png similarity index 100% rename from static/img/evaluation/leaderboard-reranked.png rename to static/images/evaluation/leaderboard-reranked.png diff --git a/static/img/evaluation/leaderboard.png b/static/images/evaluation/leaderboard.png similarity index 100% rename from static/img/evaluation/leaderboard.png rename to static/images/evaluation/leaderboard.png diff --git a/static/img/evaluation/normal-many.png b/static/images/evaluation/normal-many.png similarity index 100% rename from static/img/evaluation/normal-many.png rename to static/images/evaluation/normal-many.png diff --git a/static/img/evaluation/normal.png b/static/images/evaluation/normal.png similarity index 100% rename from static/img/evaluation/normal.png rename to static/images/evaluation/normal.png diff --git a/static/img/evaluation/rate.png b/static/images/evaluation/rate.png similarity index 100% rename from static/img/evaluation/rate.png rename to static/images/evaluation/rate.png diff --git a/static/img/favicon.ico b/static/images/favicon.ico similarity index 100% rename from static/img/favicon.ico rename to static/images/favicon.ico diff --git a/static/img/favicon.png b/static/images/favicon.png similarity index 100% rename from static/img/favicon.png rename to static/images/favicon.png diff --git a/static/img/folder-demo.gif b/static/images/folder-demo.gif similarity index 100% rename from static/img/folder-demo.gif rename to static/images/folder-demo.gif diff --git a/static/img/getting-started/quick-start/manage-modal-ollama.png b/static/images/getting-started/quick-start/manage-modal-ollama.png similarity index 100% rename from static/img/getting-started/quick-start/manage-modal-ollama.png rename to 
static/images/getting-started/quick-start/manage-modal-ollama.png diff --git a/static/img/getting-started/quick-start/manage-ollama.png b/static/images/getting-started/quick-start/manage-ollama.png similarity index 100% rename from static/img/getting-started/quick-start/manage-ollama.png rename to static/images/getting-started/quick-start/manage-ollama.png diff --git a/static/img/getting-started/quick-start/selector-ollama.png b/static/images/getting-started/quick-start/selector-ollama.png similarity index 100% rename from static/img/getting-started/quick-start/selector-ollama.png rename to static/images/getting-started/quick-start/selector-ollama.png diff --git a/static/img/logo-dark.png b/static/images/logo-dark.png similarity index 100% rename from static/img/logo-dark.png rename to static/images/logo-dark.png diff --git a/static/img/logo.png b/static/images/logo.png similarity index 100% rename from static/img/logo.png rename to static/images/logo.png diff --git a/static/img/logo.svg b/static/images/logo.svg similarity index 100% rename from static/img/logo.svg rename to static/images/logo.svg diff --git a/static/img/migration_litellm_config.png b/static/images/migration_litellm_config.png similarity index 100% rename from static/img/migration_litellm_config.png rename to static/images/migration_litellm_config.png diff --git a/static/img/oi/cat.png b/static/images/oi/cat.png similarity index 100% rename from static/img/oi/cat.png rename to static/images/oi/cat.png diff --git a/static/img/pipelines/community-functions.png b/static/images/pipelines/community-functions.png similarity index 100% rename from static/img/pipelines/community-functions.png rename to static/images/pipelines/community-functions.png diff --git a/static/img/pipelines/filters.png b/static/images/pipelines/filters.png similarity index 100% rename from static/img/pipelines/filters.png rename to static/images/pipelines/filters.png diff --git a/static/img/pipelines/graph-viz-action.gif 
b/static/images/pipelines/graph-viz-action.gif similarity index 100% rename from static/img/pipelines/graph-viz-action.gif rename to static/images/pipelines/graph-viz-action.gif diff --git a/static/img/pipelines/header.png b/static/images/pipelines/header.png similarity index 100% rename from static/img/pipelines/header.png rename to static/images/pipelines/header.png diff --git a/static/img/pipelines/pipe-model-example.png b/static/images/pipelines/pipe-model-example.png similarity index 100% rename from static/img/pipelines/pipe-model-example.png rename to static/images/pipelines/pipe-model-example.png diff --git a/static/img/pipelines/pipes.png b/static/images/pipelines/pipes.png similarity index 100% rename from static/img/pipelines/pipes.png rename to static/images/pipelines/pipes.png diff --git a/static/img/pipelines/workflow.png b/static/images/pipelines/workflow.png similarity index 100% rename from static/img/pipelines/workflow.png rename to static/images/pipelines/workflow.png diff --git a/static/img/tag-demo.gif b/static/images/tag-demo.gif similarity index 100% rename from static/img/tag-demo.gif rename to static/images/tag-demo.gif diff --git a/static/img/tutorial_google_pse1.png b/static/images/tutorial_google_pse1.png similarity index 100% rename from static/img/tutorial_google_pse1.png rename to static/images/tutorial_google_pse1.png diff --git a/static/img/tutorial_google_pse2.png b/static/images/tutorial_google_pse2.png similarity index 100% rename from static/img/tutorial_google_pse2.png rename to static/images/tutorial_google_pse2.png diff --git a/static/img/tutorial_image_generation.png b/static/images/tutorial_image_generation.png similarity index 100% rename from static/img/tutorial_image_generation.png rename to static/images/tutorial_image_generation.png diff --git a/static/img/tutorial_langfuse.png b/static/images/tutorial_langfuse.png similarity index 100% rename from static/img/tutorial_langfuse.png rename to 
static/images/tutorial_langfuse.png diff --git a/static/img/tutorial_litellm_gemini.png b/static/images/tutorial_litellm_gemini.png similarity index 100% rename from static/img/tutorial_litellm_gemini.png rename to static/images/tutorial_litellm_gemini.png diff --git a/static/img/tutorial_litellm_ollama.png b/static/images/tutorial_litellm_ollama.png similarity index 100% rename from static/img/tutorial_litellm_ollama.png rename to static/images/tutorial_litellm_ollama.png diff --git a/static/img/tutorial_model_filter.png b/static/images/tutorial_model_filter.png similarity index 100% rename from static/img/tutorial_model_filter.png rename to static/images/tutorial_model_filter.png diff --git a/static/img/tutorial_searchapi_search.png b/static/images/tutorial_searchapi_search.png similarity index 100% rename from static/img/tutorial_searchapi_search.png rename to static/images/tutorial_searchapi_search.png diff --git a/static/img/tutorial_searxng_config.png b/static/images/tutorial_searxng_config.png similarity index 100% rename from static/img/tutorial_searxng_config.png rename to static/images/tutorial_searxng_config.png diff --git a/static/images/tutorials/deepseek/connection.png b/static/images/tutorials/deepseek/connection.png new file mode 100644 index 0000000..9b4490c Binary files /dev/null and b/static/images/tutorials/deepseek/connection.png differ diff --git a/static/images/tutorials/deepseek/response.png b/static/images/tutorials/deepseek/response.png new file mode 100644 index 0000000..fd1c105 Binary files /dev/null and b/static/images/tutorials/deepseek/response.png differ diff --git a/static/images/tutorials/deepseek/serve.png b/static/images/tutorials/deepseek/serve.png new file mode 100644 index 0000000..d8c12dd Binary files /dev/null and b/static/images/tutorials/deepseek/serve.png differ diff --git a/static/img/undraw_docusaurus_mountain.svg b/static/images/undraw_docusaurus_mountain.svg similarity index 100% rename from 
static/img/undraw_docusaurus_mountain.svg rename to static/images/undraw_docusaurus_mountain.svg diff --git a/static/img/undraw_docusaurus_react.svg b/static/images/undraw_docusaurus_react.svg similarity index 100% rename from static/img/undraw_docusaurus_react.svg rename to static/images/undraw_docusaurus_react.svg diff --git a/static/img/undraw_docusaurus_tree.svg b/static/images/undraw_docusaurus_tree.svg similarity index 100% rename from static/img/undraw_docusaurus_tree.svg rename to static/images/undraw_docusaurus_tree.svg diff --git a/static/img/web_search_toggle.png b/static/images/web_search_toggle.png similarity index 100% rename from static/img/web_search_toggle.png rename to static/images/web_search_toggle.png