diff --git a/docs/troubleshooting/rag.mdx b/docs/troubleshooting/rag.mdx
index 0090076..4602698 100644
--- a/docs/troubleshooting/rag.mdx
+++ b/docs/troubleshooting/rag.mdx
@@ -78,6 +78,41 @@ Bad embeddings = bad retrieval. If the vector representation of your content is
 
 ---
 
+### 5. ❌ 400: 'NoneType' object has no attribute 'encode'
+
+This error points to a misconfigured or missing embedding model. When Open WebUI tries to create embeddings but no valid model is actually loaded, there is nothing to call `.encode()` on, and the result is this cryptic error (a minimal sketch of how it arises is shown further down this page).
+
+💥 Cause:
+- The embedding model isn’t configured properly.
+- It may not have finished downloading.
+- If you’re using an external embedding model, it may be unreachable.
+
+✅ Solution:
+
+- Go to: **Admin Settings > Documents > Embedding Model**
+- Save the embedding model again, even if it’s already selected. This forces a recheck/re-download.
+- If you’re using a remote/external embedding service, make sure it’s running and reachable from Open WebUI.
+
+📌 Tip: After fixing the configuration, try re-embedding a document and verify that no error appears in the logs.
+
+---
+
 ## 🧪 Pro Tip: Test with GPT-4o or GPT-4
 
 If you’re not sure whether the issue is with retrieval, token limits, or embedding—try using GPT-4o temporarily (e.g., via OpenAI API). If the results suddenly become more accurate, it's a strong signal that your local model’s context limit (2048 by default in Ollama) is the bottleneck.
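+
+For reference, here’s a minimal, illustrative Python sketch (not Open WebUI’s actual code) of why the error in item 5 above occurs: when the embedding model never loads, the variable that should hold it stays `None`, and the later `.encode()` call raises exactly the `AttributeError` that surfaces in the 400 response.
+
+```python
+# Illustrative sketch only; this is not Open WebUI's actual code.
+embedding_model = None  # e.g. the download failed or no model was ever configured
+
+def embed(texts):
+    # A sentence-transformers-style call; it only works if a real model object is present.
+    return embedding_model.encode(texts)
+
+try:
+    embed(["hello world"])
+except AttributeError as e:
+    print(e)  # 'NoneType' object has no attribute 'encode'
+```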