diff --git a/docs/getting-started/quick-start/starting-with-openai-compatible.mdx b/docs/getting-started/quick-start/starting-with-openai-compatible.mdx
index aba3f79..5710634 100644
--- a/docs/getting-started/quick-start/starting-with-openai-compatible.mdx
+++ b/docs/getting-started/quick-start/starting-with-openai-compatible.mdx
@@ -20,7 +20,7 @@ There are many servers and tools that expose an OpenAI-compatible API. Here are
 - [Llama.cpp](https://github.com/ggml-org/llama.cpp): Extremely efficient, runs on CPU and GPU
 - [Ollama](https://ollama.com/): Super user-friendly and cross-platform
 - [LM Studio](https://lmstudio.ai/): Rich desktop app for Windows/Mac/Linux
-- [Lemonade (ONNX TurnkeyML)](https://github.com/onnx/turnkeyml): Fast ONNX-based backend, easily runs GGUF and other models locally
+- [Lemonade (ONNX TurnkeyML)](https://github.com/onnx/turnkeyml): Fast ONNX-based backend with NPU/iGPU acceleration
 
 Pick whichever suits your workflow!
 
@@ -41,8 +41,7 @@ Lemonade is a plug-and-play ONNX-based OpenAI-compatible server. Here’s how to
 
 ![Lemonade Server](/images/getting-started/lemonade-server.png)
 
-
-Lemonade works on Linux and Mac too—see [their docs](https://github.com/onnx/turnkeyml) for details.
+See [their docs](https://github.com/onnx/turnkeyml) for details.
 
 ---
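
For anyone reviewing the edited page and wanting to sanity-check one of the servers listed above, here is a minimal sketch of calling an OpenAI-compatible endpoint with the official Python client. The `base_url`, `api_key`, and `model` values are placeholders, not taken from the docs or from any server's defaults; substitute whatever your server reports.

```python
# Minimal sketch: query a local OpenAI-compatible server.
# base_url, api_key, and model below are assumptions / placeholders --
# replace them with the values your server (Llama.cpp, Ollama, LM Studio, Lemonade, ...) uses.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local endpoint; check your server's docs
    api_key="not-needed",                 # most local servers ignore the key, but the client requires one
)

response = client.chat.completions.create(
    model="local-model",  # placeholder model identifier
    messages=[{"role": "user", "content": "Hello from a local OpenAI-compatible server!"}],
)
print(response.choices[0].message.content)
```

Because every server in the list speaks the same `/v1/chat/completions` protocol, the same snippet should work against any of them once the base URL and model name are adjusted.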