From b637d412e27bb66b2d2083fae5fda45589ebeb24 Mon Sep 17 00:00:00 2001
From: Yuwen Hu
Date: Fri, 31 May 2024 16:23:30 +0800
Subject: [PATCH 1/4] Add tutorial draft: Local LLM Setup with IPEX-LLM on Intel GPU

---
 docs/tutorial/ipex_llm.md | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)
 create mode 100644 docs/tutorial/ipex_llm.md

diff --git a/docs/tutorial/ipex_llm.md b/docs/tutorial/ipex_llm.md
new file mode 100644
index 0000000..d3ee9a9
--- /dev/null
+++ b/docs/tutorial/ipex_llm.md
@@ -0,0 +1,24 @@
+---
+sidebar_position: 10
+title: "Local LLM Setup with IPEX-LLM on Intel GPU"
+---
+
+:::note
+This guide is verified with Open WebUI setup through [Mannual Installation](../getting-started/index.mdx#manual-installation).
+:::
+
+# Local LLM Setup with IPEX-LLM on Intel GPU
+
+:::info
+[**IPEX-LLM**](https://github.com/intel-analytics/ipex-llm) is a PyTorch library for running LLM on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc A-Series, Flex and Max) with very low latency.
+:::
+
+This tutorial demonstrates how to set up Open WebUI with **IPEX-LLM accelerated Ollama backend hosted on Intel GPU**. By following this guide, you will be able to set up Open WebUI even on a low-cost PC (i.e. only with integrated GPU), and achieve a smooth experience.
+
+## Start Ollama Serve on Intel GPU
+
+Refer to [this guide](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/ollama_quickstart.html) from IPEX-LLM official documentation about how to install and run Ollama serve accelerated by IPEX-LLM on Intel GPU.
+
+:::tip
+If you would like to reach the Ollama serve from another machine, make sure you set or export the environment variable `OLLAMA_HOST=0.0.0.0` before executing the command `ollama serve`.
+:::

From 0b0c908831ef51d409f466dd3327cf2010504a01 Mon Sep 17 00:00:00 2001
From: Yuwen Hu
Date: Fri, 31 May 2024 16:34:13 +0800
Subject: [PATCH 2/4] Update for configuration

---
 docs/tutorial/ipex_llm.md | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/docs/tutorial/ipex_llm.md b/docs/tutorial/ipex_llm.md
index d3ee9a9..2b2e9fc 100644
--- a/docs/tutorial/ipex_llm.md
+++ b/docs/tutorial/ipex_llm.md
@@ -22,3 +22,17 @@ Refer to [this guide](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quicksta
 :::tip
 If you would like to reach the Ollama serve from another machine, make sure you set or export the environment variable `OLLAMA_HOST=0.0.0.0` before executing the command `ollama serve`.
 :::
+
+## Configure Open WebUI
+
+Access the Ollama settings through **Settings -> Connections** in the menu. By default, the **Ollama Base URL** is preset to http://localhost:11434, as illustrated in the snapshot below. To verify the status of the Ollama service connection, click the **Refresh** button located next to the textbox. If the WebUI is unable to establish a connection with the Ollama server, you will see an error message stating `WebUI could not connect to Ollama`.
+
+![Open WebUI Ollama Setting Failure](https://llm-assets.readthedocs.io/en/latest/_images/open_webui_settings_0.png)
+
+If the connection is successful, you will see a message stating `Service Connection Verified`, as illustrated below.
+
+![Open WebUI Ollama Setting Success](https://llm-assets.readthedocs.io/en/latest/_images/open_webui_settings.png)
+
+:::tip
+If you want to use an Ollama server hosted at a different URL, simply update the **Ollama Base URL** to the new URL and press the **Refresh** button to re-confirm the connection to Ollama.
+:::
\ No newline at end of file

From 04ef5b5e615625734d2ef777fc14fa619cbca5bc Mon Sep 17 00:00:00 2001
From: Yuwen Hu
Date: Fri, 31 May 2024 18:13:03 +0800
Subject: [PATCH 3/4] typo fix

---
 docs/tutorial/ipex_llm.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/tutorial/ipex_llm.md b/docs/tutorial/ipex_llm.md
index 2b2e9fc..7138293 100644
--- a/docs/tutorial/ipex_llm.md
+++ b/docs/tutorial/ipex_llm.md
@@ -4,7 +4,7 @@ title: "Local LLM Setup with IPEX-LLM on Intel GPU"
 ---
 
 :::note
-This guide is verified with Open WebUI setup through [Mannual Installation](../getting-started/index.mdx#manual-installation).
+This guide is verified with Open WebUI setup through [Manual Installation](../getting-started/index.mdx#manual-installation).
 :::
 
 # Local LLM Setup with IPEX-LLM on Intel GPU
@@ -13,7 +13,7 @@ This guide is verified with Open WebUI setup through [Mannual Installation](../g
 [**IPEX-LLM**](https://github.com/intel-analytics/ipex-llm) is a PyTorch library for running LLM on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc A-Series, Flex and Max) with very low latency.
 :::
 
-This tutorial demonstrates how to set up Open WebUI with **IPEX-LLM accelerated Ollama backend hosted on Intel GPU**. By following this guide, you will be able to set up Open WebUI even on a low-cost PC (i.e. only with integrated GPU), and achieve a smooth experience.
+This tutorial demonstrates how to set up Open WebUI with **IPEX-LLM accelerated Ollama backend hosted on Intel GPU**. By following this guide, you will be able to set up Open WebUI even on a low-cost PC (i.e. only with integrated GPU) with a smooth experience.
 
 ## Start Ollama Serve on Intel GPU
 

From b4191d93e3b8fd1b979fabc11f20a220b30cde2d Mon Sep 17 00:00:00 2001
From: Yuwen Hu
Date: Mon, 3 Jun 2024 18:04:43 +0800
Subject: [PATCH 4/4] Small typo fixes

---
 docs/tutorial/ipex_llm.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/tutorial/ipex_llm.md b/docs/tutorial/ipex_llm.md
index 7138293..ea1196b 100644
--- a/docs/tutorial/ipex_llm.md
+++ b/docs/tutorial/ipex_llm.md
@@ -20,7 +20,7 @@ This tutorial demonstrates how to set up Open WebUI with **IPEX-LLM accelerated O
 Refer to [this guide](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/ollama_quickstart.html) from IPEX-LLM official documentation about how to install and run Ollama serve accelerated by IPEX-LLM on Intel GPU.
 
 :::tip
-If you would like to reach the Ollama serve from another machine, make sure you set or export the environment variable `OLLAMA_HOST=0.0.0.0` before executing the command `ollama serve`.
+If you would like to reach the Ollama service from another machine, make sure you set or export the environment variable `OLLAMA_HOST=0.0.0.0` before executing the command `ollama serve`.
 :::
 
 ## Configure Open WebUI
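As a quick reference for the `OLLAMA_HOST` tip and the connection check described in the tutorial above, here is a minimal shell sketch. It assumes a Linux host where the IPEX-LLM Ollama quickstart steps have already been completed (so the `ollama` binary is available in the current directory), and `<ollama-host>` is a placeholder for the server's address; the authoritative launch steps remain those in the linked IPEX-LLM guide.

```bash
# On the Intel GPU machine: expose the Ollama service on all network
# interfaces so the machine running Open WebUI can reach it, then start it.
export OLLAMA_HOST=0.0.0.0
./ollama serve

# On the Open WebUI machine: confirm the endpoint is reachable before
# entering it as the Ollama Base URL. Ollama replies "Ollama is running".
curl http://<ollama-host>:11434
```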