From 5829859e1efbfc0504ed2fb89f83fdb7e80dad02 Mon Sep 17 00:00:00 2001
From: Timothy Jaeryang Baek
Date: Fri, 31 Jan 2025 01:50:18 -0800
Subject: [PATCH] Update deepseekr1-dynamic.md

---
 docs/tutorials/integrations/deepseekr1-dynamic.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/tutorials/integrations/deepseekr1-dynamic.md b/docs/tutorials/integrations/deepseekr1-dynamic.md
index 6dc40b9..a0fb7f8 100644
--- a/docs/tutorials/integrations/deepseekr1-dynamic.md
+++ b/docs/tutorials/integrations/deepseekr1-dynamic.md
@@ -100,7 +100,7 @@ Here’s the command to start the server:
 - **`--n-gpu-layers`:** Set the number of layers you want to offload to your GPU for faster inference. The exact number depends on your GPU’s memory capacity — reference Unsloth’s table for specific recommendations.
 :::
 
-For example, if your model was downloaded to `/Users/tim/Documents/workspace` and you have an RTX 4090 GPU with 24GB VRAM, your command would look like this:
+For example, if your model was downloaded to `/Users/tim/Documents/workspace`, your command would look like this:
 
 ```bash
 ./llama-server \
     --model /Users/tim/Documents/workspace/DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \