From 8638950ec21665883e9c1c899cf863beca4d7ef6 Mon Sep 17 00:00:00 2001
From: zhyncs
Date: Mon, 30 Dec 2024 14:13:27 +0800
Subject: [PATCH 1/4] docs: update SGLang usage

---
 README.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index dbf88ba..937d39b 100644
--- a/README.md
+++ b/README.md
@@ -227,7 +227,7 @@ We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.c
 DeepSeek-V3 can be deployed locally using the following hardware and open-source community software:
 
 1. **DeepSeek-Infer Demo**: We provide a simple and lightweight demo for FP8 and BF16 inference.
-2. **SGLang**: Fully support the DeepSeek-V3 model in both BF16 and FP8 inference modes.
+2. **SGLang**: Fully support the DeepSeek-V3 model in both BF16 and FP8 inference modes, with Multi-Token Prediction coming soon.
 3. **LMDeploy**: Enables efficient FP8 and BF16 inference for local and cloud deployment.
 4. **TensorRT-LLM**: Currently supports BF16 inference and INT4/8 quantization, with FP8 support coming soon.
 5. **vLLM**: Support DeekSeek-V3 model with FP8 and BF16 modes for tensor parallelism and pipeline parallelism.
@@ -292,6 +292,8 @@ torchrun --nnodes 2 --nproc-per-node 8 generate.py --node-rank $RANK --master-ad
 
 Notably, [SGLang v0.4.1](https://github.com/sgl-project/sglang/releases/tag/v0.4.1) fully supports running DeepSeek-V3 on both **NVIDIA and AMD GPUs**, making it a highly versatile and robust solution.
 
+SGLang also supports [multi-node tensor parallelism](https://github.com/sgl-project/sglang/tree/main/benchmark/deepseek_v3#example-serving-with-2-h208), enabling you to run this model on multiple network-connected machines.
+
 Here are the launch instructions from the SGLang team: https://github.com/sgl-project/sglang/tree/main/benchmark/deepseek_v3
 
 ### 6.3 Inference with LMDeploy (recommended)

From a1edf4138eb944fe303088272ae0242d90241356 Mon Sep 17 00:00:00 2001
From: zhyncs
Date: Mon, 30 Dec 2024 14:18:00 +0800
Subject: [PATCH 2/4] upd

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 937d39b..9993633 100644
--- a/README.md
+++ b/README.md
@@ -227,7 +227,7 @@ We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.c
 DeepSeek-V3 can be deployed locally using the following hardware and open-source community software:
 
 1. **DeepSeek-Infer Demo**: We provide a simple and lightweight demo for FP8 and BF16 inference.
-2. **SGLang**: Fully support the DeepSeek-V3 model in both BF16 and FP8 inference modes, with Multi-Token Prediction coming soon.
+2. **SGLang**: Fully support the DeepSeek-V3 model in both BF16 and FP8 inference modes, with Multi-Token Prediction [coming soon](https://github.com/sgl-project/sglang/issues/2591).
 3. **LMDeploy**: Enables efficient FP8 and BF16 inference for local and cloud deployment.
 4. **TensorRT-LLM**: Currently supports BF16 inference and INT4/8 quantization, with FP8 support coming soon.
 5. **vLLM**: Support DeekSeek-V3 model with FP8 and BF16 modes for tensor parallelism and pipeline parallelism.
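For readers following this series: a minimal sketch of the single-node SGLang launch that the patched README points to. The authoritative command lives in the linked benchmark/deepseek_v3 instructions; the flags below (`--model-path`, `--tp`, `--trust-remote-code`, `--port`) follow SGLang v0.4.x conventions and should be verified against those docs.

```bash
# Sketch: serve DeepSeek-V3 with SGLang on a single 8-GPU node.
# Assumes `pip install "sglang[all]"`; flags per the linked
# benchmark/deepseek_v3 instructions -- verify against SGLang v0.4.x docs.
python3 -m sglang.launch_server \
  --model-path deepseek-ai/DeepSeek-V3 \
  --tp 8 \
  --trust-remote-code \
  --port 30000
```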
From 2fc98d1cdfc393b08360ed793df653c1e4b6a6f0 Mon Sep 17 00:00:00 2001
From: zhyncs
Date: Mon, 30 Dec 2024 14:21:00 +0800
Subject: [PATCH 3/4] upd

---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index 9993633..4fee2fd 100644
--- a/README.md
+++ b/README.md
@@ -294,6 +294,8 @@ Notably, [SGLang v0.4.1](https://github.com/sgl-project/sglang/releases/tag/v0.4
 
 SGLang also supports [multi-node tensor parallelism](https://github.com/sgl-project/sglang/tree/main/benchmark/deepseek_v3#example-serving-with-2-h208), enabling you to run this model on multiple network-connected machines.
 
+Multi-Token Prediction (MTP) is in development, and progress can be tracked in the [optimization plan](https://github.com/sgl-project/sglang/issues/2591).
+
 Here are the launch instructions from the SGLang team: https://github.com/sgl-project/sglang/tree/main/benchmark/deepseek_v3
 
 ### 6.3 Inference with LMDeploy (recommended)

From 68d0061937df406804238be2621c6abeefccf82b Mon Sep 17 00:00:00 2001
From: zhyncs
Date: Mon, 30 Dec 2024 14:25:28 +0800
Subject: [PATCH 4/4] upd

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 4fee2fd..1b8a59c 100644
--- a/README.md
+++ b/README.md
@@ -288,7 +288,7 @@ torchrun --nnodes 2 --nproc-per-node 8 generate.py --node-rank $RANK --master-ad
 
 ### 6.2 Inference with SGLang (recommended)
 
-[SGLang](https://github.com/sgl-project/sglang) currently supports MLA optimizations, FP8 (W8A8), FP8 KV Cache, and Torch Compile, delivering state-of-the-art latency and throughput performance among open-source frameworks.
+[SGLang](https://github.com/sgl-project/sglang) currently supports [MLA optimizations](https://lmsys.org/blog/2024-09-04-sglang-v0-3/#deepseek-multi-head-latent-attention-mla-throughput-optimizations), [DP Attention](https://lmsys.org/blog/2024-12-04-sglang-v0-4/#data-parallelism-attention-for-deepseek-models), FP8 (W8A8), FP8 KV Cache, and Torch Compile, delivering state-of-the-art latency and throughput performance among open-source frameworks.
 
 Notably, [SGLang v0.4.1](https://github.com/sgl-project/sglang/releases/tag/v0.4.1) fully supports running DeepSeek-V3 on both **NVIDIA and AMD GPUs**, making it a highly versatile and robust solution.
 
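For the multi-node tensor parallelism documented in patches 1 and 3, a sketch of the corresponding two-node launch. The rendezvous address and port are placeholders, and the flag names (`--dist-init-addr`, `--nnodes`, `--node-rank`) are assumed from SGLang v0.4.x; the authoritative commands are in the linked "serving with 2 H20" example.

```bash
# Sketch: two-node tensor-parallel serving (TP=16 across 2 x 8 GPUs).
# 10.0.0.1:5000 is a placeholder rendezvous address on node 0; run the
# same command on the second node with --node-rank 1.
python3 -m sglang.launch_server \
  --model-path deepseek-ai/DeepSeek-V3 \
  --tp 16 \
  --dist-init-addr 10.0.0.1:5000 \
  --nnodes 2 \
  --node-rank 0 \
  --trust-remote-code
```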