docs: update SGLang usage

zhyncs 2024-12-30 14:13:27 +08:00
parent 83dd18eda4
commit 8638950ec2


@@ -227,7 +227,7 @@ We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.c
DeepSeek-V3 can be deployed locally using the following hardware and open-source community software:
1. **DeepSeek-Infer Demo**: We provide a simple and lightweight demo for FP8 and BF16 inference.
2. **SGLang**: Fully supports the DeepSeek-V3 model in both BF16 and FP8 inference modes, with Multi-Token Prediction coming soon.
3. **LMDeploy**: Enables efficient FP8 and BF16 inference for local and cloud deployment.
4. **TensorRT-LLM**: Currently supports BF16 inference and INT4/8 quantization, with FP8 support coming soon.
5. **vLLM**: Supports the DeepSeek-V3 model with FP8 and BF16 modes for tensor parallelism and pipeline parallelism (see the sketch after this list).
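
As a rough illustration of item 5, a vLLM launch might look like the sketch below. This is a hedged example, not official instructions: the exact flags should be checked against the vLLM documentation, and the tensor-parallel size of 8 is an assumption about the available GPUs.

```bash
# Hedged sketch: serve DeepSeek-V3 with vLLM using tensor parallelism.
# The model path is the official Hugging Face repo; --tensor-parallel-size 8
# is an assumption and must match the GPUs actually available.
vllm serve deepseek-ai/DeepSeek-V3 \
    --tensor-parallel-size 8 \
    --trust-remote-code
```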
@@ -292,6 +292,8 @@ torchrun --nnodes 2 --nproc-per-node 8 generate.py --node-rank $RANK --master-ad
Notably, [SGLang v0.4.1](https://github.com/sgl-project/sglang/releases/tag/v0.4.1) fully supports running DeepSeek-V3 on both **NVIDIA and AMD GPUs**, making it a highly versatile and robust solution.
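
As a hedged sketch of a single-node launch (flag names follow common SGLang usage and should be verified against the linked instructions; the port and tensor-parallel size are assumptions):

```bash
# Hedged sketch: launch an SGLang server for DeepSeek-V3 on one 8-GPU node.
# --tp 8 and --port 30000 are assumptions; adjust to your hardware.
python3 -m sglang.launch_server \
    --model-path deepseek-ai/DeepSeek-V3 \
    --tp 8 \
    --trust-remote-code \
    --port 30000
```

Once the server is up, it exposes an OpenAI-compatible endpoint that can be queried, for example, with curl:

```bash
# Query the local server's OpenAI-compatible chat endpoint (sketch).
curl http://localhost:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "deepseek-ai/DeepSeek-V3", "messages": [{"role": "user", "content": "Hello!"}]}'
```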
SGLang also supports [multi-node tensor parallelism](https://github.com/sgl-project/sglang/tree/main/benchmark/deepseek_v3#example-serving-with-2-h208), enabling you to run this model on multiple network-connected machines.
Here are the launch instructions from the SGLang team: https://github.com/sgl-project/sglang/tree/main/benchmark/deepseek_v3
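
As a hedged sketch of the two-node tensor-parallel setup described above (flag names follow common SGLang usage; the address 10.0.0.1:5000 and the tensor-parallel size of 16 are placeholders, and the linked instructions remain the authoritative reference):

```bash
# Hedged sketch: two-node tensor parallelism (TP 16 across 2 x 8 GPUs).
# Run on node 0; 10.0.0.1:5000 is a placeholder for node 0's address.
python3 -m sglang.launch_server --model-path deepseek-ai/DeepSeek-V3 \
    --tp 16 --dist-init-addr 10.0.0.1:5000 --nnodes 2 --node-rank 0 \
    --trust-remote-code

# Run the same command on node 1, changing only the rank.
python3 -m sglang.launch_server --model-path deepseek-ai/DeepSeek-V3 \
    --tp 16 --dist-init-addr 10.0.0.1:5000 --nnodes 2 --node-rank 1 \
    --trust-remote-code
```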
### 6.3 Inference with LMDeploy (recommended)