Shangyan Zhou
65e2a700f0
Merge pull request #135 from deepseek-ai/add-iw-fork
...
Add Infrawaves' fork to README.
2025-04-27 10:51:18 +08:00
Shangyan Zhou
1a0c8f6425
Add Infrawaves' fork to README.
2025-04-27 10:37:30 +08:00
Chenggang Zhao
007fcfcf97
Merge pull request #130 from deepseek-ai/trmt/internode_multi_qp
...
Support multi-QP for normal kernels
2025-04-22 13:04:42 +08:00
Shangyan Zhou
e255d57bef
Use put_nbi_warp.
2025-04-22 12:29:46 +08:00
Shangyan Zhou
3b1045db43
Fix the performance data.
2025-04-22 11:23:42 +08:00
Chenggang Zhao
edbb1bc3ff
Several code lints
2025-04-22 10:52:10 +08:00
Shangyan Zhou
3e54b78fd7
Normal kernels always use IBGDA mode.
2025-04-22 10:36:24 +08:00
Shangyan Zhou
20b2aaaf9e
Refactor some code.
2025-04-22 10:22:30 +08:00
moningchen
c07fdd197c
Merge branch 'trmt/internode_multi_qp' of github.com:deepseek-ai/DeepEP into trmt/internode_multi_qp
2025-04-21 21:31:49 +08:00
moningchen
e0eaaf94fb
Add post-optimization internode performance data to the README
2025-04-21 21:30:08 +08:00
Shangyan Zhou
e2c578485c
Revert ibgda_device.cuh and remove some comments.
2025-04-21 17:44:32 +08:00
moningchen
5ab80c28f3
In the internode normal kernel, when NVSHMEM IBRC is used for RDMA data transmission, a single QP handles all transfers between two GPUs, which limits kernel performance with dual-port NICs and in RoCE networks.
...
In our optimized internode normal kernel, we use multiple QPs for data transmission between two GPUs, assigning a different QP to each channel. We also switched the transport from IBRC to IBGDA.
With these optimizations, the internode normal kernel achieves optimal performance in both H800 and H20 environments, with RDMA transmission performance approaching the physical network limit. Using the current default measurement method, RDMA bandwidth can exceed 60 GB/s in 4-node H800 and H20 environments (see the sketch below).
2025-04-21 15:50:39 +08:00
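A minimal sketch of the channel-to-QP mapping described in the commit above, using hypothetical names (channel_to_qp, num_qps_per_peer) rather than DeepEP's actual internals: giving each channel its own queue pair keeps concurrent channels from serializing behind a single QP.

```cuda
// Hypothetical sketch, not DeepEP's actual code: map each communication
// channel to its own queue pair so transfers on different channels can
// proceed in parallel (and spread across both NIC ports under RoCE).
__device__ __forceinline__ int channel_to_qp(int channel_id, int num_qps_per_peer) {
    // One QP per channel; wrap around if there are fewer QPs than channels.
    return channel_id % num_qps_per_peer;
}
```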
Shangyan Zhou
a84a24808f
Merge pull request #124 from wplf/patch-1
...
Fix typo in nvshmem.patch
2025-04-16 10:57:31 +08:00
李金梁
a2ccc95d78
Fix typo in nvshmem.patch
2025-04-16 10:30:38 +08:00
Chenggang Zhao
a0c69317ab
Merge pull request #118 from andylin-hao/main
...
Fix test combine args
2025-04-14 15:51:30 +08:00
Shangyan Zhou
b9bb2bbaf6
Merge pull request #119 from phantom5125/patch-1
...
Fix typo in nvshmem.patch
2025-04-14 09:29:46 +08:00
GreatHato
42f617088f
Fix typo in nvshmem.patch
2025-04-13 00:14:44 +08:00
Hao Lin
23c54150ba
Fix test combine args
...
Signed-off-by: Hao Lin <linhaomails@gmail.com>
2025-04-11 18:21:09 +08:00
Chenggang Zhao
8a0ca8e2ec
Merge pull request #116 from alpha-baby/fix-test-result-not-output
...
fix: results not output on some Linux systems
2025-04-11 13:23:37 +08:00
fujianhao.fjh
0f80da8458
fix: results not output on some Linux systems
2025-04-10 18:18:30 +08:00
Chenggang Zhao
42494864ba
Remove useless control metadata for low-latency combine
2025-04-07 09:55:39 +08:00
Chenggang Zhao
2a0b3d7a5d
Merge pull request #108 from fzyzcjy/patch-2
...
Super tiny fix: shape typo
2025-04-03 12:18:28 +08:00
fzyzcjy
218c5a1f96
Update buffer.py
2025-04-03 10:57:45 +08:00
Chenggang Zhao
26fa72d80f
Fix zero-copy mode tests
2025-03-28 16:49:33 +08:00
Chenggang Zhao
c4d12b4f8f
Fix compilation
2025-03-28 16:45:10 +08:00
Chenggang Zhao
dcf46f1c26
Merge pull request #96 from songhexiang/adjust_kNumThreads_of_notify_dispatch
...
Adjust kNumThreads of notify_dispatch
2025-03-28 16:42:21 +08:00
songhexiang
4dd1e68ac8
For the SMs that compute metadata in notify_dispatch, each warp in the SM computes the metadata of one channel. The default configuration is 8 warps for 10 channels, which requires two loop iterations. The number of warps could instead be set to the number of channels so that a single pass is enough (see the sketch below).
2025-03-28 06:43:29 +00:00
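A minimal sketch of the warp-per-channel idea in the commit above, with hypothetical kernel and parameter names (not DeepEP's actual notify_dispatch): when the block launches as many warps as there are channels, the warp-strided loop over channels collapses to a single iteration per warp.

```cuda
// Hypothetical sketch: each warp computes the metadata for one channel.
// With 8 warps and 10 channels, two warps run the loop body twice;
// launching num_channels warps (kNumThreads = num_channels * 32) lets
// every warp handle exactly one channel in a single pass.
__global__ void notify_dispatch_sketch(int num_channels) {
    const int warp_id = static_cast<int>(threadIdx.x) / 32;
    const int num_warps = static_cast<int>(blockDim.x) / 32;
    for (int channel = warp_id; channel < num_channels; channel += num_warps) {
        // ... per-channel metadata computed cooperatively by this warp ...
    }
}
```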
Chenggang Zhao
e130cc6e7d
Remove NVLink low-latency plan
2025-03-27 17:15:01 +08:00
Chenggang Zhao
cbd92fd0fc
Update README
2025-03-27 15:57:59 +08:00
Chenggang Zhao
ffc39ba084
Stronger acquire scope for low-latency kernels
2025-03-27 09:30:36 +08:00
Chenggang Zhao
7d52ad7248
Merge pull request #89 from fzyzcjy/patch-1
...
Super tiny fix: typo
2025-03-25 09:28:44 +08:00
Chenggang Zhao
ae0eafd2be
Remove confusing comments
2025-03-25 09:27:34 +08:00
fzyzcjy
36b5c27993
Update buffer.py
2025-03-25 09:12:36 +08:00
Chenggang Zhao
c4b8ffc37c
Merge pull request #79 from deepseek-ai/zero-copy-combine
...
Support zero-copy for low-latency combine
2025-03-18 15:46:45 +08:00
Chenggang Zhao
66465476ae
Support zero-copy for low-latency combine
2025-03-18 15:44:26 +08:00
Chenggang Zhao
dcaf73e5ff
Support zero-copy for low-latency combine
2025-03-18 15:41:50 +08:00
Chenggang Zhao
82dcf48fd3
Fix bugs for intranode EP kernels
2025-03-14 16:09:23 +08:00
Chenggang Zhao
043fa5fa99
Merge pull request #73 from deepseek-ai/p2p-signal
...
Low-latency kernels use RDMA atomics to support AR
2025-03-14 11:55:17 +08:00
Shangyan Zhou
38cdaf390c
Fix style.
2025-03-14 11:22:00 +08:00
Shangyan Zhou
2d0cf41dd1
Low-latency kernels use RDMA atomics to support AR.
2025-03-14 11:04:57 +08:00
Chenggang Zhao
7128ba3e39
Merge pull request #66 from dzhulgakov/combine-out-arg
...
Allow passing output tensor in low_latency_combine
2025-03-13 09:18:06 +08:00
Dmytro Dzhulgakov
50ac280ae7
comments
2025-03-13 00:42:08 +00:00
Chenggang Zhao
0008c6755e
Merge pull request #67 from deepseek-ai/roce-support
...
Update NVSHMEM to v3.2.5.
2025-03-11 09:30:45 +08:00
Dmytro Dzhulgakov
b3b61ef5ef
Allow passing output tensor in low_latency_combine
2025-03-10 22:19:21 +00:00
Chenggang Zhao
ed7487c15e
Support BF16 for low-latency kernels
2025-03-10 17:24:41 +08:00
Chenggang Zhao
1fc40d50f3
Improve AR performance
2025-03-06 21:41:19 +08:00
Chenggang Zhao
41385ba5b3
Merge pull request #45 from deepseek-ai/ar-support
...
Fix AR bugs for normal kernels
2025-03-06 09:48:17 +08:00
Chenggang Zhao
458cdcb22a
Fix AR bugs for normal kernels
2025-03-05 17:13:35 +08:00
Shangyan Zhou
e995aa22db
Update NVSHMEM to v3.2.5.
2025-03-05 16:16:52 +08:00
Chenggang Zhao
680e424bdc
Bugs fixed
2025-03-05 14:27:45 +08:00