DeepEP/csrc/kernels
moningchen 5ab80c28f3 In the Internode Normal Kernel, when using NVSHMEM IBRC for RDMA data transmission, a single QP is used for data transfer between two GPUs, which limits kernel performance with dual-port NICs and in RoCE networks.
In our optimized Internode Normal Kernel, we use multiple QPs for data transmission between two GPUs, assigning a different QP to each channel. We also switched the transport from IBRC to IBGDA.

Through these optimizations, the Internode Normal Kernel achieves optimal performance in both H800 and H20 environments, with RDMA transmission performance approaching the physical limit of the network. Measured with the current default statistics, RDMA performance reaches 60 GB/s+ in 4-node H800 and H20 environments.
2025-04-21 15:50:39 +08:00
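For illustration, here is a minimal CUDA sketch of the per-channel QP idea the commit describes: instead of serializing all traffic between two GPUs on one QP, each channel indexes its own QP, so flows can spread across both ports of a dual-port NIC or multiple RoCE paths. All names here (QP, qp_table, num_qps_per_pe, get_qp_for_channel) are hypothetical placeholders, not the actual DeepEP/NVSHMEM API; the real implementation lives in ibgda_device.cuh on top of NVSHMEM's internal IBGDA state.

    // Hypothetical sketch only; not the real DeepEP/NVSHMEM IBGDA code.
    struct QP { int id; };          // placeholder for an RDMA queue-pair handle

    // Assumed device-side state, set up once at initialization:
    // for each remote PE, a contiguous array of num_qps_per_pe QPs.
    __device__ QP* qp_table;        // layout: [num_pes * num_qps_per_pe]
    __device__ int num_qps_per_pe;  // e.g. one QP per communication channel

    // Map each channel to its own QP, so traffic between two GPUs is
    // spread over multiple QPs instead of funneled through a single one
    // (the limitation of the single-QP IBRC path described above).
    __device__ __forceinline__ QP* get_qp_for_channel(int dst_pe, int channel_id) {
        return &qp_table[dst_pe * num_qps_per_pe + channel_id % num_qps_per_pe];
    }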
api.cuh Support zero-copy for low-latency combine 2025-03-18 15:41:50 +08:00
buffer.cuh Initial commit 2025-02-25 09:07:53 +08:00
CMakeLists.txt Initial commit 2025-02-25 09:07:53 +08:00
configs.cuh Initial commit 2025-02-25 09:07:53 +08:00
exception.cuh Initial commit 2025-02-25 09:07:53 +08:00
ibgda_device.cuh In the Internode Normal Kernel, when using NVSHMEM IBRC for RDMA data transmission, a single QP is used for data transfer between two GPUs, which limits kernel performance with dual-port NICs and in RoCE networks. 2025-04-21 15:50:39 +08:00
internode_ll.cu Remove useless control metadata for low-latency combine 2025-04-07 09:55:39 +08:00
internode.cu In the Internode Normal Kernel, when using NVSHMEM IBRC for RDMA data transmission, a single QP is used for data transfer between two GPUs, which limits kernel performance with dual-port NICs and in RoCE networks. 2025-04-21 15:50:39 +08:00
intranode.cu Fix bugs for intranode EP kernels 2025-03-14 16:09:23 +08:00
launch.cuh Initial commit 2025-02-25 09:07:53 +08:00
runtime.cu In the Internode Normal Kernel, when using NVSHMEM IBRC for RDMA data transmission, a single QP is used for data transfer between two GPUs, which limits kernel performance with dual-port NICs and in RoCE networks. 2025-04-21 15:50:39 +08:00
utils.cuh Update some comments and docs 2025-02-27 10:27:22 +08:00