Commit Graph

81 Commits

Author SHA1 Message Date
Chenggang Zhao
a9967bc27c Update README 2025-04-09 11:14:45 +08:00
Chenggang Zhao
5a80e4bb96 Fix indent x2 2025-04-09 11:00:10 +08:00
Chenggang Zhao
bdca8b0624 Fix indent 2025-04-09 10:59:07 +08:00
Chenggang Zhao
4c0cc290c7 Refactor M repetition with loops 2025-04-09 10:50:44 +08:00
Chenggang Zhao
a6524d411a Larger block N candidates 2025-04-09 10:11:43 +08:00
Chenggang Zhao
48a5f071be Clean up config heuristics 2025-04-09 10:01:15 +08:00
Chenggang Zhao
ce65d5e33c Remove unused x256 WGMMA 2025-04-09 09:32:46 +08:00
sazc
97575bf1c6 Performance: BlockTile 256x128 optimizations enable 1500+ TFLOPS FP8 performance on the H800-SXM platform 2025-04-08 17:42:23 +08:00
Chenggang Zhao
b4ecf9c3ff Fix TMA multicast bugs 2025-04-07 14:34:42 +08:00
Chenggang Zhao
bff5724ded Code format 2025-04-07 09:32:43 +08:00
Chenggang Zhao
3ea3cb203c Merge pull request #80 from abcdabcd987/fix-link-error: Fix linking error from ODR violation 2025-04-07 09:31:58 +08:00
Chenggang Zhao
b0868c9014 Merge pull request #79 from yizhang2077/lru-cache-opt: Add lru-cache for get_best_configs to avoid repeated calculation 2025-04-07 09:31:30 +08:00
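PR #79 is plain memoization: the best kernel configuration is a pure function of the (hashable) GEMM shape, so repeated calls can return a cached result instead of re-running the search. A minimal sketch with functools.lru_cache, using an illustrative signature and candidate list rather than DeepGEMM's actual get_best_configs:

```python
from functools import lru_cache

def ceil_div(a: int, b: int) -> int:
    return (a + b - 1) // b

# Hypothetical stand-in for the config search; the real function weighs
# more factors (SM utilization, tail waves, shared-memory budget).
@lru_cache(maxsize=None)
def get_best_configs(m: int, n: int, num_sms: int) -> tuple:
    best, best_waves = None, None
    for block_m in (64, 128, 256):
        for block_n in (64, 128, 160):
            num_blocks = ceil_div(m, block_m) * ceil_div(n, block_n)
            waves = ceil_div(num_blocks, num_sms)
            if best_waves is None or waves < best_waves:
                best, best_waves = (block_m, block_n), waves
    return best
```

Every call after the first with the same (m, n, num_sms) is an O(1) cache hit, which is all the PR needs to avoid repeated calculation.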
Lequn Chen
611e3f659d Fix linking error from ODR violation 2025-04-05 17:35:23 +00:00
Yi Zhang
776bd0cccc add lru-cache to avoid repeated calculation 2025-04-04 12:44:26 +08:00
Chenggang Zhao
c187c23ba8 Merge pull request #78 from deepseek-ai/tma-3d-padding: Solving bank conflict via padding and TMA 3D store 2025-04-03 16:06:10 +08:00
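The kernel-side fix in PR #78 is CUDA, but the padding half of the idea can be checked with plain bank arithmetic: shared memory has 32 banks, 4 bytes wide, and an access pattern whose row stride is a multiple of 128 bytes funnels one element per row into the same bank. A small Python illustration, assuming 4-byte elements:

```python
# Shared-memory banks: 4-byte words striped across 32 banks.
BANKS, BANK_BYTES = 32, 4

def bank(row: int, col: int, row_stride_bytes: int, elem_bytes: int = 4) -> int:
    addr = row * row_stride_bytes + col * elem_bytes
    return (addr // BANK_BYTES) % BANKS

# Column access over 32 rows with a 128-byte stride: one bank, 32-way conflict.
print({bank(r, 0, 128) for r in range(32)})   # {0}
# Pad each row by one 4-byte element (132-byte stride): 32 distinct banks.
print({bank(r, 0, 132) for r in range(32)})   # {0, 1, ..., 31}
```

The padded tile no longer matches the output's global layout, which is presumably where the PR's 3D TMA store comes in: it copies the tile out while stepping over the padding.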
Chenggang Zhao
d14962f072 Add DG_NVCC_OVERRIDE_CPP_STANDARD 2025-04-03 15:53:29 +08:00
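d14962f072 adds an environment-variable escape hatch for the C++ standard handed to NVCC, useful when the default clashes with a host toolchain. A sketch of the pattern only; the default (matching the "Use c++20" commit below) and the surrounding flag list are assumptions, not DeepGEMM's actual JIT code:

```python
import os

# Assumed default of c++20; the other flags are illustrative.
cpp_standard = os.getenv('DG_NVCC_OVERRIDE_CPP_STANDARD', 'c++20')
nvcc_flags = [f'-std={cpp_standard}', '-O3', '--expt-relaxed-constexpr']
```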
Chenggang Zhao
3a5539b7db Use c++20 2025-04-03 15:47:59 +08:00
Chenggang Zhao
6db7e1863b Solve STSM bank conflict via padding and 3D TMA 2025-04-03 15:39:35 +08:00
Liang
c57699ac93 Merge pull request #76 from YLGH/patch-1: Update nvcc flag c++20 2025-03-26 09:52:47 +08:00
YLGH
b7db15ce94 Update nvcc flag c++20 (needed for fconcepts) 2025-03-25 14:15:39 -07:00
Chenggang Zhao
8002b769c0 Update README 2025-03-25 18:13:24 +08:00
Chenggang Zhao
a5645d7afa Merge pull request #74 from deepseek-ai/larger-block: Performance: Larger BlockTile optimizations enable 1470+ TFLOPS FP8 on the H800-SXM 2025-03-25 18:07:33 +08:00
Chenggang Zhao
55ab91f72f Update performance 2025-03-25 18:06:47 +08:00
Chenggang Zhao
09d097f84d Add some notes 2025-03-25 17:41:49 +08:00
Chenggang Zhao
25db8de345 Better performance 2025-03-25 17:34:06 +08:00
Chenggang Zhao
1999d553e5 Lower TMA requirement 2025-03-25 17:18:53 +08:00
Chenggang Zhao
ddccb230ca Fix NVCC branch divergence 2025-03-25 17:12:51 +08:00
Chenggang Zhao
9c4f6f53f5 Optimize compilation speed 2025-03-25 16:51:21 +08:00
Chenggang Zhao
612dd57001 Simplify code 2025-03-25 16:45:20 +08:00
Chenggang Zhao
046fab64b7 Fix grouped GEMM cases 2025-03-25 16:41:44 +08:00
Chenggang Zhao
7768319ffe Remove unaligned predicates 2025-03-25 16:32:40 +08:00
Chenggang Zhao
3497428a5e Minor fix 2025-03-25 15:16:26 +08:00
Chenggang Zhao
7ffb118e54 Support multicasting on B 2025-03-25 14:56:42 +08:00
Chenggang Zhao
742fb1c8a5 Compilation-time GCD 2025-03-25 13:41:28 +08:00
Chenggang Zhao
b922e64cb2 Support block size 160 2025-03-25 13:37:59 +08:00
sazc
46eb0d08fb Performance: Larger BlockTile optimizations enable 1470+ TFLOPS FP8 performance on the H800-SXM platform 2025-03-25 10:44:57 +08:00
Liang
3b3783d06c Merge pull request #68 from ademeure/flush_l2_pr: Correctly flush L2 (+performance impact & upcoming optimization fork) 2025-03-16 09:16:34 +08:00
ademeure
6cbff5778f Correctly flush L2: previously, reconstructing the tensors on every iteration effectively kept them in L2 and gave the GPU enough idle time to avoid thermal throttling, in a potentially unrealistic way. The old behaviour may be representative of some use cases (e.g. a previous kernel filling L2 with the data in a very specific way), but it is not standard benchmarking practice. 2025-03-15 20:46:24 +00:00
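A common way to get the behaviour this commit describes, sketched in PyTorch (not necessarily the PR's exact code): keep one buffer larger than the L2 cache and overwrite it inside the timing loop, so every iteration pulls its operands from HBM instead of a warm cache.

```python
import torch

def bench_with_l2_flush(fn, num_iters: int = 100) -> float:
    # 256 MB buffer, comfortably larger than Hopper's 50 MB L2; zeroing it
    # between iterations evicts the operands so each run starts cold.
    flush = torch.empty(64 * 1024 * 1024, dtype=torch.int, device='cuda')
    starts = [torch.cuda.Event(enable_timing=True) for _ in range(num_iters)]
    ends = [torch.cuda.Event(enable_timing=True) for _ in range(num_iters)]
    for i in range(num_iters):
        flush.zero_()              # flush L2 outside the timed region
        starts[i].record()
        fn()
        ends[i].record()
    torch.cuda.synchronize()
    return sum(s.elapsed_time(e) for s, e in zip(starts, ends)) / num_iters  # ms
```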
Liang
e1c070fbef Merge pull request #65 from Z-NAVY/main: Fix get_col_major_tma_aligned_tensor to handle 2-dimensional inputs 2025-03-14 13:50:08 +08:00
Liang
4377c4dc57 Merge pull request #63 from fzyzcjy/patch-2: Super tiny fix typo 2025-03-14 10:27:48 +08:00
z-navy
3f92607b98 Fix get_col_major_tma_aligned_tensor to handle 2-dimensional inputs 2025-03-13 22:15:16 +08:00
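The shape of this fix can be sketched as a promote-then-squeeze wrapper: treat a 2-D (m, n) input as a batch of one so the existing 3-D path applies, then drop the batch dimension on the way out. The alignment body below is an illustrative assumption, not DeepGEMM's real implementation:

```python
import torch

def get_col_major_tma_aligned_tensor(x: torch.Tensor) -> torch.Tensor:
    # PR #65's pattern: promote (m, n) to (1, m, n), reuse the 3-D path,
    # then squeeze the batch dimension back off.
    if x.dim() == 2:
        return get_col_major_tma_aligned_tensor(x.unsqueeze(0)).squeeze(0)
    assert x.dim() == 3
    b, m, n = x.shape
    aligned_m = (m + 15) // 16 * 16   # assumed 16-element alignment for TMA
    out = x.new_empty((b, n, aligned_m)).transpose(1, 2)[:, :m, :]
    out.copy_(x)                      # column-major storage with padded stride
    return out
```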
fzyzcjy
e7fff7ef0a Update m_grouped_gemm.py 2025-03-13 22:09:15 +08:00
Chenggang Zhao
bd2a775528 Code format 2025-03-11 13:26:10 +08:00
Chenggang Zhao
5233bad1e9 Merge pull request #55 from sleepcoo/fix-cudagraph: fix cuda_graph rng check error 2025-03-11 13:25:35 +08:00
sleepcoo
723a00338e fix cuda_graph rng check error 2025-03-11 12:40:42 +08:00
Chenggang Zhao
5e4badc577 Fix type lint 2025-03-10 13:10:16 +08:00
Chenggang Zhao
ba1e93a5c7 Merge pull request #44 from sazczmh/main: Performance: Configuration algorithms tuned to minimize the impact of tail effects, now up to 1402 TFLOPS 2025-03-10 13:08:03 +08:00
sazc
bed67b234c Minor fix 2025-03-10 13:02:02 +08:00
sazc
ed278eddd3 formats: Optimize get_best_configs implementation 2025-03-10 12:56:14 +08:00
sazc
50cf26cc7c Performance: Configuration algorithms tuned to minimize the impact of tail effects, now up to 1402 TFLOPS 2025-03-10 11:45:05 +08:00