Commit Graph

41 Commits

Author SHA1 Message Date
Chenggang Zhao
3a5539b7db Use c++20 2025-04-03 15:47:59 +08:00
Chenggang Zhao
6db7e1863b Solve STSM bank conflict via padding and 3D TMA 2025-04-03 15:39:35 +08:00
YLGH
b7db15ce94 Update nvcc flag c++20 (needed for fconcepts) 2025-03-25 14:15:39 -07:00
Chenggang Zhao
09d097f84d Add some notes 2025-03-25 17:41:49 +08:00
Chenggang Zhao
25db8de345 Better performance 2025-03-25 17:34:06 +08:00
Chenggang Zhao
1999d553e5 Lower TMA requirement 2025-03-25 17:18:53 +08:00
Chenggang Zhao
ddccb230ca Fix NVCC branch divergence 2025-03-25 17:12:51 +08:00
Chenggang Zhao
9c4f6f53f5 Optimize compilation speed 2025-03-25 16:51:21 +08:00
Chenggang Zhao
612dd57001 Simplify code 2025-03-25 16:45:20 +08:00
Chenggang Zhao
046fab64b7 Fix grouped GEMM cases 2025-03-25 16:41:44 +08:00
Chenggang Zhao
7768319ffe Remove unaligned predicates 2025-03-25 16:32:40 +08:00
Chenggang Zhao
3497428a5e Minor fix 2025-03-25 15:16:26 +08:00
Chenggang Zhao
7ffb118e54 Support multicasting on B 2025-03-25 14:56:42 +08:00
Chenggang Zhao
742fb1c8a5 Compilation-time GCD 2025-03-25 13:41:28 +08:00
Chenggang Zhao
b922e64cb2 Support block size 160 2025-03-25 13:37:59 +08:00
sazc
46eb0d08fb Performance: Larger BlockTile optimizations enable 1470+ TFLOPS FP8 performance on the H800-SXM platform 2025-03-25 10:44:57 +08:00
ademeure
6cbff5778f Correctly flush L2, as reconstructing the tensors on every iteration effectively put them in the L2, and gave the GPU enough idle time to avoid thermal throttling in a potentially unrealistic way.
The previous behaviour is potentially representative of some use cases (e.g. a previous kernel filling L2 with the data in a very specific way), but it is not standard benchmarking practice. 2025-03-15 20:46:24 +00:00
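The L2-flush change above amounts to invalidating the cache between timed runs instead of rebuilding the input tensors each iteration. A minimal PyTorch sketch of that idea (not the repository's actual benchmark code; `kernel_fn`, the iteration count, and the 256 MiB scratch size are illustrative assumptions):

```python
import torch

def bench_with_l2_flush(kernel_fn, *args, num_iters: int = 10) -> float:
    # Scratch buffer assumed to be comfortably larger than the GPU's L2 cache.
    flush_buf = torch.empty(256 * 1024 * 1024 // 4, dtype=torch.int, device='cuda')

    starts = [torch.cuda.Event(enable_timing=True) for _ in range(num_iters)]
    ends = [torch.cuda.Event(enable_timing=True) for _ in range(num_iters)]
    for i in range(num_iters):
        flush_buf.zero_()          # overwrite L2 so inputs are re-read from HBM
        starts[i].record()
        kernel_fn(*args)           # the kernel under test
        ends[i].record()
    torch.cuda.synchronize()
    return sum(s.elapsed_time(e) for s, e in zip(starts, ends)) / num_iters  # ms
```

Zeroing a buffer larger than L2 between iterations forces the GEMM inputs out of cache without inserting the idle time that rebuilding the tensors would, which is the behaviour the commit describes as standard benchmarking practice.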
Liang
e1c070fbef Merge pull request #65 from Z-NAVY/main (Fix get_col_major_tma_aligned_tensor to handle 2-dimensional inputs) 2025-03-14 13:50:08 +08:00
z-navy
3f92607b98 Fix get_col_major_tma_aligned_tensor to handle 2-dimensional inputs 2025-03-13 22:15:16 +08:00
fzyzcjy
e7fff7ef0a Update m_grouped_gemm.py 2025-03-13 22:09:15 +08:00
Chenggang Zhao
bd2a775528 Code format 2025-03-11 13:26:10 +08:00
Chenggang Zhao
5233bad1e9 Merge pull request #55 from sleepcoo/fix-cudagraph (fix cuda_graph rng check error) 2025-03-11 13:25:35 +08:00
sleepcoo
723a00338e fix cuda_graph rng check error 2025-03-11 12:40:42 +08:00
Chenggang Zhao
5e4badc577 Fix type lint 2025-03-10 13:10:16 +08:00
sazc
bed67b234c Minor fix 2025-03-10 13:02:02 +08:00
sazc
ed278eddd3 formats: Optimize get_best_configs implementation 2025-03-10 12:56:14 +08:00
sazc
50cf26cc7c Performance: Configuration algorithms tuned to minimize the impact of tail effects, now up to 1402 TFLOPS 2025-03-10 11:45:05 +08:00
sazc
fcd1dcd99d Performance: reducing the percentage of FFMA interleaving yields a slight performance gain, roughly 0.5% 2025-03-05 17:50:22 +08:00
Chenggang Zhao
9b0dad8640 Add some notes for promotion 2025-03-04 11:42:20 +08:00
Liang
ded740f736 Fix documentation of m_grouped_gemm_fp8_fp8_bf16_nt_contiguous in m_grouped_gemm.py 2025-03-04 11:26:23 +08:00
Chenggang Zhao
dff6bb6f0b Add some notes 2025-03-03 11:35:52 +08:00
Chenggang Zhao
6c5da03ba9 Support more shapes 2025-02-28 10:04:59 +08:00
Chenggang Zhao
b69f630b91 Minor fix util function 2025-02-28 09:46:38 +08:00
Chenggang Zhao
6e10cba207 Minor fix 2025-02-28 09:21:35 +08:00
Liang
fbec9e5eee Update get_best_configs (a better strategy to choose config) 2025-02-27 23:18:52 +08:00
dotrail
488b5fc467 fix typo 2025-02-27 11:53:33 +00:00
Chenggang Zhao
6da94d2d36 Add extra TMA checks 2025-02-27 18:20:57 +08:00
Chenggang Zhao
ca13ce0fab Fix TMA store bugs and code format 2025-02-27 17:57:21 +08:00
Chenggang Zhao
6e55da296f Fix python -O mode issues 2025-02-27 10:42:46 +08:00
AcraeaTerpsicore
96b31fd6bb fix typo 2025-02-26 18:37:22 +08:00
Chenggang Zhao
a6d97a1c1b Initial commit 2025-02-25 22:52:41 +08:00