Chenggang Zhao
37aa127451
Use swizzling instead of padding (#86)
...
* Add swizzling params
* Add TMA D descriptor
* Always use STSMx2
* Swizzling draft
* Compatible with padding
* Fix bugs
* Optimize swizzle performance
* Optimize expression
* Optimize TMA issues
* Fix README
* Stricter assertions
2025-04-14 15:20:58 +08:00
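Swizzling replaces the shared-memory padding introduced in #78: rather than widening each row so that column accesses spread across banks, the store and load addresses are XOR-permuted, which keeps the tile dense (and therefore TMA-friendly) while still avoiding STSM bank conflicts. Below is a minimal sketch of the idea only, using an element-level XOR on a 32x32 FP32 tile; DeepGEMM's actual layout swizzles at TMA's 16-byte granularity over FP8/BF16 tiles, so treat the names and sizes here as illustrative.

```cuda
// Minimal sketch, not DeepGEMM's actual layout. An XOR-based address
// transform lets row-major writes and column-major reads of a 32x32 FP32
// tile both stay bank-conflict-free, with no padding column and therefore
// no wasted shared memory.
#include <cuda_runtime.h>

constexpr int kTile = 32;

__device__ __forceinline__ int swizzle(int row, int col) {
    // XOR the column with the row: a per-row permutation of the 32 banks,
    // so a warp reading one logical column touches 32 different banks.
    return row * kTile + (col ^ row);
}

__global__ void transpose_tile(const float* __restrict__ in,
                               float* __restrict__ out, int n) {
    __shared__ float tile[kTile * kTile];  // no "+1" padding needed

    int x = threadIdx.x, y = threadIdx.y;
    tile[swizzle(y, x)] = in[(blockIdx.y * kTile + y) * n + blockIdx.x * kTile + x];
    __syncthreads();

    // Column-major read through the same transform: conflict-free, and the
    // data comes back in transposed order.
    out[(blockIdx.x * kTile + y) * n + blockIdx.y * kTile + x] = tile[swizzle(x, y)];
}
```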
Chenggang Zhao
2e7e58011b
Merge pull request #83 from deepseek-ai/tma-1d-store
...
Use 1D TMA store instead of 3D
2025-04-11 11:25:01 +08:00
Chenggang Zhao
b0d64817a7
OOB bugs fixed
2025-04-11 11:00:47 +08:00
Chenggang Zhao
99eb6ec563
Remove useless STSM
2025-04-11 10:45:36 +08:00
Chenggang Zhao
8041ed7164
Use 1D TMA store
2025-04-11 10:42:01 +08:00
Chenggang Zhao
a77009cb14
Make partition pipelined
2025-04-10 18:07:25 +08:00
Chenggang Zhao
5bda27244b
Add CMake support for CLion indexing
2025-04-10 09:57:54 +08:00
Chenggang Zhao
327ec92f69
Update roadmap
2025-04-09 11:44:30 +08:00
Chenggang Zhao
677143be64
Update roadmap
2025-04-09 11:41:36 +08:00
Chenggang Zhao
fed3e4d701
Merge pull request #81 from deepseek-ai/blocktile-256x128
...
Performance: BlockTile 256x128 optimizations enable 1500+ TFLOPS FP8
2025-04-09 11:26:40 +08:00
Chenggang Zhao
989c9e3694
Update README
2025-04-09 11:17:47 +08:00
Chenggang Zhao
a9967bc27c
Update README
2025-04-09 11:14:45 +08:00
Chenggang Zhao
5a80e4bb96
Fix indent x2
2025-04-09 11:00:10 +08:00
Chenggang Zhao
bdca8b0624
Fix indent
2025-04-09 10:59:07 +08:00
Chenggang Zhao
4c0cc290c7
Refactor M repetition with loops
2025-04-09 10:50:44 +08:00
Chenggang Zhao
a6524d411a
Larger block N candidates
2025-04-09 10:11:43 +08:00
Chenggang Zhao
48a5f071be
Clean up config heuristics
2025-04-09 10:01:15 +08:00
Chenggang Zhao
ce65d5e33c
Remove unused x256 WGMMA
2025-04-09 09:32:46 +08:00
sazc
97575bf1c6
Performance: BlockTile 256x128 optimizations enable 1500+ TFLOPS FP8 performance on the H800-SXM platform
2025-04-08 17:42:23 +08:00
Chenggang Zhao
b4ecf9c3ff
Fix TMA multicast bugs
2025-04-07 14:34:42 +08:00
Chenggang Zhao
bff5724ded
Code format
2025-04-07 09:32:43 +08:00
Chenggang Zhao
3ea3cb203c
Merge pull request #80 from abcdabcd987/fix-link-error
...
Fix linking error from ODR violation
2025-04-07 09:31:58 +08:00
Chenggang Zhao
b0868c9014
Merge pull request #79 from yizhang2077/lru-cache-opt
...
Add lru-cache for get_best_configs to avoid repeated calculation
2025-04-07 09:31:30 +08:00
Lequn Chen
611e3f659d
Fix linking error from ODR violation
2025-04-05 17:35:23 +00:00
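For reference, the linking error fixed in #80 is the classic ODR failure mode; the log does not show which symbol was involved, so the snippet below is only a generic illustration of the problem and the usual fix.

```cuda
// Generic illustration -- not the actual symbol fixed in this commit. With
// the definition below placed in a header included from two .cu / .cpp
// files, each translation unit emits an external definition of
// ceil_div(int, int), and linking them together fails with a
// multiple-definition error:
//
//   int ceil_div(int a, int b) { return (a + b - 1) / b; }   // ODR violation
//
// Marking the definition `inline` (or `static`, or placing it in an
// anonymous namespace) makes it legal for every includer to carry a copy.
inline int ceil_div(int a, int b) { return (a + b - 1) / b; }
```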
Yi Zhang
776bd0cccc
Add lru-cache to avoid repeated calculation
2025-04-04 12:44:26 +08:00
Chenggang Zhao
c187c23ba8
Merge pull request #78 from deepseek-ai/tma-3d-padding
...
Solving bank conflict via padding and TMA 3D store
2025-04-03 16:06:10 +08:00
Chenggang Zhao
d14962f072
Add DG_NVCC_OVERRIDE_CPP_STANDARD
2025-04-03 15:53:29 +08:00
Chenggang Zhao
3a5539b7db
Use C++20
2025-04-03 15:47:59 +08:00
Chenggang Zhao
6db7e1863b
Solve STSM bank conflict via padding and 3D TMA
2025-04-03 15:39:35 +08:00
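This is the scheme that #86 later replaces with swizzling: each shared-memory row is padded by one element so that a warp touching one logical column no longer lands on a single bank, and the padded layout is then exposed to TMA as a 3D tensor so the pad is skipped on the store back to global memory. A minimal sketch of the padding half only; the 3D TMA descriptor is omitted, and the tile shape and element type are illustrative, not DeepGEMM's.

```cuda
// Minimal sketch of the padding approach (later replaced by swizzling).
// Widening each row by one element shifts the bank assignment of every row,
// so a column access is conflict-free. The cost: the tile is no longer
// dense, which is why a 3D TMA store (with the pad as a skipped stride) was
// needed -- that part is not shown here.
#include <cuda_runtime.h>

constexpr int kTile = 32;
constexpr int kLd = kTile + 1;  // padded leading dimension

__global__ void transpose_tile_padded(const float* __restrict__ in,
                                      float* __restrict__ out, int n) {
    __shared__ float tile[kTile * kLd];  // 32 x 33 instead of 32 x 32

    int x = threadIdx.x, y = threadIdx.y;
    tile[y * kLd + x] = in[(blockIdx.y * kTile + y) * n + blockIdx.x * kTile + x];
    __syncthreads();

    // Column read: addresses are x * 33 + y, so consecutive x map to
    // different banks (33 mod 32 == 1) and the access is conflict-free.
    out[(blockIdx.x * kTile + y) * n + blockIdx.y * kTile + x] = tile[x * kLd + y];
}
```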
Liang
c57699ac93
Merge pull request #76 from YLGH/patch-1
...
Update NVCC flag to C++20
2025-03-26 09:52:47 +08:00
YLGH
b7db15ce94
Update NVCC flag to C++20
...
Needed for -fconcepts
2025-03-25 14:15:39 -07:00
Chenggang Zhao
8002b769c0
Update README
2025-03-25 18:13:24 +08:00
Chenggang Zhao
a5645d7afa
Merge pull request #74 from deepseek-ai/larger-block
...
Performance: Larger BlockTile optimizations enable 1470+ TFLOPS FP8 on the H800-SXM
2025-03-25 18:07:33 +08:00
Chenggang Zhao
55ab91f72f
Update performance
2025-03-25 18:06:47 +08:00
Chenggang Zhao
09d097f84d
Add some notes
2025-03-25 17:41:49 +08:00
Chenggang Zhao
25db8de345
Better performance
2025-03-25 17:34:06 +08:00
Chenggang Zhao
1999d553e5
Lower TMA requirement
2025-03-25 17:18:53 +08:00
Chenggang Zhao
ddccb230ca
Fix NVCC branch divergence
2025-03-25 17:12:51 +08:00
Chenggang Zhao
9c4f6f53f5
Optimize compilation speed
2025-03-25 16:51:21 +08:00
Chenggang Zhao
612dd57001
Simplify code
2025-03-25 16:45:20 +08:00
Chenggang Zhao
046fab64b7
Fix grouped GEMM cases
2025-03-25 16:41:44 +08:00
Chenggang Zhao
7768319ffe
Remove unaligned predicates
2025-03-25 16:32:40 +08:00
Chenggang Zhao
3497428a5e
Minor fix
2025-03-25 15:16:26 +08:00
Chenggang Zhao
7ffb118e54
Support multicasting on B
2025-03-25 14:56:42 +08:00
Chenggang Zhao
742fb1c8a5
Compilation-time GCD
2025-03-25 13:41:28 +08:00
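A compile-time GCD turns divisibility decisions into constexpr facts the compiler can branch on, rather than runtime checks. The log does not say which operands DeepGEMM feeds it, so the sketch below only shows the mechanism, with hypothetical parameter names.

```cuda
// Illustrative only: folding a GCD of template parameters into a constexpr
// value so it can drive if-constexpr branches and static_asserts instead of
// runtime tests. The names below are hypothetical, not DeepGEMM's.
#include <numeric>

template <unsigned kBlockN, unsigned kNumTmaMulticast>
struct LaunchConfig {
    // Resolved entirely at compile time; usable as an array bound or in
    // if-constexpr, unlike a value computed from runtime arguments.
    static constexpr unsigned kAlignment = std::gcd(kBlockN, kNumTmaMulticast);
};

static_assert(LaunchConfig<160, 2>::kAlignment == 2);  // evaluated by the compiler
static_assert(LaunchConfig<112, 2>::kAlignment == 2);
```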
Chenggang Zhao
b922e64cb2
Support block size 160
2025-03-25 13:37:59 +08:00
sazc
46eb0d08fb
Performance: Larger BlockTile optimizations enable 1470+ TFLOPS FP8 performance on the H800-SXM platform
2025-03-25 10:44:57 +08:00
Liang
3b3783d06c
Merge pull request #68 from ademeure/flush_l2_pr
...
Correctly flush L2 (+performance impact & upcoming optimization fork)
2025-03-16 09:16:34 +08:00
ademeure
6cbff5778f
Correctly flush L2: reconstructing the tensors on every iteration effectively put them in L2 and gave the GPU enough idle time to avoid thermal throttling in a potentially unrealistic way.
...
The previous behaviour is potentially representative of some use cases (e.g. a previous kernel filling L2 with the data in a very specific way) but is not standard benchmarking practice.
2025-03-15 20:46:24 +00:00
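A common way to implement this flush is to overwrite a scratch buffer larger than the L2 (50 MB on Hopper-class parts) between timed iterations, so that inputs left in cache by tensor construction or a previous run are evicted. The sketch below shows that pattern in CUDA host code; the benchmark harness in this repo does the equivalent from Python, and the struct name and sizes here are illustrative.

```cuda
// Illustrative L2-flush helper for benchmarking. Overwriting a scratch
// buffer larger than the L2 evicts whatever the kernel under test (or
// tensor construction) left behind, so every timed run starts from
// DRAM-resident inputs.
#include <cuda_runtime.h>

struct L2Flusher {
    void* scratch = nullptr;
    size_t bytes = 256u << 20;  // 256 MB, comfortably larger than L2

    L2Flusher()  { cudaMalloc(&scratch, bytes); }
    ~L2Flusher() { cudaFree(scratch); }

    void flush(cudaStream_t stream) {
        // Writing the whole buffer forces its lines through L2, pushing out
        // the previous iteration's data before the next timed kernel launch.
        cudaMemsetAsync(scratch, 0, bytes, stream);
    }
};
```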
Liang
e1c070fbef
Merge pull request #65 from Z-NAVY/main
...
Fix get_col_major_tma_aligned_tensor to handle 2-dimensional inputs
2025-03-14 13:50:08 +08:00