yukuai
e82c4139da
Revert "Fixed the bug in get_swizzle_mode function related to elem_size setting. (#115)"
This reverts commit ac428e25e0.
This PR causes wgrad to hang during testing; revert it until we resolve the issue.
2025-06-23 17:13:36 +08:00
TherLF
ac428e25e0
Fixed the bug in get_swizzle_mode function related to elem_size setting. (#115)
2025-06-23 09:37:10 +08:00
shixianc
0c88cd0139
Fix illegal memory address when skipping -1 m indices (#113)
Co-authored-by: Shixian Cui <shixian@amazon.com>
2025-06-16 10:44:31 +08:00
yukuai26
8dfa329827
Grouped GEMM skip useless computation for unaligned Ms (#103)
* Grouped GEMM skip useless computation for unaligned Ms
* Update readme.md
* small typo
* Rename variables
* Restore previous indent
* Format
* Refactor tests
* Add `SkipComputation` types
* Bug fixed
* Format
* Fix tests
* Add assertions
* Minor fix
---------
Co-authored-by: yukuai <yukuai@deepseek.com>
Co-authored-by: Chenggang Zhao <chenggangz@deepseek.com>
2025-05-27 13:43:38 +08:00
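The idea behind this entry (and the later -1 m-index fix in #113): in the contiguous grouped-GEMM layout, each group's M is padded to an alignment boundary and padded rows carry index -1, so any M-block made entirely of padding can be skipped. A minimal sketch of that bookkeeping under assumed tile sizes; `build_m_indices` and `blocks_to_skip` are hypothetical helpers, not DeepGEMM's actual API:

```python
BLOCK_M = 64        # hypothetical tile height
M_ALIGNMENT = 128   # hypothetical per-group M alignment

def build_m_indices(group_ms, alignment=M_ALIGNMENT):
    """Concatenate groups along M, padding each group's M to `alignment`.
    Padded rows get index -1 so the kernel knows they hold no real work."""
    indices = []
    for g, m in enumerate(group_ms):
        aligned = (m + alignment - 1) // alignment * alignment
        indices += [g] * m + [-1] * (aligned - m)
    return indices

def blocks_to_skip(indices, block_m=BLOCK_M):
    # An M-block whose rows are all -1 contains only padding: skip it.
    n_blocks = len(indices) // block_m
    return sum(
        1 for b in range(n_blocks)
        if all(i == -1 for i in indices[b * block_m:(b + 1) * block_m])
    )

# With groups of M = 10 and 128, the first group pads 10 -> 128 rows,
# so the block covering rows 64..127 is pure padding and is skipped.
```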
Chenggang Zhao
391755ada0
Fix JIT tests
2025-05-16 14:39:58 +08:00
Chenggang Zhao
78d8362e7a
Add a missing #pragma once
2025-05-15 18:10:05 +08:00
Chenggang Zhao
ec426b9d66
Merge pull request #100 from deepseek-ai/remove-tuner
Refactor some launch-related structures
2025-05-15 17:05:42 +08:00
Chenggang Zhao
104a6ec109
Add __assertfail
2025-05-15 17:04:21 +08:00
Chenggang Zhao
3b412f458a
Unify `kwargs` usages
2025-05-15 16:53:52 +08:00
Chenggang Zhao
350989eef3
Unify `ceil_div`s
2025-05-15 16:48:32 +08:00
Chenggang Zhao
4373af2e82
Add DG_PRINT_CONFIGS
2025-05-15 16:36:40 +08:00
Chenggang Zhao
816b39053a
Refactor launch-related structures
2025-05-15 16:14:21 +08:00
Chenggang Zhao
e2d6a107ef
Clean up some useless stuff
2025-05-14 15:46:45 +08:00
Chenggang Zhao
ebf3d2f916
Update plans
2025-05-14 15:05:24 +08:00
Zhean Xu
04278f6dee
Weight gradient kernels for dense and MoE models (#95)
* Init weight gradient kernels.
* Support unaligned n,k and gmem stride
* Update docs
* Several cleanups
* Remove restrictions on N
* Add stride(0) assertions
---------
Co-authored-by: Chenggang Zhao <chenggangz@deepseek.com>
2025-05-14 14:47:58 +08:00
Chenggang Zhao
d75b218b7b
Update README with NVRTC news
2025-05-07 13:26:58 +08:00
Chenggang Zhao
8702f910e3
Fix 12.9 compatibility
2025-05-07 13:23:40 +08:00
Chenggang Zhao
085b4a1532
Add DG_PRINT_AUTOTUNE to README
2025-05-07 11:46:52 +08:00
Chenggang Zhao
daec8fd2fc
Fix pipeline stage edge cases
2025-05-07 11:40:34 +08:00
Gabriel Wu
bfe983c4c2
Refactor JIT compilation (+NVRTC support) (#94)
* [wip] refactor: compile to .cubin
Signed-off-by: Zihua Wu <13583761+lucifer1004@users.noreply.github.com>
* refactor: compile to .cubin and add NVRTC option
Signed-off-by: Zihua Wu <13583761+lucifer1004@users.noreply.github.com>
* fix: compiler version
Signed-off-by: Zihua Wu <13583761+lucifer1004@users.noreply.github.com>
* feat: compat for old drivers
Signed-off-by: Zihua Wu <13583761+lucifer1004@users.noreply.github.com>
* feat: save kernel name to file
Signed-off-by: Zihua Wu <13583761+lucifer1004@users.noreply.github.com>
* feat: fix win compat
Signed-off-by: Zihua Wu <13583761+lucifer1004@users.noreply.github.com>
* fix: windows compat
Signed-off-by: Gabriel Wu <13583761+lucifer1004@users.noreply.github.com>
* feat: make API more general
Signed-off-by: Zihua Wu <13583761+lucifer1004@users.noreply.github.com>
* feat: drop support for CUDA<12.3
Signed-off-by: Zihua Wu <13583761+lucifer1004@users.noreply.github.com>
* doc: update README
Signed-off-by: Zihua Wu <13583761+lucifer1004@users.noreply.github.com>
* Some lints and refactor
* Refactor runtime
* Several fixes
* Refactor environment variables
* Code format
* Add a TODO
* Compatible with CUDA 12.3
* Fix indent
* Fix typing
* Drop support for Windows
* Add a TODO
---------
Signed-off-by: Zihua Wu <13583761+lucifer1004@users.noreply.github.com>
Signed-off-by: Gabriel Wu <13583761+lucifer1004@users.noreply.github.com>
Co-authored-by: Chenggang Zhao <chenggangz@deepseek.com>
2025-05-07 11:38:14 +08:00
Chenggang Zhao
d374456787
Fewer stages for small shape K
2025-04-28 10:36:08 +08:00
Chenggang Zhao
86afd0c212
Add two more optimization TODOs
2025-04-27 17:51:11 +08:00
Chenggang Zhao
33e0c3ce40
Update plans
2025-04-24 14:37:53 +08:00
yukuai26
95e81b3dd6
Indivisible TMA (#90)
Fix indivisible shapes for TMA multicast
---------
Co-authored-by: yukuai <yukuai@deepseek.com>
Co-authored-by: Chenggang Zhao <chenggangz@deepseek.com>
2025-04-23 14:55:14 +08:00
yukuai26
891f35adf5
Support TMA multicast on B with `m_grouped_gemm_contiguous`. (#88)
2025-04-21 09:43:17 +08:00
Chenggang Zhao
83aa960b9b
Fix bugs
2025-04-18 11:55:51 +08:00
Chenggang Zhao
fea9309c1e
Update README
2025-04-18 11:38:52 +08:00
Chenggang Zhao
340d9880f4
Overlap TMA store
2025-04-18 11:18:23 +08:00
Zhean Xu
4499c4ccbb
Refactor MMA template with CUTLASS (#87)
* Refactor MMA with cutlass
* Update README.md
---------
Co-authored-by: Zhean Xu <xza@deepseek.com>
2025-04-14 17:06:49 +08:00
Chenggang Zhao
37aa127451
Use swizzling instead of padding (#86)
* Add swizzling params
* Add TMA D descriptor
* Always use STSMx2
* Swizzling draft
* Compatible with padding
* Fix bugs
* Optimize swizzle performance
* Optimize expression
* Optimize TMA issues
* Fix README
* Stricter assertions
2025-04-14 15:20:58 +08:00
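Padding avoids shared-memory bank conflicts by wasting storage; swizzling reaches the same goal with none. A minimal sketch of the general XOR-swizzle idea that this PR's "swizzling params" refer to, purely illustrative and not DeepGEMM's actual `get_swizzle_mode` layout:

```python
def swizzle(row: int, col: int, period: int = 8) -> int:
    # XOR the column index with the row index (mod the swizzle period).
    # Each row applies a distinct permutation of the columns, so walking
    # down one logical column touches a different bank on every row
    # instead of hammering the same one.
    return col ^ (row % period)

# Down logical column 0, the 8 rows land on 8 distinct banks:
# {swizzle(r, 0) for r in range(8)} == {0, 1, 2, 3, 4, 5, 6, 7}
```

Because XOR with a fixed value is a bijection, each row still stores every column exactly once, which is why no padding space is needed.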
Chenggang Zhao
2e7e58011b
Merge pull request #83 from deepseek-ai/tma-1d-store
Use 1D TMA store instead of 3D
2025-04-11 11:25:01 +08:00
Chenggang Zhao
b0d64817a7
OOB bugs fixed
2025-04-11 11:00:47 +08:00
Chenggang Zhao
99eb6ec563
Remove useless STSM
2025-04-11 10:45:36 +08:00
Chenggang Zhao
8041ed7164
Use 1D TMA store
2025-04-11 10:42:01 +08:00
Chenggang Zhao
a77009cb14
Make partition pipelined
2025-04-10 18:07:25 +08:00
Chenggang Zhao
5bda27244b
Add CMake support for CLion indexing
2025-04-10 09:57:54 +08:00
Chenggang Zhao
327ec92f69
Update roadmap
2025-04-09 11:44:30 +08:00
Chenggang Zhao
677143be64
Update roadmap
2025-04-09 11:41:36 +08:00
Chenggang Zhao
fed3e4d701
Merge pull request #81 from deepseek-ai/blocktile-256x128
Performance: BlockTile 256x128 optimizations enable 1500+ TFLOPS FP8
2025-04-09 11:26:40 +08:00
Chenggang Zhao
989c9e3694
Update README
2025-04-09 11:17:47 +08:00
Chenggang Zhao
a9967bc27c
Update README
2025-04-09 11:14:45 +08:00
Chenggang Zhao
5a80e4bb96
Fix indent x2
2025-04-09 11:00:10 +08:00
Chenggang Zhao
bdca8b0624
Fix indent
2025-04-09 10:59:07 +08:00
Chenggang Zhao
4c0cc290c7
Refactor M repetition with loops
2025-04-09 10:50:44 +08:00
Chenggang Zhao
a6524d411a
Larger block N candidates
2025-04-09 10:11:43 +08:00
Chenggang Zhao
48a5f071be
Clean up config heuristics
2025-04-09 10:01:15 +08:00
Chenggang Zhao
ce65d5e33c
Remove unused x256 WGMMA
2025-04-09 09:32:46 +08:00
sazc
97575bf1c6
Performance: BlockTile 256x128 optimizations enable 1500+ TFLOPS FP8 performance on the H800-SXM platform
2025-04-08 17:42:23 +08:00
Chenggang Zhao
b4ecf9c3ff
Fix TMA multicast bugs
2025-04-07 14:34:42 +08:00
Chenggang Zhao
bff5724ded
Code format
2025-04-07 09:32:43 +08:00