Mirror of https://github.com/deepseek-ai/FlashMLA, synced 2025-06-26 18:15:54 +00:00
Fix the FMA bug
Hi, I found a problem in scale_apply_exp2. The code comments also mention this issue: https://github.com/pytorch/pytorch/issues/121558. The problem is that the FFMA instruction produces results in flash attention that differ from a separate FADD and FMUL.

With a separate FMUL and FADD, the computation for the maximum element x_i = max(x) is round_fp32(x_i * scale) - round_fp32(x_i * scale), which is exactly 0. With FFMA, the computation becomes x_i * scale - round_fp32(x_i * scale), where the product inside the fused instruction is not rounded to fp32 first, so the result is generally nonzero. Although FFMA is actually the more accurate computation, its values diverge from the unfused path.

The issue can be reproduced by changing the initialization of q and k, after which the final outputs are all 0:

    q = torch.full((b, s_q, h_q, d), 133120.0)
    blocked_k = torch.full((block_table.numel(), block_size, h_kv, d), 133120.0)

If UNFUSE_FMA is defined, the problem is alleviated, but the cal-diff test still does not pass. I am not sure whether the remaining gap is purely an accuracy issue, but I think it is necessary to fix the FMA bug first.
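As a standalone illustration of this rounding difference (this is not FlashMLA code; 133120.0f is reused from the repro above and the 0.1f scale is an arbitrary example), a small CUDA program can compare the two paths with explicit rounding-mode intrinsics:

    #include <cstdio>

    // Standalone illustration of the discrepancy described above (not FlashMLA code).
    // For the maximum element, a rounded FMUL followed by an FADD gives exactly 0,
    // while an explicit FFMA keeps the product x * scale at full precision and
    // therefore returns the fp32 rounding error of that product instead of 0.
    __global__ void ffma_vs_unfused(float x, float scale, float* out) {
        float x_scaled = __fmul_rn(x, scale);        // product rounded to fp32
        out[0] = __fadd_rn(x_scaled, -x_scaled);     // separated path: exactly 0
        out[1] = __fmaf_rn(x, scale, -x_scaled);     // fused path: rounding error of x * scale
    }

    int main() {
        float* d_out;
        cudaMalloc(&d_out, 2 * sizeof(float));
        // 133120.0f mirrors the fill value from the repro above; 0.1f is an arbitrary
        // stand-in for the softmax scale, chosen so x * scale is not exactly representable.
        ffma_vs_unfused<<<1, 1>>>(133120.0f, 0.1f, d_out);
        float h_out[2];
        cudaMemcpy(h_out, d_out, 2 * sizeof(float), cudaMemcpyDeviceToHost);
        std::printf("fmul+fadd: %.9e\n", h_out[0]);  // prints 0.000000000e+00
        std::printf("ffma     : %.9e\n", h_out[1]);  // generally nonzero
        cudaFree(d_out);
        return 0;
    }

The separated path is exactly 0 for the maximum element, so exp2 of it is exactly 1; the fused path instead yields the rounding error of x * scale, which is what makes the two code paths disagree.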
parent b31bfe72a8
commit d626421fff
1 changed file: setup.py (+1, -0)
@@ -65,6 +65,7 @@ ext_modules.append(
     "-std=c++17",
     "-DNDEBUG",
     "-D_USE_MATH_DEFINES",
+    "-DUNFUSE_FMA",
     "-Wno-deprecated-declarations",
     "-U__CUDA_NO_HALF_OPERATORS__",
     "-U__CUDA_NO_HALF_CONVERSIONS__",
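The diff above only adds -DUNFUSE_FMA to the nvcc flags. Below is a minimal, hypothetical sketch (not the actual FlashMLA kernel) of how such a guard is typically consumed in the softmax scaling step named in the commit message, assuming the usual exp2f(x * scale - max_scaled) form of scale_apply_exp2:

    #include <cstdio>

    // Hypothetical sketch of how a -DUNFUSE_FMA guard can be consumed in a softmax
    // scaling step such as scale_apply_exp2 (the real FlashMLA kernel may differ).
    // __fmul_rn forces a rounded fp32 multiply that the compiler will not contract
    // into an FFMA, so the row maximum maps to exp2f(0) == 1 exactly.
    __global__ void scale_apply_exp2_sketch(const float* x, float* out,
                                            float scale, float max_scaled, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
    #ifdef UNFUSE_FMA
        out[i] = exp2f(__fmul_rn(x[i], scale) - max_scaled);
    #else
        out[i] = exp2f(x[i] * scale - max_scaled);  // the compiler may emit FFMA here
    #endif
    }

    int main() {
        const int n = 4;
        float h_x[n] = {133120.0f, 1.0f, 2.0f, 133120.0f};  // example row; 133120.0f is the max
        float scale = 0.1f;                                  // arbitrary stand-in scale
        float max_scaled = 133120.0f * scale;                // rounded fp32 product of the row max

        float *d_x, *d_out;
        cudaMalloc(&d_x, n * sizeof(float));
        cudaMalloc(&d_out, n * sizeof(float));
        cudaMemcpy(d_x, h_x, n * sizeof(float), cudaMemcpyHostToDevice);

        scale_apply_exp2_sketch<<<1, n>>>(d_x, d_out, scale, max_scaled, n);

        float h_out[n];
        cudaMemcpy(h_out, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);
        for (int i = 0; i < n; ++i)
            std::printf("out[%d] = %.9f\n", i, h_out[i]);  // with UNFUSE_FMA, out[0] and out[3] are exactly 1

        cudaFree(d_x);
        cudaFree(d_out);
        return 0;
    }

Splitting the expression in source alone is generally not enough, since the compiler can re-contract a separate multiply and subtract back into an FFMA; a rounding-mode intrinsic like __fmul_rn prevents that contraction.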