Empty torch cache after optimizer tensor replacement

Without emptying the cache, VRAM usage was very high and I could not train beyond 1 million Gaussians with 24 GB of VRAM. With this change I can train over 5 million Gaussians.
DerThomy 2024-12-22 13:45:05 +01:00 committed by GitHub
parent 491e17ab3e
commit 389bbe48a5
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194


@@ -442,6 +442,8 @@ class GaussianModel:
         self._scaling = optimizable_tensors["scaling"]
         self._rotation = optimizable_tensors["rotation"]
+
+        torch.cuda.empty_cache()
         return optimizable_tensors
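The reason the call helps: when the old, larger parameter tensors are replaced in the optimizer, PyTorch's caching allocator keeps their freed blocks reserved for reuse, so `nvidia-smi` (and peak VRAM) stays high. `torch.cuda.empty_cache()` returns those unreferenced cached blocks to the driver. A minimal sketch of the pattern, assuming a standard Adam optimizer; `replace_param_in_optimizer` and all names are illustrative, not the repo's actual helper:

```python
import torch

def replace_param_in_optimizer(optimizer, old_param, new_tensor):
    """Swap a parameter for a smaller tensor, carrying over sliced
    Adam state, then release cached GPU blocks back to the driver."""
    new_param = torch.nn.Parameter(new_tensor.clone())
    n = new_param.shape[0]
    for group in optimizer.param_groups:
        for i, p in enumerate(group["params"]):
            if p is old_param:
                state = optimizer.state.pop(p, {})
                group["params"][i] = new_param
                if state:
                    # keep only the moments for the surviving rows
                    state["exp_avg"] = state["exp_avg"][:n].clone()
                    state["exp_avg_sq"] = state["exp_avg_sq"][:n].clone()
                    optimizer.state[new_param] = state
    # The old tensors are now unreferenced, but their memory still sits
    # in PyTorch's caching allocator; hand it back to the CUDA driver.
    torch.cuda.empty_cache()  # safe no-op on CPU-only runs
    return new_param

# usage: prune a 1000-point parameter down to 500 points
p = torch.nn.Parameter(torch.randn(1000, 3))
opt = torch.optim.Adam([p], lr=1e-3)
p.sum().backward()
opt.step()  # populates exp_avg / exp_avg_sq state
smaller = replace_param_in_optimizer(opt, p, p.detach()[:500])
print(smaller.shape)  # torch.Size([500, 3])
```

Without the final `empty_cache()`, repeated densify/prune cycles accumulate cached blocks sized for the old tensors, which is consistent with the VRAM blow-up described above.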