mirror of
https://github.com/graphdeco-inria/gaussian-splatting
synced 2025-04-08 23:14:40 +00:00
Empty torch cache after optimizer tensor replacement
Without emptying the cache I saw very high VRAM usage and could not train beyond 1 million Gaussians on 24 GB of VRAM; with this change I can train over 5 million Gaussians.
parent 491e17ab3e
commit 389bbe48a5
@@ -442,6 +442,8 @@ class GaussianModel:
         self._scaling = optimizable_tensors["scaling"]
         self._rotation = optimizable_tensors["rotation"]
 
+        torch.cuda.empty_cache()
+
         return optimizable_tensors
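As a minimal sketch of the idea behind this change (hypothetical helper, not the repository's exact code): when a parameter tensor held by an optimizer is swapped for a new one, the old tensor and its optimizer state can keep blocks alive in PyTorch's caching allocator; calling torch.cuda.empty_cache() afterwards releases the cached blocks back to the driver. The call is a safe no-op when CUDA has not been initialized, so the sketch also runs on CPU-only builds.

```python
import torch

def replace_param(optimizer, old_param, new_values):
    """Hypothetical helper: swap a parameter inside an optimizer,
    drop its stale state, and release cached CUDA memory."""
    new_param = torch.nn.Parameter(new_values)
    for group in optimizer.param_groups:
        group["params"] = [new_param if p is old_param else p
                           for p in group["params"]]
    # Drop optimizer state (e.g. Adam's exp_avg / exp_avg_sq) that still
    # references the old tensor, so it can actually be freed.
    optimizer.state.pop(old_param, None)
    # Return freed-but-cached blocks to the driver; no-op without CUDA.
    torch.cuda.empty_cache()
    return new_param

p = torch.nn.Parameter(torch.zeros(4))
opt = torch.optim.Adam([p], lr=1e-2)
p = replace_param(opt, p, torch.ones(8))
print(p.shape)
```

In the commit above the same cache-emptying step is placed right after the optimizer's tensors have been replaced, which is what allows the freed VRAM to be reused for additional Gaussians.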