Mirror of https://github.com/graphdeco-inria/gaussian-splatting, synced 2024-11-25 05:16:33 +00:00
Typo fix, better explanation in readme
This commit is contained in:
parent af86519896
commit b81fc19c34
@@ -349,7 +349,7 @@ After extracting or installing the viewers, you may run the compiled ```SIBR_gaussianViewer_app```
 ./<SIBR install dir>/bin/SIBR_gaussianViewer_app -m <path to trained model>
 ```
 
-It should suffice to provide the ```-m``` parameter pointing to a trained model directory. Alternatively, you can specify an override location for training input data using ```-s```. To use a specific resolution other than the auto-chosen one, specify ```--rendering-size <width> <height>```.
+It should suffice to provide the ```-m``` parameter pointing to a trained model directory. Alternatively, you can specify an override location for training input data using ```-s```. To use a specific resolution other than the auto-chosen one, specify ```--rendering-size <width> <height>```. Combine it with ```--force-aspect-ratio``` if you want the exact resolution and don't mind image distortion.
 
 **To unlock the full frame rate, please disable V-Sync on your machine and also in the application (Menu → Display).**
 
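The changed line above introduces ```--force-aspect-ratio```. A minimal usage sketch combining it with the flags already documented in the hunk (the 1280×720 resolution and the placeholder paths are illustrative, not taken from the commit):

```shell
# Request an exact 1280x720 render, accepting possible image distortion.
# <SIBR install dir> and <path to trained model> are placeholders.
./<SIBR install dir>/bin/SIBR_gaussianViewer_app -m <path to trained model> \
    --rendering-size 1280 720 --force-aspect-ratio
```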
@@ -384,7 +384,7 @@ Alternatively, you can use the optional parameters ```--colmap_executable``` and
 ## FAQ
 - *Where do I get data sets, e.g., those referenced in ```full_eval.py```?* The MipNeRF360 data set is provided by the authors of the original paper on the project site. Note that two of the data sets cannot be openly shared and require you to consult the authors directly. For Tanks&Temples and Deep Blending, please use the download links provided at the top of the page.
 
-- *I don't have 24 GB of VRAM for training, what do I do?* The VRAM consumption is determined by the number of points that are being optimized, which increases over time. If you only want to train to 7k iterations, you will need significantly less. To do the full training routine and avoid running out of memory, you can increase the ```---densify_grad_threshold```, ```--densification_interval``` or reduce the value of ```--densify_until_iter```. Note however that this will affect the quality of the result. Also try setting ```--test_iterations``` to ```-1``` to avoid memory spikes during testing. If ```--densify_grad_threshold``` is very high, no densification should occur and training should complete if the scene itself loads successfully.
+- *I don't have 24 GB of VRAM for training, what do I do?* The VRAM consumption is determined by the number of points that are being optimized, which increases over time. If you only want to train to 7k iterations, you will need significantly less. To do the full training routine and avoid running out of memory, you can increase the ```--densify_grad_threshold```, ```--densification_interval``` or reduce the value of ```--densify_until_iter```. Note however that this will affect the quality of the result. Also try setting ```--test_iterations``` to ```-1``` to avoid memory spikes during testing. If ```--densify_grad_threshold``` is very high, no densification should occur and training should complete if the scene itself loads successfully.
 
 - *24 GB of VRAM for reference quality training is still a lot! Can't we do it with less?* Yes, most likely. By our calculations it should be possible with **way** less memory (~8GB). If we can find the time we will try to achieve this. If some PyTorch veteran out there wants to tackle this, we look forward to your pull request!
 
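For the low-VRAM FAQ entry above, the mentioned flags could be combined roughly as follows. This is a sketch assuming the repository's ```train.py``` entry point; the concrete values are illustrative assumptions, not tuned recommendations from the commit:

```shell
# Hypothetical low-memory run: raise the densification threshold, densify less
# often, stop densifying earlier, and skip in-training testing.
# <path to dataset> is a placeholder; flag values are assumptions.
python train.py -s <path to dataset> \
    --densify_grad_threshold 0.0005 \
    --densification_interval 200 \
    --densify_until_iter 10000 \
    --test_iterations -1
```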