mirror of
https://github.com/graphdeco-inria/gaussian-splatting
synced 2024-11-21 15:57:45 +00:00
Merge update readme
This commit is contained in:
commit 80916824cf
README.md
@@ -39,11 +39,7 @@ This research was funded by the ERC Advanced grant FUNGRAPH No 788065. The autho
We have limited resources for maintaining and updating the code. However, we have added a few new features since the original release, inspired by some of the excellent work many other researchers have been doing on 3DGS. We will add further features as our resources allow.
**Update of October 2024**: We integrated [training speed acceleration](#training-speed-acceleration) and made it compatible with [depth regularization](#depth-regularization), [anti-aliasing](#anti-aliasing) and [exposure compensation](#exposure-compensation).
**Update of August 2024**:
We have added/corrected the following features: [depth regularization](#depth-regularization) for training, [anti-aliasing](#anti-aliasing), and [exposure compensation](#exposure-compensation). We have also enhanced the SIBR real-time viewer by fixing bugs and adding features in the [Top View](#sibr-top-view), which allows visualization of input and user cameras.
**Update of Spring 2024**:
Orange Labs has kindly added [OpenXR support](#openxr-support) for VR viewing.
@@ -524,9 +520,19 @@ Then you can add the following parameter to use the sparse adam optimizer when r
To obtain better reconstructed scenes, we use depth maps as priors during optimization for each input image. This works best on untextured parts (e.g., roads) and can remove floaters. Several papers have used similar ideas to improve various aspects of 3DGS (e.g., [DepthRegularizedGS](https://robot0321.github.io/DepthRegGS/index.html), [SparseGS](https://formycat.github.io/SparseGS-Real-Time-360-Sparse-View-Synthesis-using-Gaussian-Splatting/), [DNGaussian](https://fictionarry.github.io/DNGaussian/)). The depth regularization we integrated is the one used in our [Hierarchical 3DGS](https://repo-sam.inria.fr/fungraph/hierarchical-3d-gaussians/) paper, applied to the original 3DGS. For some scenes (e.g., the DeepBlending scenes) it improves quality significantly; for others it makes only a small difference or can even be worse. For example results showing the potential benefit and statistics on quality, see [Stats for depth regularization](results.md).
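The core idea can be sketched as a per-pixel penalty between the depth rendered from the Gaussians and the monocular prior. Below is a toy NumPy illustration with made-up names; the repository's actual loss, weighting, and depth parameterization may differ:

```python
import numpy as np

def depth_prior_loss(rendered_depth, prior_depth, weight=1.0):
    """Toy L1 penalty between the depth rendered from the Gaussians and a
    monocular depth prior (illustrative; not the repository's exact loss)."""
    return weight * np.abs(rendered_depth - prior_depth).mean()

# On an untextured patch (e.g., a road) the photometric loss is ambiguous,
# but the depth prior still pulls the rendered surface toward a plausible depth.
rendered = np.full((4, 4), 0.52)  # depth rendered from the splats
prior = np.full((4, 4), 0.50)     # depth prior from a monocular estimator
loss = depth_prior_loss(rendered, prior)
```

Adding this term to the photometric loss is what lets the optimizer remove floaters in regions where color alone cannot disambiguate geometry.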
When training on a synthetic dataset, depth maps can be produced directly and require no further processing to be used in our method.
For real-world datasets, a depth map should be generated for each input image. To generate them, please do the following:
1. Clone [Depth Anything v2](https://github.com/DepthAnything/Depth-Anything-V2?tab=readme-ov-file#usage):
```
git clone https://github.com/DepthAnything/Depth-Anything-V2.git
```
2. Download the weights from [Depth-Anything-V2-Large](https://huggingface.co/depth-anything/Depth-Anything-V2-Large/resolve/main/depth_anything_v2_vitl.pth?download=true) and place them under `Depth-Anything-V2/checkpoints/`.
3. Generate the depth maps:
```
python Depth-Anything-V2/run.py --encoder vitl --pred-only --grayscale --img-path <path to input images> --outdir <output path>
```
4. Generate a `depth_params.json` file using:
```
python utils/make_depth_scale.py --base_dir <path to colmap> --depths_dir <path to generated depths>
```
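The alignment that the last step computes can be sketched as fitting a per-image scale and offset that maps the monocular depths onto the sparse depths of the COLMAP reconstruction. This is an illustrative NumPy sketch with invented names; the script's actual procedure may differ:

```python
import numpy as np

def fit_scale_offset(mono_depth, sparse_depth):
    """Least-squares scale/offset aligning monocular depth values to
    sparse COLMAP-derived depths at matched points.
    Illustrative only; not the repository's exact algorithm."""
    A = np.stack([mono_depth, np.ones_like(mono_depth)], axis=1)
    (scale, offset), *_ = np.linalg.lstsq(A, sparse_depth, rcond=None)
    return scale, offset

# Synthetic check: sparse depths are an exact affine map of the prior,
# so the fit should recover scale 2.0 and offset 0.05.
mono = np.array([0.1, 0.2, 0.4, 0.8])
sparse = 2.0 * mono + 0.05
s, o = fit_scale_offset(mono, sparse)
```

Storing one such scale/offset pair per image is what makes an arbitrary-scale monocular prior usable against a metrically consistent COLMAP scene.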