From 424012cd0dec6d7227887f27bd233ab186997273 Mon Sep 17 00:00:00 2001
From: bkerbl
Date: Sun, 9 Jul 2023 14:36:02 +0200
Subject: [PATCH] Changes to README, rescale from 1.6mpix

---
 README.md                              | 11 ++++++++---
 submodules/diff-gaussian-rasterization |  2 +-
 utils/camera_utils.py                  |  6 +++---
 3 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/README.md b/README.md
index a9881e8..084b4ca 100644
--- a/README.md
+++ b/README.md
@@ -3,7 +3,7 @@ Bernhard Kerbl*, Georgios Kopanas*, Thomas Leimkühler, George Drettakis (* indi
 | [Webpage](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/) | [Full Paper](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/3d_gaussian_splatting_high.pdf) | [Video](https://youtu.be/T_kXY43VZnk) | [Other GRAPHDECO Publications](http://www-sop.inria.fr/reves/publis/gdindex.php) | [FUNGRAPH project page](https://fungraph.inria.fr)
-[T&T+DB Datasets (650MB)](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/datasets/input/tandt_db.zip) | [Pre-trained Models (14 GB)](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/datasets/pretrained/models.zip) | [Viewer Binaries for Windows (60MB)](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/binaries/viewers.zip) | [Evaluation Renderings TODO](TODO) |
+[T&T+DB Datasets (650MB)](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/datasets/input/tandt_db.zip) | [Pre-trained Models (14 GB)](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/datasets/pretrained/models.zip) | [Viewer Binaries for Windows (60MB)](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/binaries/viewers.zip) | [Evaluation Images](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/evaluation/images.zip) |
 ![Teaser image](assets/teaser.png)
 
 This repository contains the code associated with the paper "3D Gaussian Splatting for Real-Time Radiance Field Rendering", which can be found [here](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/). We further provide the reference images used to create the error metrics reported in the paper, as well as recently created, pre-trained models.
@@ -114,7 +114,7 @@ python train.py -s
   #### --eval
   Add this flag to use a MipNeRF360-style training/test split for evaluation.
   #### --resolution / -r
-  Specifies resolution of the loaded images before training. If provided ```1, 2, 4``` or ```8```, uses original, 1/2, 1/4 or 1/8 resolution, respectively. For all other values, rescales the width to the given number while maintaining image aspect. **If not set and input image width exceeds 1.5 megapixels, inputs are automatically rescaled to this target.**
+  Specifies resolution of the loaded images before training. If provided ```1, 2, 4``` or ```8```, uses original, 1/2, 1/4 or 1/8 resolution, respectively. For all other values, rescales the width to the given number while maintaining image aspect. **If not set and input image width exceeds 1.6 megapixels, inputs are automatically rescaled to this target.**
   #### --white_background / -w
   Add this flag to use white background instead of black (default), e.g., for evaluation of NeRF Synthetic dataset.
   #### --sh_degree
@@ -169,7 +169,7 @@ python train.py -s
-Note that similar to MipNeRF360, we target images at resolutions in the 1-1.5 megapixel range. For convenience, arbitrary-size inputs can be passed and will be automatically resized if their width exceeds 1500 pixels. We recommend to keep this behavior, but you may force training to use your higher-resolution images by specifying ```-r 1```.
+Note that similar to MipNeRF360, we target images at resolutions in the 1-1.6 megapixel range. For convenience, arbitrary-size inputs can be passed and will be automatically resized if their width exceeds 1600 pixels. We recommend to keep this behavior, but you may force training to use your higher-resolution images by specifying ```-r 1```.
 
 The MipNeRF360 scenes are hosted by the paper authors [here](https://jonbarron.info/mipnerf360/). You can find our SfM data sets for Tanks&Temples and Deep Blending [here](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/datasets/input/tandt+db.zip). If you do not provide an output model directory (```-m```), trained models are written to folders with randomized unique names inside the ```output``` directory. At this point, the trained models may be viewed with the real-time viewer (see further below).
 
@@ -235,6 +235,11 @@ In the current version, this process takes about 7h on our reference machine con
 python full_eval.py -o <output directory> --skip_training -m360 <mipnerf360 folder> -tat <tanks and temples folder> -db <deep blending folder>
 ```
 
+If you want to compute the metrics on our paper's [evaluation images](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/evaluation/images.zip), you can also skip rendering. In this case it is not necessary to provide the source datasets. You can compute metrics for multiple image sets at a time.
+```shell
+python full_eval.py -m <directory with evaluation images>/garden ... --skip_training --skip_rendering
+```
+
 Command Line Arguments for full_eval.py
diff --git a/submodules/diff-gaussian-rasterization b/submodules/diff-gaussian-rasterization
index 3a07ac2..4aedd82 160000
--- a/submodules/diff-gaussian-rasterization
+++ b/submodules/diff-gaussian-rasterization
@@ -1 +1 @@
-Subproject commit 3a07ac2e39b9ba7043ffc8bb98397c3ba6e2532d
+Subproject commit 4aedd8226f7257935891049f5a378b0e21d0aa37
diff --git a/utils/camera_utils.py b/utils/camera_utils.py
index f8ab276..0d86a29 100644
--- a/utils/camera_utils.py
+++ b/utils/camera_utils.py
@@ -23,13 +23,13 @@ def loadCam(args, id, cam_info, resolution_scale):
         resolution = round(orig_w/(resolution_scale * args.resolution)), round(orig_h/(resolution_scale * args.resolution))
     else: # should be a type that converts to float
         if args.resolution == -1:
-            if orig_w > 1500:
+            if orig_w > 1600:
                 global WARNED
                 if not WARNED:
-                    print("[ INFO ] Encountered quite large input images (>1.5Mpix), rescaling to 1.5Mpix. "
+                    print("[ INFO ] Encountered quite large input images (>1.6Mpix), rescaling to 1.6Mpix. "
                         "If this is not desired, please explicitly specify '--resolution/-r' as 1")
                     WARNED = True
-                global_down = orig_w / 1500
+                global_down = orig_w / 1600
             else:
                 global_down = 1
         else:
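
For reference, below is a minimal, self-contained Python sketch of the resizing rule this patch changes (the automatic width cap moving from 1500 to 1600 pixels). It is an illustration only, not the repository's actual `loadCam` implementation: the helper name `pick_target_resolution` and the way `global_down` is combined with `resolution_scale` at the end are assumptions based on the hunk above and the README wording.

```python
# Hypothetical sketch of the resolution selection behavior after this patch.
# Not the repository's loadCam; names and the final scale combination are assumed.

def pick_target_resolution(orig_w, orig_h, resolution=-1, resolution_scale=1.0):
    """Return the (width, height) an input image would be resized to before training.

    resolution: value of --resolution/-r; -1 means "not set".
    resolution_scale: extra multi-scale factor, 1.0 in the default pipeline.
    """
    if resolution in [1, 2, 4, 8]:
        # Fixed fractions of the original size (original, 1/2, 1/4, 1/8).
        return (round(orig_w / (resolution_scale * resolution)),
                round(orig_h / (resolution_scale * resolution)))

    if resolution == -1:
        # Auto mode: cap the width at 1600 px (the new threshold in this patch),
        # otherwise keep the original size.
        global_down = orig_w / 1600 if orig_w > 1600 else 1
    else:
        # Any other value is interpreted as a target width (aspect preserved).
        global_down = orig_w / resolution

    scale = global_down * resolution_scale
    return (int(orig_w / scale), int(orig_h / scale))


if __name__ == "__main__":
    # A 4000x3000 input with -r unset is capped to 1600 px width.
    print(pick_target_resolution(4000, 3000))      # (1600, 1200)
    # The same input with -r 1 keeps its full resolution.
    print(pick_target_resolution(4000, 3000, 1))   # (4000, 3000)
```

As in the patched `loadCam`, the automatic cap applies to the image width; any other `-r` value is treated as a target width, per the README's `--resolution` description.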