From 9e3afb139f666a33cd8535ee963701a76472d44e Mon Sep 17 00:00:00 2001
From: Blagoy Simandoff <54207300+blagoySimandov@users.noreply.github.com>
Date: Sat, 30 Sep 2023 19:31:38 +0100
Subject: [PATCH] Minor typographical and clarity changes

---
 README.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index a0e39a3..d48d931 100644
--- a/README.md
+++ b/README.md
@@ -63,7 +63,7 @@ The codebase has 4 main components:
 - An OpenGL-based real-time viewer to render trained models in real-time.
 - A script to help you turn your own images into optimization-ready SfM data sets
 
-The components have different requirements w.r.t. both hardware and software. They have been tested on Windows 10 and Ubuntu Linux 22.04. Instructions for setting up and running each of them are found in the sections below.
+The components have different requirements with respect to both hardware and software. They have been tested on Windows 10 and Ubuntu Linux 22.04. Instructions for setting up and running each of them are found in the sections below.
 
 ## Optimizer
 
@@ -196,7 +196,7 @@ python train.py -s 
 Note that similar to MipNeRF360, we target images at resolutions in the 1-1.6K pixel range. For convenience, arbitrary-size inputs can be passed and will be automatically resized if their width exceeds 1600 pixels. We recommend to keep this behavior, but you may force training to use your higher-resolution images by setting ```-r 1```.
 
-The MipNeRF360 scenes are hosted by the paper authors [here](https://jonbarron.info/mipnerf360/). You can find our SfM data sets for Tanks&Temples and Deep Blending [here](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/datasets/input/tandt_db.zip). If you do not provide an output model directory (```-m```), trained models are written to folders with randomized unique names inside the ```output``` directory. At this point, the trained models may be viewed with the real-time viewer (see further below).
+The MipNeRF360 scenes are hosted by the paper authors, [here](https://jonbarron.info/mipnerf360/). You can find our SfM data sets for Tanks&Temples and Deep Blending [here](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/datasets/input/tandt_db.zip). If you do not provide an output model directory (```-m```), trained models are written to folders with randomized unique names inside the ```output``` directory. At this point, the trained models may be viewed with the real-time viewer (see further below).
 
 ### Evaluation
 By default, the trained models use all available images in the dataset. To train them while withholding a test set for evaluation, use the ```--eval``` flag. This way, you can render training/test sets and produce error metrics as follows:
@@ -489,9 +489,9 @@ python convert.py -s --skip_matching [--resize] #If not resizing, Ima
 - *I'm on Windows and I can't manage to build the submodules, what do I do?* Consider following the steps in the excellent video tutorial [here](https://www.youtube.com/watch?v=UXtuigy_wYc), hopefully they should help. The order in which the steps are done is important! Alternatively, consider using the linked Colab template.
 
-- *It still doesn't work. It says something about ```cl.exe```. What do I do?* User Henry Pearce found a workaround. You can you try adding the visual studio path to your environment variables (your version number might differ);
+- *It still doesn't work. It says something about ```cl.exe```. What do I do?* User Henry Pearce found a workaround. You can try adding the Visual Studio path to your environment variables (your version number might differ);
 ```C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\Hostx64\x64```
-Then make sure you start a new conda prompt and cd to your repo location and try this;
+Make sure you start a new conda prompt, navigate to your repo location, and then try this;
 ```
 conda activate gaussian_splatting
 cd /gaussian-splatting