The training pipeline relies on PIL to resize the input images and extracts the resized alpha
channel to mask the rendered image during training. Because PIL pre-multiplies the resized RGB
with the resized alpha, training produces different Gaussian points depending on
whether the input gets resized or not. Moreover, the alpha channel extracted from PIL
is not perfectly binarized, which causes floaters around the edges.
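
A minimal sketch of one possible workaround, assuming the loader uses PIL: resize the RGB and alpha channels separately so the RGB is never pre-multiplied, and binarize the resized alpha with a threshold before using it as a mask. The helper name, threshold value, and return convention here are illustrative assumptions, not the project's actual code.

```python
import numpy as np
from PIL import Image

def load_resized_rgb_and_mask(path, size, alpha_threshold=0.5):
    """Resize RGB and alpha separately, then binarize the alpha.

    Resizing the channels independently avoids the pre-multiplication
    applied when an RGBA image is resized as a whole, and thresholding
    the interpolated alpha removes the soft edge that produces floaters.
    """
    img = Image.open(path).convert("RGBA")
    r, g, b, a = img.split()
    rgb = Image.merge("RGB", (r, g, b)).resize(size, Image.BILINEAR)
    alpha = a.resize(size, Image.BILINEAR)

    rgb = np.asarray(rgb, dtype=np.float32) / 255.0
    mask = (np.asarray(alpha, dtype=np.float32) / 255.0) > alpha_threshold
    return rgb, mask.astype(np.float32)
```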
* Provide --data_on_cpu option to save VRAM for training
  When there are many training images, such as in a large scene, most of the VRAM is used to store training data; --data_on_cpu keeps that data on the CPU, which reduces VRAM usage and makes it possible to train on GPUs with less memory (see the sketch after this list).
* Fix the effect of data_on_cpu on the default mask
* Rename --data_on_cpu to --data_device
* Update README
* Format warning messages
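
A minimal sketch of how keeping training data off the GPU might work, assuming the training images are PyTorch tensors. The flag name --data_device follows the rename above, but the parser setup and the `Camera` class shown here are illustrative assumptions, not the project's actual code.

```python
import argparse
import torch

parser = argparse.ArgumentParser()
# Where to keep the loaded training images: "cuda" keeps everything in VRAM,
# "cpu" keeps images in host RAM and copies each one to the GPU on demand.
parser.add_argument("--data_device", type=str, default="cuda",
                    help='Device used to store training images ("cuda" or "cpu")')
args = parser.parse_args()

class Camera:
    """Holds one training image; its storage device is set by --data_device."""
    def __init__(self, image: torch.Tensor, data_device: str):
        self.original_image = image.clamp(0.0, 1.0).to(data_device)

    def get_image_for_loss(self) -> torch.Tensor:
        # Move the image to the GPU only when it is needed for the loss,
        # so VRAM holds at most one training image at a time.
        return self.original_image.cuda(non_blocking=True)
```

With `--data_device cpu`, only the image currently being compared against the render occupies VRAM; the trade-off is a host-to-device copy per iteration, which is usually far cheaper than running out of GPU memory on large scenes.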