This repository was archived by the owner on Nov 21, 2024. It is now read-only.
Democratizing rendering #6
Merged
This PR democratizes rendering with the following scripts:

- `obj_to_ppm.py`
- `torch_ppm.py`
- `memmap_to_layers.py`
`obj_to_ppm.py` and `torch_ppm.py` allow one to convert a segment mesh (`.obj`) into a per-pixel map saved as a numpy memory-mapped file. The conversion is very fast. When reading the flattened UVs, the centroid of each triangle face is batch-computed with torch, and a KDTree is built on these centroids. A grid of new points is then added in 2D at integer locations; these points will be the pixels of the rendered image. The KDTree is queried to quickly identify the M closest triangles to each point (M is the `--tri_batch` parameter), as sketched below.
In a batched and parallelized way, again using torch, the barycentric coordinates of every point with respect to each of its M candidate triangles are computed. If, for some (point, triangle) pair, the barycentric coordinates are all in [0, 1] and sum to 1, that pair is chosen. Still batched and parallelized with torch, the barycentric coordinates are then used to obtain the positions (and normals) in the 3D scroll volume by barycentric interpolation; a sketch of this test and interpolation is given below. Periodically (and automatically) the computed information for the batches is flushed into the ppm memmap.
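Not the actual implementation, just a minimal sketch of the barycentric test and interpolation, continuing from the previous snippet (`tri_uv`, `pixels`, `cand`) and assuming `tri_xyz` and `tri_n` are (F, 3, 3) torch tensors with the 3D positions and normals of each face's vertices (all names are placeholders):

```python
import torch

# gather the UV corners of every candidate triangle: (P, M, 3, 2)
corners = tri_uv[torch.from_numpy(cand)]
p = torch.from_numpy(pixels)[:, None, :]          # (P, 1, 2) query pixels

a, b, c = corners[..., 0, :], corners[..., 1, :], corners[..., 2, :]
v0, v1, v2 = b - a, c - a, p - a                  # edge vectors and point offset

# standard 2D barycentric solve; w0 + w1 + w2 = 1 by construction
d00 = (v0 * v0).sum(-1); d01 = (v0 * v1).sum(-1); d11 = (v1 * v1).sum(-1)
d20 = (v2 * v0).sum(-1); d21 = (v2 * v1).sum(-1)
denom = d00 * d11 - d01 * d01
w1 = (d11 * d20 - d01 * d21) / denom
w2 = (d00 * d21 - d01 * d20) / denom
w0 = 1.0 - w1 - w2
bary = torch.stack([w0, w1, w2], dim=-1)          # (P, M, 3)

# a (pixel, triangle) pair is valid when every coordinate lies in [0, 1]
eps = 1e-9
inside = (bary >= -eps).all(-1) & (bary <= 1 + eps).all(-1)   # (P, M)
first = inside.float().argmax(dim=1)              # pick one valid candidate per pixel
hit = inside.any(dim=1)

# barycentric interpolation of 3D positions (and, identically, normals)
rows = torch.arange(len(pixels))
chosen_face = torch.from_numpy(cand)[rows, first]             # (P,)
w = bary[rows, first]                                         # (P, 3)
xyz = (w[:, :, None] * tri_xyz[chosen_face]).sum(dim=1)       # (P, 3)
nrm = (w[:, :, None] * tri_n[chosen_face]).sum(dim=1)
xyz[~hit] = 0; nrm[~hit] = 0                      # pixels with no containing triangle stay empty
```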
`memmap_to_layers.py` is a modification of Julian's `ppm_to_layers.py`. It reads the saved ppm, stored as a numpy memmap, in batches, exploiting the slicing properties of a numpy array, and updates the rendered pixels. Since the operation is batched, one can render even a huge segment on a cheap laptop (mine is an i7 with 16 GB RAM and no GPU). At the moment, GPU compatibility isn't working since I could not test it locally, but the scripts can easily be adapted to make it work.
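Again only a hedged sketch of what such a batched read could look like, assuming the memmap stores 6 doubles per pixel in the order (x, y, z, nx, ny, nz) and that `volume` is a (Z, Y, X) array-like accessor for the scroll data; the layout, sizes, and names are placeholders:

```python
import numpy as np

# placeholder shape / path; the real values come from the saved ppm metadata
H, W = 2000, 1500
ppm = np.memmap("segment_ppm.npy", dtype=np.float64, mode="r", shape=(H, W, 6))

layer_offset = 0            # distance along the normal for this layer (0 = the surface itself)
layer = np.zeros((H, W), dtype=np.uint16)
rows_per_batch = 256        # tune to the available RAM

for r0 in range(0, H, rows_per_batch):
    r1 = min(r0 + rows_per_batch, H)
    batch = np.asarray(ppm[r0:r1])                          # only this slice is read from disk
    xyz = batch[..., :3] + layer_offset * batch[..., 3:]    # step along the interpolated normal
    valid = batch[..., :3].any(axis=-1)                     # pixels that never matched a triangle stay 0
    xi = np.round(xyz[..., 0]).astype(int)
    yi = np.round(xyz[..., 1]).astype(int)
    zi = np.round(xyz[..., 2]).astype(int)
    out = np.zeros((r1 - r0, W), dtype=np.uint16)
    out[valid] = volume[zi[valid], yi[valid], xi[valid]]    # `volume`: hypothetical (Z, Y, X) scroll accessor
    layer[r0:r1] = out
```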
Bonus
`ppm_writer.py` could be used to save the computed ppm in the `.ppm` format used by Virtual Cartographer; however, the logic of `obj_to_ppm.py` would have to be changed (I was using it in an older version, before switching to numpy memmap).
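Purely as an illustration, and with the caveat that the header keys and the 6-doubles-per-pixel layout are my assumption about the Virtual Cartographer `.ppm` format (check them against the actual spec), a minimal sketch of such a writer could look like this:

```python
import numpy as np

def write_vc_ppm(path, data):
    """data: (H, W, 6) float64 array of (x, y, z, nx, ny, nz) per pixel."""
    h, w, dim = data.shape
    # assumed header layout; verify keys, ordering, and terminator against the real format
    header = (f"width: {w}\n"
              f"height: {h}\n"
              f"dim: {dim}\n"
              "ordered: true\n"
              "type: double\n"
              "version: 1\n"
              "<>\n")
    with open(path, "wb") as f:
        f.write(header.encode("ascii"))
        data.astype("<f8").tofile(f)    # raw row-major little-endian doubles after the header
```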