
Initial public release

pmh47 committed Aug 16, 2018 (commit 81c529d)
LICENSE
Copyright 2017-2018 Paul Henderson

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

README.md

# DIRT: a fast Differentiable Renderer for TensorFlow

DIRT is a library for TensorFlow that provides operations for rendering 3D meshes.
It supports computing derivatives through geometry, lighting, and other parameters.
DIRT is very fast: it uses OpenGL for rasterisation, running on the GPU, which allows
lightweight interoperation with CUDA.


## Citation

If you use DIRT in your research, you should cite: *Learning to Generate and Reconstruct 3D Meshes with only 2D Supervision* (P. Henderson and V. Ferrari, BMVC 2018).

The appropriate bibtex entry is:
```
@inproceedings{henderson18bmvc,
  title={Learning to Generate and Reconstruct 3D Meshes with only 2D Supervision},
  author={Paul Henderson and Vittorio Ferrari},
  booktitle={British Machine Vision Conference (BMVC)},
  year={2018}
}
```


## Why is DIRT useful?

Drawing 3D (or 2D) shapes *differentiably* is challenging in TensorFlow. For example, you could create a tensor containing a white square on a black background using the following:
```python
import tensorflow as tf
canvas_width, canvas_height = 128, 128
centre_x, centre_y = 32, 64
square_size = 16
# Pixel coordinate grids, then a hard (boolean) inside/outside test per pixel
xs, ys = tf.meshgrid(tf.range(canvas_width), tf.range(canvas_height))
x_in_range = tf.less_equal(tf.abs(xs - centre_x), square_size // 2)
y_in_range = tf.less_equal(tf.abs(ys - centre_y), square_size // 2)
pixels = tf.cast(tf.logical_and(x_in_range, y_in_range), tf.float32)
```
However, if you calculate gradients of the pixels with respect to `centre_x` and `centre_y`, they will always be zero -- whereas for most use-cases, they should be non-zero at the boundary of the shape.
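To see this concretely, here is a small NumPy sketch, independent of DIRT and TensorFlow, that estimates the derivative of the total image intensity with respect to `centre_x` by finite differences (the function name is just for illustration):

```python
import numpy as np

def hard_square(centre_x, centre_y, size=16, width=128, height=128):
    # Non-differentiable rasterisation: a hard inside/outside test per pixel
    xs, ys = np.meshgrid(np.arange(width), np.arange(height))
    inside = (np.abs(xs - centre_x) <= size / 2.) & (np.abs(ys - centre_y) <= size / 2.)
    return inside.astype(np.float32)

# Central finite difference of the total intensity w.r.t. centre_x
eps = 1.e-3
grad = (hard_square(32. + eps, 64.).sum() - hard_square(32. - eps, 64.).sum()) / (2. * eps)
# grad is exactly zero: a tiny shift of the square changes no pixel values
```

The same vanishing derivative afflicts the TensorFlow version above, which is why a differentiable rasteriser is needed.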

DIRT provides a single TensorFlow operation, `rasterise`, that renders shapes differentiably. Moreover, it includes helper code that supports 3D projection, lighting, etc.
This allows full 2D or 3D scenes to be assembled directly in TensorFlow, with gradients flowing through the geometry, lighting and surface parameters.

Using DIRT, the above example becomes:
```python
import tensorflow as tf
import dirt
canvas_width, canvas_height = 128, 128
centre_x, centre_y = 32, 64
square_size = 16
# Build the square as four vertices in screen space
square_vertices = tf.constant([[0, 0], [0, 1], [1, 1], [1, 0]], dtype=tf.float32) * square_size - square_size / 2.
square_vertices += [centre_x, centre_y]
# Transform to homogeneous coordinates in clip space
square_vertices = square_vertices * 2. / [canvas_width, canvas_height] - 1.
square_vertices = tf.concat([square_vertices, tf.zeros([4, 1]), tf.ones([4, 1])], axis=1)
pixels = dirt.rasterise(
    vertices=square_vertices,
    faces=[[0, 1, 2], [0, 2, 3]],
    vertex_colors=tf.ones([4, 1]),
    background=tf.zeros([canvas_height, canvas_width, 1]),
    height=canvas_height, width=canvas_width, channels=1
)[:, :, 0]
```


## Requirements

- an Nvidia GPU; the earliest drivers we have tested with are v367
- Linux; we have only tested on Ubuntu, but other distributions should work
- a GPU-enabled install of TensorFlow, version 1.4 or later recommended
- python 2.7.9 or newer (python3 has not been tested)
- cmake 3.8 or newer
- gcc 4.9 or newer


## Installation

**Before** installing, you should activate a virtualenv with `tensorflow-gpu` installed (or ensure your system python has that package), as DIRT will use this to search for appropriate TensorFlow headers during installation.

Simply clone this repository, then install with pip:
```
git clone https://github.com/pmh47/DIRT.git
cd DIRT
pip install .
```

If you plan to modify the DIRT code, you may prefer to install in development mode:
```
cd DIRT
mkdir build ; cd build
cmake ../csrc
make
cd ..
pip install -e .
```

To sanity-check your build, run `python tests/square_test.py`, which should produce the output `successful: all pixels agree`.

#### Troubleshooting:

- You should ensure that libGL and libEGL are in a location on `LD_LIBRARY_PATH`, and that they are the versions shipped with your Nvidia driver. In particular, if you have installed Mesa, it may have overwritten libGL with its own version, which will not work with DIRT

- You should ensure that compute + graphics mode is enabled (through `nvidia-smi`) for your GPU


## Usage

A simple 2D example was given above.
More sophisticated examples rendering 3D meshes are in the `samples` folder.

DIRT uses OpenGL for rasterisation, and uses OpenGL conventions for coordinate systems. In particular, the coordinates passed to `rasterise` are in OpenGL clip space, and the matrix helper functions assume that the camera points along the *negative* z-axis in world space.
The only exception is that rasterised images follow the TensorFlow convention of having the top row first.
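As a quick illustration of these conventions, the following NumPy sketch (mirroring the conversion done in the 2D example above; the function name is illustrative, not part of DIRT's API) maps pixel coordinates to homogeneous clip-space coordinates with z = 0:

```python
import numpy as np

def pixels_to_clip(xy, width, height):
    # Map pixel coordinates to OpenGL clip space; DIRT's output images then
    # follow the TensorFlow convention of having the top row first
    xy = np.asarray(xy, dtype=np.float32)
    ndc = xy * 2. / np.float32([width, height]) - 1.
    zw = np.tile(np.float32([0., 1.]), (len(ndc), 1))  # append z = 0, w = 1
    return np.concatenate([ndc, zw], axis=1)

corners = pixels_to_clip([[0, 0], [128, 128]], 128, 128)
# corners maps the image corners to [-1, -1, 0, 1] and [1, 1, 0, 1]
```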

DIRT can be used in *direct* or *deferred* shading modes.
Direct shading uses the rasterise operation directly to produce the final pixels, with simple Gouraud shading: lighting calculations are performed per-vertex before rasterisation, and colours are interpolated linearly between vertices in 3D space.
This is very efficient and simple to work with, but limits certain lighting effects (e.g. specular highlights) and does not allow texturing.
Deferred shading uses the rasterise operation to generate a G-buffer that captures the scene geometry at each pixel (typically the underlying vertex location and normal); lighting calculations are then performed per-pixel in a second pass.
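For intuition, here is a toy NumPy sketch of the per-pixel second pass in deferred mode, assuming a G-buffer that stores a normal per pixel; the array names and the Lambertian model are illustrative, not DIRT's API:

```python
import numpy as np

# Hypothetical G-buffer: per-pixel world-space normals and a coverage mask
height, width = 4, 4
normals = np.zeros([height, width, 3], dtype=np.float32)
normals[..., 2] = 1.                        # every pixel faces the camera
mask = np.ones([height, width], dtype=np.float32)

light_direction = np.float32([0., 0., 1.])  # unit vector towards the light
albedo = 0.8

# Second pass: per-pixel diffuse (Lambertian) shading from the G-buffer
shaded = albedo * np.clip(np.einsum('hwc,c->hw', normals, light_direction), 0., None) * mask
```

Because the lighting is evaluated at every pixel rather than at vertices, effects such as sharp specular highlights and texturing become possible.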


## How does DIRT work?

#### Theory

DIRT uses filter-based derivatives, inspired by OpenDR (Loper and Black, ECCV 2014).
It makes considerable effort to return correctly-behaving derivatives even in cases of self-occlusion, where other differentiable renderers can fail.
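As a rough picture of the filter-based idea (a simplified sketch, not DIRT's actual implementation): for a pure translation, the image satisfies I(x; c) = I(x - c), so the derivative with respect to the shape's position can be approximated by a spatial filter applied to the rendered image itself:

```python
import numpy as np

image = np.zeros([8, 8], dtype=np.float32)
image[2:6, 2:6] = 1.                        # a rendered white square

# Spatial central difference along x; by the chain rule for a translation,
# d(image)/d(centre_x) = -d(image)/d(x), which is non-zero only at the
# square's left and right boundaries -- exactly where a useful gradient lives
d_image_d_centre_x = -np.gradient(image, axis=1)
```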


#### Implementation

DIRT uses OpenGL for rasterisation, as this is fast, accurate, and very mature.
We use Nvidia's OpenGL / CUDA interop to allow the vertices and pixels to remain on the same GPU both for processing by TensorFlow and for rasterisation, thus minimising copying overhead compared with other approaches.
To avoid having to create an on-screen context for rendering, we use an Nvidia extension to EGL that allows creating an OpenGL context bound to a GPU but not to a physical display.


## Alternatives to DIRT

Several other differentiable renderers have been described and released in recent years:

- [OpenDR](https://github.com/mattloper/opendr) (Loper and Black, ECCV 2014) supports Gouraud shading using Mesa CPU-based rendering, and uses filter-based derivatives similar to DIRT. It uses its own custom automatic differentiation framework written in python, and hence does not integrate smoothly with TensorFlow

- [Neural 3D Mesh Renderer](https://github.com/hiroharu-kato/neural_renderer) (Kato et al., CVPR 2018) supports similar functionality to DIRT, using a slightly different formulation for the approximate derivatives, but implements a custom rasterisation operation, rather than using OpenGL. It integrates with Chainer, but not TensorFlow (a PyTorch re-implementation is also available)

- [tf_mesh_renderer](https://github.com/google/tf_mesh_renderer) (Genova et al., CVPR 2018) similarly uses custom rendering (on the CPU in this case), but integrates directly with TensorFlow


## Contributing

Pull requests welcome!

CMakeLists.txt
cmake_minimum_required(VERSION 3.8) # 3.8 gives us built-in CUDA support

project(dirt LANGUAGES CXX CUDA)

find_package(OpenGL REQUIRED)

# Search for EGL; nvidia drivers ship the library but not headers, so we redistribute those
find_path(EGL_INCLUDE_DIR NAMES EGL/egl.h PATHS ${CMAKE_CURRENT_SOURCE_DIR}/../external REQUIRED)
find_library(EGL_LIBRARIES NAMES egl EGL REQUIRED)

# Search for cuda headers (using the form of path that tensorflow includes them with), based on cmake-inferred nvcc, or $CUDA_HOME
get_filename_component(NVCC_DIR ${CMAKE_CUDA_COMPILER} DIRECTORY)
find_path(CUDA_INCLUDE_DIR NAMES cuda/include/cuda.h HINTS ${NVCC_DIR}/../.. PATHS ENV CUDA_HOME REQUIRED)

# Ask tensorflow for its include path; one should therefore make sure cmake is run with the venv active that the op will be used in
execute_process(COMMAND python -c "import tensorflow; print(tensorflow.sysconfig.get_include())" OUTPUT_VARIABLE Tensorflow_default_INCLUDE_DIRS OUTPUT_STRIP_TRAILING_WHITESPACE)
set(Tensorflow_INCLUDE_DIRS "${Tensorflow_default_INCLUDE_DIRS}" CACHE PATH "Tensorflow include path")

# Ask tensorflow for its library path
# If using tensorflow earlier than v1.4, this will not work, but can be skipped entirely
execute_process(COMMAND python -c "import tensorflow; print(tensorflow.sysconfig.get_lib())" OUTPUT_VARIABLE Tensorflow_default_LIB_DIR OUTPUT_STRIP_TRAILING_WHITESPACE)
set(Tensorflow_LIB_DIR "${Tensorflow_default_LIB_DIR}" CACHE PATH "Tensorflow library path")
find_library(Tensorflow_LIBRARY tensorflow_framework HINTS ${Tensorflow_LIB_DIR} REQUIRED DOC "Tensorflow framework library; for tensorflow < 1.4, you can set this to blank")

# in the following, we need ../external/tensorflow for cuda_config.h in tf versions with #16959 unfixed
include_directories(SYSTEM ../external/tensorflow ${NSYNC_INCLUDE_DIR} ${CUDA_INCLUDE_DIR} ${EGL_INCLUDE_DIR} ${OPENGL_INCLUDE_DIR})
include_directories(${Tensorflow_INCLUDE_DIRS} ${Tensorflow_INCLUDE_DIRS}/external/nsync/public)

set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} -ffast-math")

set(CMAKE_CUDA_FLAGS "${CMAKE_CUDA_FLAGS} -arch=sm_30 --expt-relaxed-constexpr")

add_library(
    rasterise SHARED
    rasterise_egl.cpp rasterise_egl.cu
    rasterise_grad_egl.cpp rasterise_grad_egl.cu rasterise_grad_common.h
    shaders.cpp shaders.h
    gl_dispatcher.h concurrentqueue.h blockingconcurrentqueue.h gl_common.h hwc.h
)

target_compile_features(rasterise PUBLIC cxx_std_11)
target_link_libraries(rasterise ${EGL_LIBRARIES} ${OPENGL_LIBRARIES} ${Tensorflow_LIBRARY})

# Put the compiled library in the python package folder, rather than whatever build folder is being used
set_target_properties(
    rasterise PROPERTIES
    LIBRARY_OUTPUT_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/../dirt
)
