
List of Project Ideas for GSoC 2017

François Beaune edited this page Jun 12, 2017 · 9 revisions

Introduction

If you haven't read it already, please start with our How to Apply guide.

Below is a list of high impact projects that we think are of appropriate scope and complexity for the program.

There should be something for anyone fluent in C++ and interested in writing clean, performant code in a modern and well maintained codebase.

Projects are first sorted by application:

  • appleseed: the core rendering library
  • appleseed.studio: a graphical application to build, edit, render and debug scenes
  • appleseed-max: a native plugin for Autodesk 3ds Max
  • New standalone tools

Then, for each application, projects are sorted by ascending difficulty. Medium/hard difficulties are not necessarily harder in a scientific sense (though they may be), but may simply require more refactoring or efforts to integrate the feature into the software.

Similarly, easy difficulty doesn't necessarily mean the project will be a walk in the park. As in all software projects there may be unexpected difficulties that you will need to identify and overcome with your mentor.

Several projects are only starting points for bigger adventures. For those, we give an overview of possible avenues to expand on them after the summer.


(Renders and photographs below used with permission of the authors.)

appleseed Projects

Easy Difficulty

Project 1: Resumable renders

  • Required skills: C++
  • Challenges: None in particular
  • Primary mentor: Franz
  • Secondary mentor: Esteban

Rendering a single image can take a very long time, depending on image resolution, scene complexity and desired level of quality.

It would be convenient to allow stopping a render and restarting it later. This would, for instance, enable the following typical workflow:

  1. Render for a few seconds, or a few minutes, then interrupt the render.
  2. Work with the low quality render as if it was high quality: apply tone mapping or other forms of post-processing, preview the entire animation, etc.
  3. When time allows, resume rendering where it left off to get a smoother result.

Tasks:

  1. Determine what needs to be persisted to disk to allow resuming an interrupted render.
  2. Define a portable file format for interrupted renders. Can we reuse an existing format? For instance, could we store resume information as metadata in an OpenEXR (*.exr) file?
  3. When rendering is stopped, write an "interrupted render file".
  4. Allow appleseed.cli (the command line client) to start rendering from a given "interrupted render file".
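The serialization side of tasks 1–3 could start from something like the following sketch. All field names are hypothetical, and real resume data would at least also include the accumulated per-pixel sample sums, which an OpenEXR file could store alongside metadata like this:

```cpp
#include <cassert>
#include <cstdint>
#include <sstream>
#include <string>

// Hypothetical sketch of the state an "interrupted render file" might carry.
struct ResumeInfo
{
    std::uint32_t version = 1;       // file format version, for forward compatibility
    std::uint64_t samples_per_pixel; // how many samples were already accumulated
    std::uint64_t rng_seed;          // to keep the sample sequence deterministic
    std::string   scene_hash;        // detects scene changes that invalidate the resume

    // Serialize to a simple text form; a real implementation might instead
    // store these fields as metadata attributes in an OpenEXR file.
    std::string serialize() const
    {
        std::ostringstream out;
        out << version << '\n' << samples_per_pixel << '\n'
            << rng_seed << '\n' << scene_hash << '\n';
        return out.str();
    }

    static ResumeInfo deserialize(const std::string& data)
    {
        std::istringstream in(data);
        ResumeInfo info;
        in >> info.version >> info.samples_per_pixel >> info.rng_seed >> info.scene_hash;
        return info;
    }
};
```

The `scene_hash` field hints at follow-up 3: if the hash of the scene no longer matches, the render cannot be resumed safely.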

Possible follow-ups:

  1. Expose the feature in appleseed.studio.
  2. List and present all available interrupted renders when opening a project.
  3. Identify when a render cannot be resumed, e.g., because the scene has changed in the meantime.

Project 2: IES light profiles

  • Required skills: C++
  • Challenges: None in particular
  • Primary mentor: Esteban
  • Secondary mentor: Franz

IES light profiles describe light distribution in luminaires. This page has more information on the topic. IES light profile specifications can be found on this page. Many IES profiles can be downloaded from this page.

There are no particular difficulties with this project. The parsing code can be a bit hairy but there are many open source implementations that can be leveraged or peeked at in case of ambiguity. The sampling code is probably the most interesting part but it shouldn't be difficult.

Tasks:

  1. Learn about IES profiles and understand the concepts involved.
  2. Determine what subset of the IESNA specification needs to be supported.
  3. Investigate whether libraries exist to parse IES files.
  4. If no suitable library can be found, implement our own parsing code (IES profiles are text files with a mostly parsable structure).
  5. Add a new light type that implements sampling of IES profiles.
  6. Create a few test scenes demonstrating the results.
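To give a flavor of the task-4 fallback, here is a deliberately simplified parser for the photometric block of an IES file. This is only a sketch; the real IESNA LM-63 grammar has many more fields and rules (lamp counts, units, angle grids, continuation lines) that a production parser must honor:

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Very loose sketch of parsing the photometric data block of an IES file:
// skip the keyword header up to the TILT= line, then read the remaining
// whitespace-separated numbers. A real parser must follow the full
// IESNA LM-63 specification.
std::vector<double> parse_photometric_values(const std::string& text)
{
    std::istringstream in(text);
    std::string line;

    // Skip header lines until the TILT= line (inclusive).
    while (std::getline(in, line))
    {
        if (line.rfind("TILT=", 0) == 0)
            break;
    }

    // Everything after that is numeric in this simplified sketch.
    std::vector<double> values;
    double v;
    while (in >> v)
        values.push_back(v);
    return values;
}
```

The returned numbers would then be split into counts, angles and candela values according to the specification.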

Project 3: Single-file, zip-based project archives

  • Required skills: C++
  • Challenges: Some refactoring
  • Primary mentor: Franz
  • Secondary mentor: Esteban

Today, appleseed projects are made up of many individual files, some of them very small. This is convenient when editing the project, but not when sharing it with other users or transporting it across a network.

The idea of this project is to introduce a new format that packs an entire project into a single archive file, preferably a genuine ZIP file (in which case we suggest the .appleseedz file extension).

appleseed should be able to natively read this file format, without unpacking it first. Non-packed project files must continue to be supported.

If we decide to adopt a format other than ZIP we should also investigate whether it makes sense to provide standalone tools to pack and unpack projects.

Should we decide to go with the ZIP format, packed project files will initially be manually created using any standard ZIP archiver.

Tasks:

  1. Determine if we can efficiently read individual files from a ZIP archive without unpacking it.
  2. Determine if we can instruct OpenImageIO to read textures from an archive file.
  3. If the ZIP format is not suitable, design a simple (preferably LZ4-compressed) file format for packed projects.
  4. Refactor .obj and .binarymesh file readers to allow reading from a packed project.
  5. Refactor .appleseed file reader to allow reading from a packed project.
  6. Refactor the image pipeline to allow reading textures from a packed project (with OpenImageIO).
  7. Implement reading packed project files (will automatically work in all appleseed tools, including appleseed.cli and appleseed.studio).
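One way to approach the refactoring in tasks 4–6 is to hide storage behind a small interface, so the various readers no longer care whether bytes come from loose files or from an archive. A sketch, with all names hypothetical and an in-memory implementation standing in for the ZIP-backed one (which would locate entries in the central directory and inflate them):

```cpp
#include <cassert>
#include <map>
#include <stdexcept>
#include <string>

// Hypothetical abstraction behind which both loose files and .appleseedz
// archives could sit; readers would take an IProjectSource instead of paths.
class IProjectSource
{
  public:
    virtual ~IProjectSource() = default;
    virtual std::string read_file(const std::string& path) const = 0;
};

// An in-memory implementation, standing in here for a ZIP-backed one.
class MemoryProjectSource : public IProjectSource
{
  public:
    void add_file(const std::string& path, const std::string& contents)
    {
        m_files[path] = contents;
    }

    std::string read_file(const std::string& path) const override
    {
        const auto it = m_files.find(path);
        if (it == m_files.end())
            throw std::runtime_error("file not found in project: " + path);
        return it->second;
    }

  private:
    std::map<std::string, std::string> m_files;
};
```

The same interface would also be the natural place to plug OpenImageIO's texture reads (task 6), if it can be taught to pull data from a custom source.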

Possible follow-ups:

  1. Implement a standalone command line utility to pack/unpack projects.
  2. Expose packing/unpacking in appleseed.studio.
  3. Adapt rendernode.py and rendermanager.py scripts to work natively with packed project files.

Medium Difficulty

Project 4: Volume rendering

  • Required skills: C++
  • Challenges: Understanding and updating the existing path tracing code
  • Primary mentor: Esteban
  • Secondary mentor: Franz

(Photograph by Lucas Zimmermann, from the "Traffic Light 2.0" series — Source)

Volume rendering (or volumetric rendering) is one of the most requested features in appleseed. Currently, appleseed only renders the surface of objects, that is, how light bounces off objects, and treats the space between objects as a void. Volume rendering involves computing how light is absorbed and scattered by air, smoke or fog molecules, or by denser media such as milk or marble.

This is a vast topic. This project will only be scratching the surface of what volume rendering implies. Depending on the student, we may limit the project to simple homogeneous volumes and single scattering.

Tasks:

  1. Read about volume rendering and learn what homogeneous volumes and single scattering are.
  2. Implement basic absorption and scattering formulas as a set of simple unit tests.
  3. Modify the path tracing code to implement basic ray marching.
  4. Compute attenuation during ray marching.
  5. Add single scattering.
  6. Add a new Volume entity type to appleseed and expose it in appleseed.studio.

Possible follow-ups:

  1. Implement multiple scattering.
  2. Implement (multiple?) importance sampling.
  3. Add support for OpenVDB, a volume description file format.
  4. Compare diffuse profile-based and volume-based subsurface scattering to check that the former methods are working correctly.
  5. Recreate the scene from the photograph above and render it with appleseed.

Project 5: Adaptive image plane sampling

  • Required skills: C++, reading and understanding scientific papers
  • Challenges: None in particular
  • Primary mentor: Franz
  • Secondary mentor: Esteban

(Illustration from Adaptive Sampling by Yining Karl Li.)

Like most renderers, appleseed has two types of "image plane sampler", i.e., it has two ways of allocating samples (which is the basic unit of work) to pixels:

  • The uniform sampler allocates a fixed, equal number of samples to every pixel. This is simple and works well, but a lot of resources are wasted in "easy" areas that don't require many samples to get a smooth result, while "difficult" areas remain noisy.
  • The adaptive sampler tries to allocate more samples to difficult pixels and fewer samples to easy pixels. While it may work acceptably with a bit of patience, it's hard to adjust, and it can lead to flickering in animations.

The idea of this project is to replace the existing adaptive sampler with a new one based on more rigorous theory. We have settled for now on the technique described in Adaptive Sampling by Yining Karl Li; however, the project should start with a quick survey of available techniques.

Tasks:

  1. Survey available modern techniques to adaptively sample the image plane.
  2. Implement the chosen adaptive sampling algorithm.
  3. Make sure the new adaptive sampler works well in animation (absence of objectionable flickering).
  4. Determine which settings should be used as default.
  5. Expose adaptive sampler settings in appleseed.studio.
  6. Compare quality and render times with the uniform image sampler.
  7. Remove the old sampler, and add an automatic project migration step to maintain backward compatibility.
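To make "allocating samples adaptively" concrete, here is a generic variance-based sketch (NOT the algorithm from Li's article): a pixel keeps receiving samples until the confidence interval of its estimated mean falls below a threshold.

```cpp
#include <cassert>
#include <cmath>

// Toy per-pixel statistics for adaptive sampling: keep sampling a pixel
// until the 95% confidence interval of its mean falls below a threshold.
// This is only a sketch of the general idea, not a specific published method.
struct PixelStats
{
    double sum = 0.0, sum_sq = 0.0;
    int    count = 0;

    void add_sample(const double value)
    {
        sum += value;
        sum_sq += value * value;
        ++count;
    }

    double mean() const { return sum / count; }

    double variance() const
    {
        const double m = mean();
        return sum_sq / count - m * m;
    }

    bool converged(const double threshold) const
    {
        if (count < 2)
            return false;  // always take at least two samples
        const double error = 1.96 * std::sqrt(variance() / count);
        return error < threshold;
    }
};
```

Such per-pixel convergence tests are exactly where animation flickering (task 3) comes from: noisy variance estimates make neighboring frames stop at different sample counts.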

Project 6: Procedural assemblies

  • Required skills: C++
  • Challenges: Light refactoring
  • Primary mentor: Esteban
  • Secondary mentor: Franz

Procedural assemblies are plugins that can generate parts of scenes procedurally, as opposed to loading geometry from mesh and curve files. They are very powerful as they can easily generate repetitive or mathematical structures with code. They are commonly used in hair, fur, and procedural instancing tools to describe the scene geometry to renderers.

The idea of this project is to add support for procedural assembly plugins to appleseed. This would allow appleseed to support geometry generation tools commonly used by artists such as Yeti by peregrine*labs or Autodesk XGen.

Tasks:

  1. Design and implement an API for procedural geometry generation.
  2. Implement plugin loading and execution at scene setup time.
  3. Write some simple procedural geometry generators.
  4. Document the procedural geometry generation API in the wiki.
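The API from task 1 might, in spirit, look like the following sketch. All names are hypothetical: the real API would hand the plugin an appleseed assembly to populate, whereas here a plain vector of object names stands in for generated scene content.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical shape of a procedural assembly plugin API.
class IProceduralAssembly
{
  public:
    virtual ~IProceduralAssembly() = default;

    // Called at scene setup time; the plugin emits geometry procedurally.
    virtual void expand(std::vector<std::string>& objects) const = 0;
};

// A trivial generator: an N x N grid of instanced objects.
class GridGenerator : public IProceduralAssembly
{
  public:
    explicit GridGenerator(const int size) : m_size(size) {}

    void expand(std::vector<std::string>& objects) const override
    {
        for (int y = 0; y < m_size; ++y)
            for (int x = 0; x < m_size; ++x)
                objects.push_back("sphere_" + std::to_string(x) + "_" + std::to_string(y));
    }

  private:
    const int m_size;
};
```

Plugin loading (task 2) would then amount to resolving a factory function for `IProceduralAssembly` implementations from shared libraries at scene setup time.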

Possible follow-ups:

  1. Investigate ways to assign materials and modify other attributes of procedurally generated geometry.

Hard Difficulty

Project 7: Improved many-light sampling

  • Required skills: C++, reading and understanding scientific papers
  • Challenges: Refactoring
  • Primary mentor: Franz
  • Secondary mentor: Esteban

(Illustration by Nathan Vegdahl — Source)

To maximize rendering efficiency, it is important for the renderer to choose the right light sources to sample, based on the point currently being shaded. Typically, the renderer selects a few lights at random and samples those. By repeating the process many times (over many pixel samples), an accurate estimate of the total incident light is computed.

Today, appleseed has a pretty basic light sampling strategy:

  • It always samples all "non-physical lights" (point lights, purely directional lights, spot lights)
  • For area lights, it computes N "light samples". Computing a light sample involves:
    1. Choosing a light-emitting triangle at random, based on its surface area (the larger the surface area the more likely it is to be chosen)
    2. Then choosing a point on the selected triangle, at random (uniformly)

While this simple algorithm works well for scenes with few lights, it leads to considerable noise when the number of lights is large.
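Step 1 of the current strategy described above is a classic CDF walk over triangle areas. An illustrative sketch (not appleseed's actual implementation):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Choose an emitting triangle with probability proportional to its surface
// area: build a CDF over the areas and invert it with a uniform random
// number u in [0, 1).
std::size_t pick_triangle(const std::vector<double>& areas, const double u)
{
    double total = 0.0;
    for (const double a : areas)
        total += a;

    double cumulated = 0.0;
    for (std::size_t i = 0; i < areas.size(); ++i)
    {
        cumulated += areas[i] / total;
        if (u < cumulated)
            return i;
    }
    return areas.size() - 1;  // guard against floating-point drift
}
```

With many lights, the weakness is clear: the CDF knows nothing about where the shading point is, so distant or occluded lights are picked as eagerly as nearby ones, which is what a light tree is meant to fix.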

Nathan Vegdahl, author of the Psychopath experimental renderer, has come up with an interesting technique that he first described on appleseed-dev. He later posted additional details on ompf2.

The paper Stochastic Light Culling, recently posted in the Journal of Computer Graphics Techniques, describes an algorithm similar to Nathan's. The two techniques are discussed and compared on appleseed-dev.

Here is some sample code from Psychopath implementing Nathan's technique: light_tree.hpp.

Tasks:

  1. Establish clearly the differences between Nathan's algorithm and Stochastic Light Culling. Possibly get in touch with Nathan.
  2. Determine the limitations of the technique.
  3. Determine if, and how, we need to augment/modify the Light and EDF interfaces.
  4. Build a simple light tree that spans all assembly instances, possibly limited to point lights and/or area lights, depending on what's easiest.
  5. Modify the LightSampler class to sample the light tree.

Possible follow-ups:

  1. Figure out if having one light tree per assembly makes sense, and if it does, how to implement it considering that we don't want to put light trees in assemblies as this would couple a rendering technique (light tree sampling) with a modeling technique (assemblies).
  2. Improve heuristics used when computing light tree node probabilities.
  3. Investigate what else can be improved.

Project 8: Switch to Embree

  • Required skills: C++
  • Challenges: Heavy refactoring
  • Primary mentor: Franz
  • Secondary mentor: Esteban

(Imperial Crown of Austria, model by Martin Lubich, rendered with Intel Embree — Source)

The act of tracing rays through the scene is by far the most expensive activity performed by appleseed during rendering. Our ray tracing kernel was state-of-the-art around 2006; today there are faster algorithms (QBVH, for instance) and faster implementations based on vectorized instructions.

Intel has been developing a pure ray tracing library called Embree which offers state-of-the-art performance on Intel (and supposedly AMD) CPUs. For many years Embree lacked essential features that prevented its adoption in appleseed, such as multi-step deformation motion blur. Today it seems that Embree offers everything we need and that a switch is finally possible.

Tasks:

  1. Make sure that Embree fully supports our needs (double precision ray tracing, motion blur). If it does not, determine if we can still use it for some assemblies while using the traditional intersector for others.
  2. Determine which parts of the ray tracing kernel (trace context, intersector, assembly trees, region trees, triangle trees) we can keep and which parts to discard.
  3. Add Embree to the build.
  4. Write a minimal integration of Embree into appleseed (no motion blur, no instancing).
  5. Compare performance and CPU profiles with the existing ray tracing kernel.
  6. Determine how to support motion blur or instancing with Embree.
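Task 1's fallback plan (using Embree for some assemblies while keeping the traditional intersector for others) suggests hiding the backend behind a common interface. A toy sketch, with all names hypothetical: a unit-sphere hit test stands in for the existing kernel, and the Embree-backed implementation (which would wrap the Embree intersection calls and handle float/double conversion) is left out.

```cpp
#include <cassert>
#include <cmath>

struct Ray { double org[3]; double dir[3]; };

// Hypothetical backend interface; an EmbreeIntersector and the traditional
// intersector would both implement it, selectable per assembly.
class IIntersector
{
  public:
    virtual ~IIntersector() = default;
    virtual bool intersect(const Ray& ray) const = 0;
};

// Stand-in for the existing appleseed intersector: a unit sphere at the origin.
class BuiltinIntersector : public IIntersector
{
  public:
    bool intersect(const Ray& ray) const override
    {
        // Assumes a normalized direction; solve |org + t * dir|^2 = 1.
        double b = 0.0, c = -1.0;
        for (int i = 0; i < 3; ++i)
        {
            b += ray.org[i] * ray.dir[i];
            c += ray.org[i] * ray.org[i];
        }
        const double discriminant = b * b - c;
        return discriminant >= 0.0 && -b + std::sqrt(discriminant) >= 0.0;
    }
};
```

Dispatching through such an interface also gives a clean place to measure the two backends against each other (task 5).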

Possible follow-ups:

  1. Add support for remaining features (motion blur or instancing, depending on what was implemented during the project).
  2. Run the whole test suite and investigate regressions, if any.
  3. Clean up integration.

Project 9: Implicit shapes

  • Required skills: C++
  • Challenges: Heavy refactoring
  • Primary mentor: Esteban
  • Secondary mentor: Franz

Currently appleseed only supports geometry defined by triangle meshes and curves. While this is very flexible, it's rather inefficient for simple shapes like spheres and cylinders.

The objective of this project would be to add support for ray tracing simple shapes directly without converting them into triangle meshes.

In addition, it would allow, in the future, using these shapes as light sources with specialized light sampling algorithms whenever possible.

Tasks:

  1. Determine the best way to generalize scene intersection code to handle implicit shapes.
  2. Refactor existing code.
  3. Write intersection routines for simple shapes like spheres, disks and rectangles.
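As a flavor of task 3, here is a self-contained ray/disk intersection routine (illustrative only; it does not use appleseed's math types): intersect the ray with the disk's plane, then check that the hit point lies within the radius.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 add_scaled(const Vec3& a, const Vec3& d, const double t)
{ return {a.x + t * d.x, a.y + t * d.y, a.z + t * d.z}; }

// Ray vs. disk: the disk sits at 'center' with unit 'normal' and 'radius'.
// Returns true and writes the hit distance to t_out on intersection.
bool intersect_disk(
    const Vec3& org, const Vec3& dir,
    const Vec3& center, const Vec3& normal, const double radius,
    double& t_out)
{
    const double denom = dot(dir, normal);
    if (std::abs(denom) < 1.0e-12)
        return false;  // ray parallel to the disk's plane

    const double t = dot(sub(center, org), normal) / denom;
    if (t < 0.0)
        return false;  // plane is behind the ray origin

    const Vec3 p = sub(add_scaled(org, dir, t), center);
    if (dot(p, p) > radius * radius)
        return false;  // hit the plane outside the disk

    t_out = t;
    return true;
}
```

Sphere and rectangle routines follow the same pattern: a cheap analytic test, no tessellation, and an exact normal at the hit point.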

Possible follow-ups:

  1. Better sampling of lights defined by implicit shapes (rectangles, disks, spheres...)
  2. Rendering particle systems as a collection of implicit shapes.

Project 10: Unbiased Photon Gathering

  • Required skills: C++, reading and understanding scientific papers
  • Challenges: Getting it right
  • Primary mentor: Franz
  • Secondary mentor: Esteban

Unbiased Photon Gathering for Light Transport Simulation
(Illustration from Unbiased Photon Gathering for Light Transport Simulation.)

Light transport is the central problem solved by a physically-based global illumination renderer. It involves finding ways to connect light sources to the camera with straight lines (in the absence of scattering media such as smoke or fog), taking into account how light is reflected by objects. Efficient light transport is one of the most difficult problems in physically-based rendering.

The paper Unbiased Photon Gathering for Light Transport Simulation proposes to trace photons from light sources (like it is done in the photon mapping algorithm) and use these photons to establish paths from lights to the camera in an unbiased manner. This is unlike traditional photon mapping which performs photon density estimation.

Since appleseed already features an advanced photon tracer it should be possible to implement this algorithm, and play with it, with reasonable effort.

Tasks:

  1. Figure out what we need to store in photons in order to allow reconstruction of a light path.
  2. Add a new lighting engine, drawing inspiration from the SPPM lighting engine which also needs to trace both photons from lights, and paths from the camera.
  3. Trace camera paths. At the far end of each path, look up the nearest photon and establish a connection.
  4. Render simple scenes, such as the built-in Cornell Box, for which we know exactly what to expect, and carefully compare results of the new algorithm with ground truth images.
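For task 1, one option is to have each photon remember its predecessor on the light path, so the full path back to the emitter can be reconstructed once a camera path finds its nearest photon. A toy sketch (all names hypothetical; a real implementation would also store flux and direction, and would reuse the photon map's spatial acceleration structure instead of the naive scan below):

```cpp
#include <cassert>
#include <cstddef>
#include <limits>
#include <vector>

// Each photon keeps the index of the previous photon on its path, allowing
// the light path to be walked back to the emitter.
struct Photon
{
    double      position[3];
    std::size_t prev;  // index of the previous photon on the path, or npos
    static constexpr std::size_t npos = static_cast<std::size_t>(-1);
};

// Naive nearest-photon lookup (linear scan), for illustration only.
std::size_t nearest_photon(const std::vector<Photon>& photons, const double p[3])
{
    std::size_t best = Photon::npos;
    double best_dist = std::numeric_limits<double>::max();
    for (std::size_t i = 0; i < photons.size(); ++i)
    {
        double d = 0.0;
        for (int k = 0; k < 3; ++k)
        {
            const double delta = photons[i].position[k] - p[k];
            d += delta * delta;
        }
        if (d < best_dist)
        {
            best_dist = d;
            best = i;
        }
    }
    return best;
}
```

Following the `prev` chain from the looked-up photon yields the light subpath that, joined to the camera subpath, forms a complete unbiased path.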

Project 11: Spectral rendering using tristimulus colours

  • Required skills: C++, reading and understanding scientific papers
  • Challenges: Refactoring
  • Primary mentor: Esteban
  • Secondary mentor: Franz

Physically Meaningful Rendering using Tristimulus Colours
(Illustration from Physically Meaningful Rendering using Tristimulus Colours.)

One of the most distinctive features of appleseed is the ability to render in both RGB (3 bands) and spectral (31 bands) color spaces, and to mix both color representations in the same scene.

This feature requires the ability to convert colors from RGB to a spectral representation. The conversion is not well defined in a mathematical sense (many different spectra map to the same RGB color), but there are algorithms that approximate it.

The goal of this project would be to implement an improved RGB to spectral color conversion method based on this paper: Physically Meaningful Rendering using Tristimulus Colours.

This would improve the correctness of our renders for scenes containing both RGB and spectral colors.

Tasks:

  1. Implement the improved RGB to spectral color conversion method.
  2. Add unit tests and test scenes.
  3. Render the whole test suite, identify differences due to the new color conversion code and update reference images as appropriate.
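To make the conversion problem concrete, here is a toy round trip using box-shaped basis spectra. This is NOT the method of the paper (which produces smooth, physically meaningful spectra); it only illustrates what "uplifting RGB to 31 bands and integrating back down" means:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Toy RGB <-> spectral round trip over 31 bands: each channel is
// represented by a box basis over roughly a third of the bands.
const int Bands = 31;
typedef std::array<double, Bands> Spectrum;

Spectrum rgb_to_spectrum(const double r, const double g, const double b)
{
    Spectrum s;
    for (int i = 0; i < Bands; ++i)
        s[i] = i < 10 ? b : i < 21 ? g : r;  // blue, green, red thirds
    return s;
}

void spectrum_to_rgb(const Spectrum& s, double& r, double& g, double& b)
{
    r = g = b = 0.0;
    for (int i = 0; i < 10; ++i)  b += s[i] / 10.0;
    for (int i = 10; i < 21; ++i) g += s[i] / 11.0;
    for (int i = 21; i < 31; ++i) r += s[i] / 10.0;
}
```

The box bases round-trip exactly but produce blocky, physically implausible spectra; the whole point of the paper's method is to generate smooth spectra that still integrate back to the original tristimulus values.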

appleseed.studio Projects

Easy Difficulty

Project 12: Material library and browser

  • Required skills: C++, Qt
  • Challenges: None in particular
  • Primary mentor: Franz
  • Secondary mentor: Esteban

We need to allow appleseed.studio users to choose pre-made, high quality materials from a material library, as well as to save their own materials into a material library, instead of forcing them to recreate all materials in every scene. Moreover, even tiny scenes can have many materials. Artists can't rely on names to know which material is applied to an object. A material library with material previews would be a tremendous help to appleseed.studio users.

A prerequisite to this project (which may be taken care of by the mentor) is to enable import/export of all material types from/to files. This is already implemented for Disney materials (one particular type of material in appleseed); this support should be extended to all material types.

Ideally, the material library/browser would have at least the following features:

  • Show a visual collection of materials, with a preview of each material and some metadata (name, author, date and time of creation)
  • Allow filtering materials based on metadata
  • Allow dragging and dropping a material onto an object instance
  • Highlight the material under the mouse cursor when clicking in the scene (material picking)
  • Allow replacing an existing material from the project with a material from the library

Possible follow-ups:

  • Allow for adding "material library sources" (e.g., GitHub repositories) to a material library. This would allow people to publish and share their own collections of materials, in a decentralized manner.
  • Integrate the material library from appleseed.studio into appleseed plugins for 3ds Max and Maya.

Project 13: Render history and render comparisons

  • Required skills: C++, Qt
  • Challenges: None in particular
  • Primary mentor: Franz
  • Secondary mentor: Esteban

Finding the right trade-off between render time and quality, lighting setup, or material parameters can involve a large number of "proof" renders. A tool to save renders to some kind of render history and to compare renders from that history would be very helpful to both users and developers.

When saved to the history, renders should be tagged with the date and time of the render, total render time, render settings, and possibly even the rendering log. In addition, the user should be able to attach comments to renders.

Tasks:

  1. Build the user interface for the render history.
  2. Allow for saving a render to the history.
  3. Allow simple comparisons between two renders from the history.
  4. Allow comparing a render from the history with the current render (even during rendering).
  5. Allow for saving the render history along with the project.

Possible follow-ups:

  1. Add more comparison modes (side-by-side, toggle, overlay, slider).
  2. Add a way to attach arbitrary comments to renders.

Medium Difficulty

Project 14: Python scripting

  • Required skills: C++, Python, Qt
  • Challenges: Impacts the build system and the deployment story
  • Primary mentor: Esteban
  • Secondary mentor: Franz

Integrating a Python interpreter will allow users of appleseed.studio to use scripting to generate, inspect, and edit scenes, and to customize the application for their specific needs.

In addition it will allow future parts of appleseed.studio to be written in Python, speeding up the development process and making it easier to contribute.

Tasks:

  1. Integrate a Python interpreter in appleseed.studio.
  2. Import appleseed.python at application startup and write code to make the currently open scene in appleseed.studio accessible from Python.
  3. Add a basic script editor and console widget to appleseed.studio using Qt.
  4. Allow limited customization of appleseed.studio (such as custom menu items) in Python.

Hard Difficulty

Project 15: OpenColorIO support

  • Required skills: C++, Qt, OpenGL
  • Challenges: Impacts the build system
  • Primary mentor: Esteban
  • Secondary mentor: Franz

OpenColorIO (OCIO) is an open source color management project from Sony Pictures Imageworks. Support for OCIO in appleseed.studio would allow the user to adjust gamma and exposure, and to transform colors of the rendered images using 3D LUTs, in real time.

More details can be found in this issue.

Tasks:

  1. Add OpenGL to the build.
  2. Write an OpenGL-based render widget.
  3. Introduce a second render buffer:
     • The first buffer holds the original, non-corrected render;
     • The second buffer holds the color-corrected render and is displayed on screen.
  4. Add OpenColorIO to the build.
  5. Apply color correction to the second buffer.
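The color-correction step applied to the second buffer can be illustrated with plain exposure and gamma standing in for the full OCIO transform (sketch only; the real implementation would run the buffer through an OCIO processor):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Two-buffer scheme: 'raw' keeps the original render untouched, and
// 'corrected' receives the color-corrected copy that is displayed.
// Exposure (in stops) and gamma stand in here for an OCIO transform.
void color_correct(
    const std::vector<float>& raw,
    std::vector<float>&       corrected,
    const float               exposure_stops,
    const float               gamma)
{
    const float scale = std::pow(2.0f, exposure_stops);
    corrected.resize(raw.size());
    for (std::size_t i = 0; i < raw.size(); ++i)
        corrected[i] = std::pow(raw[i] * scale, 1.0f / gamma);
}
```

Keeping the raw buffer untouched is what makes real-time adjustment cheap: changing a slider only re-runs the correction, never the render.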

Possible follow-ups:

  1. Add UI widgets to appleseed.studio to allow for adjusting gamma and exposure.
  2. Add UI widgets to appleseed.studio to allow for choosing the color profile to apply.
  3. Add UI widgets to appleseed.studio to allow for loading custom 3D LUTs.

appleseed-max Projects

Medium Difficulty

Project 16: appleseed camera

  • Required skills: C++, Win32
  • Challenges: Using the 3ds Max API
  • Primary mentor: Franz
  • Secondary mentor: Esteban

Currently, the appleseed plugin for Autodesk 3ds Max translates native 3ds Max cameras into appleseed cameras when rendering begins or when the scene is exported. While this works and is enough for simple scenes, the native 3ds Max camera lacks many features required for realistic renders, such as depth of field and custom bokeh shapes.

The idea of this project is to allow 3ds Max users to instantiate, and manipulate, an appleseed camera entity that exposes all functionalities of appleseed's native cameras.

The principal difficulty of this project will be to determine how to write a camera plugin for 3ds Max as there appears to be little documentation on the topic. One alternative to creating a new camera plugin would be to simply inject appleseed settings into the 3ds Max camera's user interface, if that's possible. Another alternative solution would be to place appleseed camera settings in appleseed's render settings panel.

Tasks:

  1. Determine how to create a camera plugin in 3ds Max. If necessary, ask on CGTalk's 3dsMax SDK and MaxScript forum.
  2. Expose settings from native appleseed cameras.

Possible follow-ups:

  1. Add user interface widgets to adjust depth of field settings.

Hard Difficulty

Project 17: Interactive rendering

  • Required skills: C++, Win32
  • Challenges: Using the 3ds Max API
  • Primary mentor: Franz
  • Secondary mentor: Esteban

At the moment, the appleseed plugin for Autodesk 3ds Max only supports the "tiled" or "final" rendering mode of 3ds Max: you hit render, rendering starts, and the image appears tile after tile. Meanwhile, 3ds Max does not allow any user input apart from a way to stop the render.

With the vast amount of computational power in modern workstations, it is now possible to render a scene interactively, while moving the camera, manipulating objects and light sources or modifying material parameters.

In fact, appleseed and appleseed.studio already both have native support for interactive rendering.

The goal of this project is to enable interactive appleseed rendering inside 3ds Max via 3ds Max's ActiveShade functionality. Implementing interactive rendering in appleseed should be even easier with 3ds Max 2017 due to new APIs.

Tasks:

  1. Investigate the ActiveShade API.
  2. Investigate the new API related to interactive rendering in 3ds Max 2017.
  3. Decide whether 3ds Max 2015 and 3ds Max 2016 can, or should, be supported.
  4. Implement a first version of interactive rendering limited to camera movements.
  5. Add support for live material and light adjustments.
  6. Add support for object movements and deformations.

New Standalone Tools

Medium Difficulty

Project 18: Small standalone render viewer

  • Required skills: C++, Qt
  • Challenges: Building a new tool from scratch
  • Primary mentor: Esteban
  • Secondary mentor: Franz

The goal of this project is to implement a small standalone render viewer tool. The tool would communicate with the renderer using sockets and would display the image currently being rendered.

The viewer would be handy for appleseed users and could also be integrated into future versions of our Maya and 3ds Max plugins.

Tasks:

  1. Design a protocol that the renderer and the viewer would use to communicate.
  2. Write a tile callback class that sends appleseed image data to the viewer.
  3. Display the image as it is being rendered in the viewer.
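The protocol from task 1 could start from something as simple as a fixed-size tile header sent in front of the raw pixel data. A sketch, with all field names hypothetical (a real protocol would serialize fields individually to pin down endianness rather than memcpy the struct):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical wire format: a fixed-size header followed by pixel data.
struct TileHeader
{
    std::uint32_t x, y;           // tile origin in the image
    std::uint32_t width, height;  // tile dimensions
    std::uint32_t channels;       // e.g. 4 for RGBA
};

std::vector<std::uint8_t> pack_header(const TileHeader& h)
{
    std::vector<std::uint8_t> buf(sizeof(TileHeader));
    std::memcpy(buf.data(), &h, sizeof(TileHeader));
    return buf;
}

TileHeader unpack_header(const std::vector<std::uint8_t>& buf)
{
    TileHeader h;
    std::memcpy(&h, buf.data(), sizeof(TileHeader));
    return h;
}
```

The tile callback on the renderer side (task 2) would pack one such message per finished tile and write it to the socket; the viewer decodes it and blits the tile into its framebuffer.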

Possible follow-ups:

  1. Add basic image pan and zoom controls.
  2. Implement simple tone mapping operators.
  3. Implement basic support for LUTs or OpenColorIO color management.

Hard Difficulty

Project 19: Denoiser

  • Required skills: C++, reading and understanding complicated scientific papers
  • Challenges: Building a new tool from scratch
  • Primary mentor: Esteban
  • Secondary mentor: Franz

Nonlinearly Weighted First-order Regression for Denoising Monte Carlo Renderings
(Illustration from Nonlinearly Weighted First-order Regression for Denoising Monte Carlo Renderings.)

appleseed, like other modern renderers, uses Monte Carlo methods to render images. Monte Carlo methods are based on repeated random sampling, which leads to visual noise, or grain. Getting rid of the noise requires a large number of samples and, consequently, can lead to extremely long render times for noise-free images (up to days for a single image).

The objective of this project is to write a denoiser tool based on image processing techniques (NL-Means or similar algorithms). This tool would allow for rendering images with a manageable number of samples in a reasonable amount of time. These images would then be "denoised" in order to obtain smooth images.

Possible references:

  • A good video introduction to denoising by Vladimir Koylazov from Chaos Group (makers of the V-Ray renderer): https://www.youtube.com/watch?v=UrOtyaf4Zx8 (lots of interesting paper references near the end of the talk!)

Tasks:

  1. Modify appleseed to output auxiliary images needed by the denoising algorithm.
  2. Implement the denoiser as a shared library that can be reused.
  3. Implement a command line driver program that uses the shared library.
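To give a flavor of the filtering involved, here is a minimal 1D NL-means sketch: each sample is replaced by a weighted average of all samples, with weights that decay with the squared distance between the small patches surrounding them. Real denoisers filter 2D patches and also weight them using the auxiliary images (albedo, normals, depth) produced in task 1.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Minimal 1D NL-means: weights decay with squared patch distance.
std::vector<double> nl_means_1d(
    const std::vector<double>& input,
    const int                  patch_radius,
    const double               h)  // filter strength
{
    const int n = static_cast<int>(input.size());
    std::vector<double> output(n);

    for (int i = 0; i < n; ++i)
    {
        double weight_sum = 0.0, value = 0.0;
        for (int j = 0; j < n; ++j)
        {
            // Squared distance between the patches around i and j,
            // clamping indices at the signal boundaries.
            double dist = 0.0;
            for (int k = -patch_radius; k <= patch_radius; ++k)
            {
                const int a = std::min(std::max(i + k, 0), n - 1);
                const int b = std::min(std::max(j + k, 0), n - 1);
                const double d = input[a] - input[b];
                dist += d * d;
            }
            const double w = std::exp(-dist / (h * h));
            weight_sum += w;
            value += w * input[j];
        }
        output[i] = value / weight_sum;
    }
    return output;
}
```

Note the O(n^2) cost of the naive formulation; production denoisers restrict the search to a window around each pixel and vectorize the patch-distance computation.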

Possible follow-ups:

  1. Use motion vectors to extend denoising to the temporal domain.