New post: "PIC/FLIP Simulator Meshing Pipeline"

1 parent 5d0d00f commit f8b90e83cec5d0689d55b6eef29b56fc56e1b110 @betajippity committed Mar 3, 2014
@@ -19,37 +19,37 @@ Since the holiday card had to be just a single still frame and had to be done in
I started by creating a handful of different base conifer tree models in OnyxTree and throwing them directly into Maya/Vray (this was before I had even started working on Takua Render) just to see how they would look. Normally models directly out of OnyxTree need some hand-sculpting and tweaking to add detail for up-close shots, but here I figured if they looked good enough, I could skip those steps. The result looked okay enough to move on:
-[![](/content/images/2013/Nov/basic_trees.jpg)](/content/images/2013/Nov/basic_trees.jpg)
+[![]({{site.url}}/content/images/2013/Nov/basic_trees.jpg)]({{site.url}}/content/images/2013/Nov/basic_trees.jpg)
The textures for the bark and leaves were super simple. To make the bark texture's diffuse layer, I pulled a photograph of bark off of Google, modified it to tile in Photoshop, and adjusted the contrast and levels until it was the color I wanted. The displacement layer was simply the diffuse layer converted to black and white, with contrast and brightness adjusted. Normally this method won't work well for up-close shots, but again, since I knew the shot would be far away, I could get away with some cheating. Here's a crop from the bark textures:
-[![](/content/images/2013/Nov/bark.png)](/content/images/2013/Nov/bark.png)
+[![]({{site.url}}/content/images/2013/Nov/bark.png)]({{site.url}}/content/images/2013/Nov/bark.png)
The pine needles were also super cheatey. I pulled a photo out of one of my reference libraries, dropped an opacity mask on top, and that was all for the diffuse color. Everything else was hacked in the leaf material's shader; since the tree would be far away, I could get away with basic transparency instead of true subsurface scattering. The diffuse map with opacity flattened to black looks like this:
-[![](/content/images/2013/Nov/pineleaves.png)](/content/images/2013/Nov/pineleaves.png)
+[![]({{site.url}}/content/images/2013/Nov/pineleaves.png)]({{site.url}}/content/images/2013/Nov/pineleaves.png)
With the trees roughed in, the next problem to tackle was getting snow onto the trees. Today, I would immediately spin up Houdini to create this effect, but back then, I didn't have a Houdini license and hadn't played with Houdini enough to realize how quickly it could be done. Not knowing better back then, I used 3dsmax and a plugin called [Snowflow](http://www.zwischendrin.com/en/detail/261) (I used the demo version since this project was a one-off). To speed up the process, I used a simplified, decimated version of the tree mesh for Snowflow. Any inaccuracies between the resultant snow layer and the full tree mesh were acceptable, since they would look just like branches and leaves poking through the snow:
-[![](/content/images/2013/Nov/snowflow.jpg)](/content/images/2013/Nov/snowflow.jpg)
+[![]({{site.url}}/content/images/2013/Nov/snowflow.jpg)]({{site.url}}/content/images/2013/Nov/snowflow.jpg)
I tried a couple of different variations on snow thickness, which looked decent enough to move on with:
-[![](/content/images/2013/Nov/snowtest.jpg)](/content/images/2013/Nov/snowtest.jpg)
+[![]({{site.url}}/content/images/2013/Nov/snowtest.jpg)]({{site.url}}/content/images/2013/Nov/snowtest.jpg)
The next step was a snow material that would look reasonably good from a distance and render quickly. I wasn't sure if the snow should have a more powdery, almost diffuse look, or if it should have a more refractive, frozen, icy look. I wound up trying both and going with a 50-50 blend of the two:
-[![From left to right: refractive frozen ice, powdery diffuse, 50-50 blend](/content/images/2013/Nov/snowmaterialtest.png)](/content/images/2013/Nov/snowmaterialtest.png)
+[![From left to right: refractive frozen ice, powdery diffuse, 50-50 blend]({{site.url}}/content/images/2013/Nov/snowmaterialtest.png)]({{site.url}}/content/images/2013/Nov/snowmaterialtest.png)
The next step was to compose a shot, make a very quick, simple lighting setup, and do some test renders. After some iterating, I settled on this render as a base for comp work:
-[![](/content/images/2013/Nov/test4.png)](/content/images/2013/Nov/test4.png)
+[![]({{site.url}}/content/images/2013/Nov/test4.png)]({{site.url}}/content/images/2013/Nov/test4.png)
The base render is very bluish since the lighting setup was a simple, grey-bluish dome light over the whole scene. The shadows are blotchy since I turned Vray's irradiance cache settings all the way down for faster render times; I decided that I would rather deal with the blotchy shadows in post and have a shot at making the deadline than wait for a very long render time. I wound up going with the thinner snow at the time since I wanted the trees to be more recognizable as trees, but in retrospect, that choice was probably a mistake.
The final step was some basic compositing. In After Effects, I applied post-processed DOF using a z-depth layer and Frischluft, color corrected the image, cranked up the exposure, and added vignetting to get the final result:
-[![](/content/images/2013/Nov/card.jpg)](/content/images/2013/Nov/card.jpg)
+[![]({{site.url}}/content/images/2013/Nov/card.jpg)]({{site.url}}/content/images/2013/Nov/card.jpg)
Looking back on this project two years later, I don't think the final result looks all that great. The image looks okay for two days of rushed work, but there is enormous room for improvement. If I could go back and change one thing, I would have chosen the much heavier snow cover version of the trees for the final composition. Also, today I would approach this project very differently; instead of ping-ponging between multiple programs for each component, I would favor an almost pure-Houdini pipeline. The trees could be modeled as L-systems in Houdini, perhaps with some base work done in Maya. The snow could absolutely be simmed in Houdini. For rendering and lighting, I would use either my own Takua Render or some other fast physically based renderer (Octane, or perhaps Renderman 18's iterative pathtracing mode) to iterate extremely quickly without having to compromise on quality.
@@ -0,0 +1,42 @@
+---
+layout: post
+title: PIC/FLIP Simulator Meshing Pipeline
+tags: [Coding, Fluid Simulator, Project Ariel]
+author: Yining Karl Li
+---
+
+In my last post, I gave a summary of how the core of my new PIC/FLIP fluid simulator works and shared some thoughts on the process of building OpenVDB into my simulator. In this post I'll go over the meshing and rendering pipeline I worked out for my simulator.
+
+Two years ago, when my friend [Dan Knowlton](http://www.danknowlton.com/) and I built our semi-Lagrangian fluid simulator, we had an immense amount of trouble finding a good meshing and rendering solution. We used a standard marching cubes implementation to construct a mesh from the fluid level set, but the meshes we wound up with had a lot of flickering issues. The flickering was especially apparent when the fluid had to fit inside of solid boundaries, since the liquid-solid interface wouldn't line up properly. On top of that, we rendered the fluid using Vray, but relied on an irradiance map + light cache approach that wasn't well suited to high motion and large amounts of refractive fluid.
+
+This time around, I've tried to build a new meshing/rendering pipeline that resolves those problems: it produces stable, detailed meshes that fit correctly into solid boundaries, all with minimal or no flickering. The following video is the same "dambreak" test from my previous post, but fully meshed and rendered using Vray:
+
+<div class='embed-container'><iframe src='https://player.vimeo.com/video/87050516' frameborder='0'>PIC/FLIP Simulator Dam Break Test- Final Render</iframe></div>
+
+One of the main issues with the old meshing approach was that marching cubes was run directly on the same level set we were using for the simulation, which meant that the resolution of the final mesh was effectively bound to the resolution of the fluid. In a pure semi-Lagrangian simulator, this coupling makes sense; in a PIC/FLIP simulator, however, the effective resolution of the simulation depends on the particle count rather than on the projection step's grid resolution. This property means that even in a simulation with a grid size of 128x64x64, extremely high resolution meshes should be possible if there are enough particles, as long as the level set is constructed directly from the particles, completely independently of the projection step grid dimensions.
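+
+To make this concrete, here's a minimal sketch of what stamping a level set directly out of particles looks like with OpenVDB's ParticlesToLevelSet tool, assuming a uniform particle radius. The adapter struct and function names are illustrative stand-ins, not code from my actual simulator:
+
+```cpp
+#include <vector>
+
+#include <openvdb/openvdb.h>
+#include <openvdb/tools/ParticlesToLevelSet.h>
+
+// Hypothetical adapter exposing simulator particles to OpenVDB's rasterizer.
+// ParticlesToLevelSet only needs size(), getPos(), and getPosRad().
+struct FlipParticleList {
+    using PosType = openvdb::Vec3R;  // typedef required by OpenVDB
+
+    std::vector<openvdb::Vec3R> positions;  // world-space positions
+    openvdb::Real radius = 0.05;            // uniform particle radius
+
+    size_t size() const { return positions.size(); }
+    void getPos(size_t n, openvdb::Vec3R& xyz) const { xyz = positions[n]; }
+    void getPosRad(size_t n, openvdb::Vec3R& xyz, openvdb::Real& r) const {
+        xyz = positions[n];
+        r = radius;
+    }
+};
+
+// Build a liquid SDF directly from particles. The output resolution is set
+// entirely by voxelSize, independent of the pressure projection grid.
+openvdb::FloatGrid::Ptr buildLiquidLevelSet(const FlipParticleList& particles,
+                                            double voxelSize) {
+    openvdb::FloatGrid::Ptr sdf =
+        openvdb::createLevelSet<openvdb::FloatGrid>(voxelSize, /*halfWidth=*/3.0);
+    openvdb::tools::ParticlesToLevelSet<openvdb::FloatGrid> raster(*sdf);
+    raster.rasterizeSpheres(particles);  // unions one sphere per particle
+    raster.finalize();                   // completes the rasterization
+    return sdf;
+}
+```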
+
+Fortunately, OpenVDB comes with an enormous toolkit that includes tools for constructing level sets from various types of geometry, including particles, and tools for adaptive level set meshing. OpenVDB also comes with a number of level set operators that allow for artistic tuning of level sets, such as tools for dilating, eroding, and smoothing them. At the SIGGRAPH 2013 OpenVDB course, [Dreamworks had a presentation](http://www.openvdb.org/download/openvdb_dreamworks.pdf) on how they used OpenVDB's level set operator tools to extract really nice looking, detailed fluid meshes from relatively low resolution simulations. I also integrated Walt Disney Animation Studios' [Partio](http://www.disneyanimation.com/technology/partio.html) library for exporting particle data to standard formats, so that my pipeline could output particles, level sets, and meshes.
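+
+As a rough sketch of what this kind of operator chain looks like in code, here is the dilate/smooth/erode trick expressed with OpenVDB's LevelSetFilter. The specific sequence of passes and the offset amount are my own guesses at reasonable values, not Dreamworks' exact recipe:
+
+```cpp
+#include <openvdb/openvdb.h>
+#include <openvdb/tools/LevelSetFilter.h>
+
+// Dilate -> smooth -> erode: smoothing between a dilation and an erosion
+// irons out individual particle spheres without thinning the fluid volume.
+void smoothLiquidLevelSet(openvdb::FloatGrid& sdf, float offsetInVoxels) {
+    openvdb::tools::LevelSetFilter<openvdb::FloatGrid> filter(sdf);
+    const float d = offsetInVoxels * float(sdf.voxelSize()[0]);
+    filter.offset(-d);       // negative offset dilates an inside-negative SDF
+    filter.gaussian();       // smooth away particle-sphere bumps
+    filter.offset(d);        // erode back to roughly the original volume
+    filter.meanCurvature();  // one final light smoothing pass
+}
+```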
+
+[![Zero adaptive meshing (on the left) versus adaptive meshing with 0.5 adaptivity (on the right). Note the significantly lower poly count in the adaptive meshing, but also the corresponding loss of detail in the mesh.]({{site.url}}/content/images/2014/Feb/adaptivemeshing.png)]({{site.url}}/content/images/2014/Feb/adaptivemeshing.png)
+
+I started by building support for OpenVDB's adaptive level set meshing directly into my simulator and dumping out OBJ sequences straight to disk. In order to save disk space, I enabled fairly high adaptivity in the meshing. However, upon doing a first render test, I discovered a problem: since OpenVDB's adaptive meshing optimizes the adaptivity per frame, the result is not temporally coherent with respect to mesh resolution. By itself this property is not a big deal, but it makes reconstructing temporally coherent normals difficult, which can contribute to flickering in final rendering. So, I decided that disk space was not such a big deal after all and simply disabled adaptivity in OpenVDB's meshing for smaller simulations; in sufficiently large sims, the scale of the final render more often than not makes normal issues far less important while disk space demands grow much greater, so the tradeoffs of adaptivity become more worthwhile.
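+
+For reference, adaptivity in OpenVDB's mesher is a single parameter on the volumeToMesh tool, so "disabling adaptivity" simply means passing zero. A minimal sketch, with an illustrative wrapper function:
+
+```cpp
+#include <vector>
+
+#include <openvdb/openvdb.h>
+#include <openvdb/tools/VolumeToMesh.h>
+
+// Mesh the zero isosurface of the liquid SDF. adaptivity = 0.0 produces a
+// uniform, temporally stable mesh; values toward 1.0 merge flat regions
+// into fewer polygons at the cost of temporal coherence.
+void meshLiquidLevelSet(const openvdb::FloatGrid& sdf,
+                        std::vector<openvdb::Vec3s>& points,
+                        std::vector<openvdb::Vec3I>& triangles,
+                        std::vector<openvdb::Vec4I>& quads,
+                        double adaptivity = 0.0) {
+    openvdb::tools::volumeToMesh(sdf, points, triangles, quads,
+                                 /*isovalue=*/0.0, adaptivity);
+}
+```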
+
+The next problem was getting a stable, fitted liquid-solid interface. Even with a million particles and a 1024x512x512 level set driving mesh construction, the produced fluid mesh still didn't fit the solid boundaries of the sim precisely. The reason is simple: level set construction from particles works by treating each particle as a sphere with some radius and then unioning all of the spheres together. The first solution I thought of was to dilate the level set and then difference it with a second level set of the solid objects in the scene. Since Houdini has full OpenVDB support and I wanted to test this idea quickly with visual feedback, I prototyped this step in Houdini instead of writing a custom tool from scratch. This approach wound up not working well in practice. I discovered that in order to get a clean result, the solid level set needed to be extremely high resolution to capture all of the detail of the solid boundaries (such as sharp corners). Since the output level set from VDB's difference operator has to match the resolution of the highest resolution input, the resultant liquid level set was also extremely high resolution. On top of that, the entire process was extremely slow, even on smaller grids.
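+
+The core of this rejected approach boils down to a single OpenVDB CSG call; the resolution coupling comes from the fact that the two inputs have to live on matching transforms. A sketch, assuming both grids are already level sets:
+
+```cpp
+#include <openvdb/openvdb.h>
+#include <openvdb/tools/Composite.h>
+
+// Carve the solid out of the (dilated) liquid level set. csgDifference
+// expects both inputs to be level sets on the same transform, which is why
+// the liquid grid had to match the solid's very high resolution. The call
+// modifies the liquid grid in place and leaves the solid grid empty.
+void carveSolidFromLiquid(openvdb::FloatGrid& liquid, openvdb::FloatGrid& solid) {
+    openvdb::tools::csgDifference(liquid, solid);
+}
+```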
+
+[![The mesh on the left has a cleaned up, stable liquid-solid interface. The mesh on the right is the same mesh as the one on the left, but before going through cleanup.]({{site.url}}/content/images/2014/Feb/edgecleanup.png)]({{site.url}}/content/images/2014/Feb/edgecleanup.png)
+
+The solution I wound up using was to process the mesh instead of the level set, since the mesh represents significantly less data, and at the end of the day the mesh is what ultimately needs a clean liquid-solid interface. The idea: from every vertex in the liquid mesh, raycast to find the nearest point on the solid boundary (this can be done either stochastically, or a level set version of the solid boundary can be used to inform a good starting direction). If the closest point on the solid boundary is within some epsilon distance of the vertex, move the vertex onto the solid boundary. Obviously, this approach is far simpler than attempting to difference level sets, and it works pretty well. I prototyped this entire system in Houdini.
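+
+Here's a minimal sketch of that vertex-snapping pass. Instead of explicit raycasts, this version samples the solid's signed distance field and walks each nearby vertex back along the SDF gradient, which reaches the same nearest boundary point for small distances; all of the names here are illustrative:
+
+```cpp
+#include <cmath>
+#include <vector>
+
+#include <openvdb/openvdb.h>
+#include <openvdb/tools/Interpolation.h>
+
+// Snap liquid-mesh vertices that sit within epsilon of the solid boundary
+// onto that boundary, by walking along the gradient of the solid's SDF.
+void snapVerticesToSolid(std::vector<openvdb::Vec3d>& vertices,
+                         const openvdb::FloatGrid& solidSdf,
+                         double epsilon) {
+    openvdb::tools::GridSampler<openvdb::FloatGrid, openvdb::tools::BoxSampler>
+        sampler(solidSdf);
+    const double h = solidSdf.voxelSize()[0];  // finite-difference step
+    for (openvdb::Vec3d& v : vertices) {
+        const double phi = sampler.wsSample(v);  // signed distance to solid
+        if (std::abs(phi) > epsilon) continue;   // vertex is far from boundary
+        // Central-difference gradient of the SDF, in world space.
+        openvdb::Vec3d grad(
+            sampler.wsSample(v + openvdb::Vec3d(h, 0, 0)) -
+                sampler.wsSample(v - openvdb::Vec3d(h, 0, 0)),
+            sampler.wsSample(v + openvdb::Vec3d(0, h, 0)) -
+                sampler.wsSample(v - openvdb::Vec3d(0, h, 0)),
+            sampler.wsSample(v + openvdb::Vec3d(0, 0, h)) -
+                sampler.wsSample(v - openvdb::Vec3d(0, 0, h)));
+        if (grad.normalize()) {
+            v -= grad * phi;  // move onto the phi == 0 isosurface
+        }
+    }
+}
+```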
+
+For rendering, I used Vray's ply2mesh utility to dump the processed fluid meshes directly to .vrmesh files and rendered the result in Vray using pure brute force pathtracing to avoid flickering from temporally incoherent irradiance caching. The final result is the video at the top of this post!
+
+Here are some still frames from the same simulation. The video was rendered with motion blur; these stills do not have any motion blur.
+
+[![]({{site.url}}/content/images/2014/Feb/dambreak.0105.png)]({{site.url}}/content/images/2014/Feb/dambreak.0105.png)
+
+[![]({{site.url}}/content/images/2014/Feb/dambreak.0149.png)]({{site.url}}/content/images/2014/Feb/dambreak.0149.png)
+
+[![]({{site.url}}/content/images/2014/Feb/dambreak.0200.png)]({{site.url}}/content/images/2014/Feb/dambreak.0200.png)
+
+[![]({{site.url}}/content/images/2014/Feb/dambreak.0236.png)]({{site.url}}/content/images/2014/Feb/dambreak.0236.png)
+
+[![]({{site.url}}/content/images/2014/Feb/dambreak.0440.png)]({{site.url}}/content/images/2014/Feb/dambreak.0440.png)