This is a working collection of tools to model the appearance of fabrics. Most of the code is based on Mitsuba and we occasionally use Blender to visualize things.
We fit ASGs to the per-point visibility function and discard light paths that reach an invisible region. This experiment is still hacky and needs improvement.
@article{wu2011physically,
title={Physically-based interactive bi-scale material design},
author={Wu, Hongzhi and Dorsey, Julie and Rushmeier, Holly},
journal={ACM Transactions on Graphics (TOG)},
volume={30},
number={6},
pages={1--10},
year={2011},
publisher={ACM New York, NY, USA}
}
@article{jimenez2016practical,
title={Practical real-time strategies for accurate indirect occlusion},
author={Jim{\'e}nez, Jorge and Wu, Xianchun and Pesce, Angelo and Jarabo, Adrian},
journal={SIGGRAPH 2016 Courses: Physically Based Shading in Theory and Practice},
year={2016}
}
@inproceedings{zhu2023realistic,
title={A Realistic Surface-based Cloth Rendering Model},
author={Zhu, Junqiu and Jarabo, Adrian and Aliaga, Carlos and Yan, Ling-Qi and Chiang, Matt Jen-Yuan},
booktitle={ACM SIGGRAPH 2023 Conference Proceedings},
pages={1--9},
year={2023}
}
Update: Added the cylinder model and a comparison with the actual fabric.
One of the microstructures is roughly 300 microns across (the black one, measured perpendicular to the highlight). This measurement is not very reliable, because if we rotate warp and weft we don't see the same effect.
Real | Microscope | Render |
---|---|---|
Update: Added the BRDF from Woven Fabric Capture From a Single Photo by Jin et al.
Update: We can now do delta transmission too!
SpongeCake (see citation below) is a popular BSDF that has been used a lot for cloth. I have a work-in-progress implementation in `spongecake_bsdf.py`.

In the image below, each row is for a different `alpha` (their roughness parameter), taking values 0.1, 0.5, 1.0. Each column is for a different `optical_depth`, taking values 1.0, 3.0, 5.0. Please note that we use the surface version of the SGGX distribution function, i.e. `S = diag([alpha**2, alpha**2, 1])`, here.
(Why is the shading gone in the last row?!)
For comparison, this is their render (from Fig. 8).
Note that what we call `optical_depth` is really the product T·ρ from their equations. This is what they call just T in their figures.
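To make the T·ρ bookkeeping concrete, here is a minimal sketch of how an optical depth turns into slab attenuation via Beer–Lambert. This is my own illustration, not code from the paper: it ignores the direction-dependent projected flake area σ(ω) and only accounts for the longer oblique path through the slab.

```python
import numpy as np

def slab_transmittance(optical_depth, cos_theta):
    """Unscattered transmittance through a flake slab.

    `optical_depth` is T * rho in the paper's notation (thickness times
    flake density). Dividing by |cos_theta| accounts for the longer path
    of an oblique ray; the SGGX projected-area factor is omitted here.
    """
    return np.exp(-optical_depth / np.abs(cos_theta))

# A normally incident ray is attenuated less than a grazing one.
print(slab_transmittance(1.0, 1.0))   # e^-1 ~ 0.3679
print(slab_transmittance(1.0, 0.25))  # e^-4 ~ 0.0183
```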
We redid the same image using the fiber version of the SGGX distribution function, i.e. `S = diag([1, 1, alpha**2])`.
For comparison, the following figure is from the original paper:
Note that, qualitatively, we reproduce some of the fuzz seen in their renders. This is promising.
@article{wang2022spongecake,
title={Spongecake: A layered microflake surface appearance model},
author={Wang, Beibei and Jin, Wenhua and Ha{\v{s}}an, Milo{\v{s}} and Yan, Ling-Qi},
journal={ACM Transactions on Graphics (TOG)},
volume={42},
number={1},
pages={1--16},
year={2022},
publisher={ACM New York, NY}
}
Here, we show our implementation of the SGGX distribution function and contrast it with the one in the original paper (see citation below).
Surface | Fiber |
---|---|
Original:
Note that we get the same bands around the equator for the fiber version and peaks at the poles for the surface version. This is a sanity check that we are on the right track.
@article{heitz2015sggx,
title={The SGGX microflake distribution},
author={Heitz, Eric and Dupuy, Jonathan and Crassin, Cyril and Dachsbacher, Carsten},
journal={ACM Transactions on Graphics (TOG)},
volume={34},
number={4},
pages={1--11},
year={2015},
publisher={ACM New York, NY, USA}
}
We also have an implementation of Anisotropic Spherical Gaussians (ASGs), which we use to fit the visibility function. Compare our implementation with the original paper below. LGTM!
A | B | C |
---|---|---|
@article{xu2013anisotropic,
title={Anisotropic spherical gaussians},
author={Xu, Kun and Sun, Wei-Lun and Dong, Zhao and Zhao, Dan-Yong and Wu, Run-Dong and Hu, Shi-Min},
journal={ACM Transactions on Graphics (TOG)},
volume={32},
number={6},
pages={1--11},
year={2013},
publisher={ACM New York, NY, USA}
}
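As a reference for what we are fitting, here is a minimal evaluator for a single ASG lobe following Xu et al. 2013, G(ν) = c · max(ν·z, 0) · exp(−λ(ν·x)² − μ(ν·y)²). The function and parameter names are mine, not from our codebase:

```python
import numpy as np

def asg(v, x, y, z, lam, mu, c=1.0):
    """Anisotropic Spherical Gaussian (Xu et al. 2013).

    (x, y, z) is an orthonormal frame with z the lobe axis;
    lam, mu >= 0 are the bandwidths along x and y; c is the amplitude.
    The max(v.z, 0) "smooth term" clamps the lobe to the front hemisphere.
    """
    smooth = max(np.dot(v, z), 0.0)
    return c * smooth * np.exp(-lam * np.dot(v, x) ** 2 - mu * np.dot(v, y) ** 2)

x, y, z = np.eye(3)
# Peak value along the lobe axis is c; with lam > mu the lobe falls off
# faster when tilting toward x than toward y.
print(asg(z, x, y, z, lam=10.0, mu=2.0))  # 1.0
```

For the visibility experiment, one could reject a light path when the fitted ASG value in its direction drops below a threshold, which is how I read the "discard paths that reach an invisible region" step.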
(UV map seems wrong)
(Why does this become black at low roughness and high thickness?)
Teapots:
Actual cloth-like models:
I was expecting that at `alpha = 1`, sampling from SGGX would be the same as sampling from a uniform sphere. This is not what happens in practice. Compare the two rows below (the first uses uniform sphere sampling, the second SGGX with `alpha = 1`). They don't look the same. Funnily, neither of them looks like the original paper either.

(OK, they look the same when I leave D as it is and don't divide it by 4.)
The following is from when I used the wrong frame for sampling. These experiments show that figuring out a principled way to do SGGX sampling will solve my problem.

The fact that shading is missing in my renders is a crucial clue. I think it is a good handle for debugging, since I can reason about it concretely.

I may be missing the diffuse term? Not sure. There is some material on this in the SGGX paper as well as in https://github.com/taqu/Microflake-SGGX/blob/master/microflake_sggx.cpp. I just don't understand the theory well enough at the moment.
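One thing worth pinning down numerically: at `alpha = 1` the SGGX matrix is the identity, so ωᵀS⁻¹ω = 1 for every unit direction and the NDF D(ω) = 1 / (π √|S| (ωᵀS⁻¹ω)²) is a constant. Sampling proportionally to D should therefore be exactly uniform over the sphere, which means any remaining mismatch points at the sampling frame or a normalization factor, not at the NDF itself. A quick check (my own sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.normal(size=(1000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)  # uniform unit directions

# SGGX NDF with S = I (i.e. alpha = 1), evaluated for every direction.
S = np.eye(3)
q = np.einsum("ij,jk,ik->i", v, np.linalg.inv(S), v)  # w^T S^-1 w, all = 1
D = 1.0 / (np.pi * np.sqrt(np.linalg.det(S)) * q**2)

# D is the constant 1/pi for every direction on the sphere.
assert np.allclose(D, 1.0 / np.pi)
```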
- How do you actually sample the attenuation for the multi-layered, single-scattering SpongeCake model?
- Is the orientation map the same as a tangent map?