
@DavidBluecame DavidBluecame released this Mar 14, 2017 · 29 commits to master since this release

*   IMPORTANT: Path/Photon OneDirectLight - attempt to sample lights more uniformly

    As reported in http://www.yafaray.org/node/803 there are artifacts in the path tracing and photon mapping algorithms when there is more than one light.

    I decided to change the input data and use an almost purely correlative (sequential) numbering for each sample, with a separate correlative counter per thread. This seems to reduce the noise, give similar results no matter how many lights are used, and sample the lights more uniformly.

    This is a very significant change. I would expect scenes to be a little noisier now, but with much more correct lighting when there is more than one light in the scene.

    In any case we need to keep an eye on this and perhaps fine-tune it further, or even revert it completely and look for another solution. For now I will leave it this way and see what happens...
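
The per-thread correlative numbering described above can be sketched as follows (an illustrative example, not the actual YafaRay code; `LightPicker` and its members are hypothetical names):

```cpp
#include <cstddef>

// Sketch: choosing which light to sample using a per-thread correlative
// (sequential) sample number. Visiting the lights round-robin samples
// them uniformly regardless of how many lights the scene contains,
// whereas an independent random pick per sample can leave some lights
// undersampled at low sample counts.
struct LightPicker {
    std::size_t num_lights;
    std::size_t next_sample = 0;  // per-thread correlative counter

    // Each call advances the counter and returns the next light index.
    std::size_t pickLight() {
        return (next_sample++) % num_lights;
    }
};
```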

*   IMPORTANT: all integrators, SPPM and path roulette: fixing repetitive non-random patterns
    As described in http://www.yafaray.org/node/792 I discovered a lack of randomness in the bidirectional integrator.

    I found out that it was caused by an insufficiently random seeding of the random_t PRNG object. For example, during the first AA pass, offset = 0, so the PRNG object was seeded with the value "123" for every tile, causing the artifact detected in the bidirectional integrator.

    However, even where it's not obviously visible, I believe the same issue affected the SPPM integrator as well. Therefore I added more randomness to the PRNG seeding for all integrators, including SPPM. I hope this also improves Russian roulette randomness.

    This change should not cause ill effects and should be beneficial in my opinion, but it's difficult to know for sure. We need to keep an eye on this to ensure no new issues happen now.
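
The idea of mixing more entropy into the seed can be sketched like this (an assumption about the approach; the hash below is a standard integer mix, not YafaRay's actual seeding code):

```cpp
#include <cstdint>
#include <initializer_list>

// Sketch: combine the AA pass offset, tile coordinates and thread id
// into the PRNG seed instead of using a constant, so that tiles no
// longer share identical random sequences when offset == 0.
std::uint32_t mixSeed(std::uint32_t pass_offset, std::uint32_t tile_x,
                      std::uint32_t tile_y, std::uint32_t thread_id) {
    std::uint32_t h = 123u;  // the old constant seed, now just a starting value
    for (std::uint32_t v : {pass_offset, tile_x, tile_y, thread_id}) {
        h ^= v + 0x9e3779b9u + (h << 6) + (h >> 2);  // boost-style hash combine
    }
    return h;
}
```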

*   Path Tracing: Russian roulette for faster path renders, controlled by the parameter min_bounces

    * If this parameter is set to 0, Russian roulette will be enabled for the entire path.
    * If set to the same value specified in depth (max bounces), Russian roulette will be disabled.
    * If set to a value between 0 and max bounces, Russian roulette will only start being applied after that number of bounces, so we can, for example, get decent sampling in dark areas and still get a good speedup with less noise.

    The lower this parameter, the faster the render but the noisier the result.

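
The min_bounces rule above can be sketched as follows (`rouletteTerminate` is a hypothetical helper, not YafaRay's actual function):

```cpp
// Sketch: Russian roulette termination that only becomes active once
// the path has already made min_bounces bounces. Before that, paths
// always survive; past max_bounces, they always terminate.
// 'survival' is the path's survival probability, 'rnd' a uniform [0,1) sample.
bool rouletteTerminate(int bounce, int min_bounces, int max_bounces,
                       double survival, double rnd) {
    if (bounce >= max_bounces) return true;   // hard depth limit
    if (bounce < min_bounces) return false;   // roulette not active yet
    return rnd >= survival;                   // probabilistic termination
}
```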

*   IMPORTANT: big changes to texture interpolation and to the ImageHandlers. All image buffers are now internally linear and optimized by default.

    This solves the problem detected at http://yafaray.org/node/787, where I found out that YafaRay was doing the texture interpolation *after* decoding the texels' color space. This caused significant color differences between standard bilinear/bicubic interpolation and trilinear or EWA mipmaps.

    I believe that the correct way to interpolate textures is to decode color space into linear space every time a texel is read, before the texels are used for interpolation. That way the interpolation is calculated correctly in linear color space.

    From now on the user is responsible for selecting the correct color space for all textures, including bump maps, normal maps, etc. Non-RGB / stencil / bump / normal map textures are typically already linear, so the user should select "linearRGB" for them in the texture properties; if the user keeps the default sRGB by mistake, YafaRay will (incorrectly) apply the sRGB->LinearRGB conversion, making the values wrong.

    However, I've added a fail-safe: for any "float" textures, bump maps, normal maps, etc., when getting colors after interpolation YafaRay will do an inverse conversion back to the original color space. This way, a mistake in the user's color space selection for bump maps, normal maps, etc. will not cause significant problems in the image, as the values are converted back to their original color space. In that case, however, rendering will be slower and artifacts can appear because the interpolation takes place in the wrong color space. For optimal results, the user must select the correct color space for every texture.
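
The decode-then-interpolate order argued for above can be illustrated with a one-channel sketch (illustrative helper names, not YafaRay's code; the sRGB formula is the standard IEC 61966-2-1 transfer function):

```cpp
#include <cmath>

// Standard sRGB -> linear decoding for one channel in [0,1].
double srgbToLinear(double c) {
    return (c <= 0.04045) ? c / 12.92 : std::pow((c + 0.055) / 1.055, 2.4);
}

// Correct order: decode each texel to linear space first, then interpolate.
double lerpLinear(double a_srgb, double b_srgb, double t) {
    return srgbToLinear(a_srgb) * (1.0 - t) + srgbToLinear(b_srgb) * t;
}

// The old (incorrect) order, kept for comparison: interpolate the raw
// sRGB values, then decode. The two orders give different results.
double lerpThenDecode(double a_srgb, double b_srgb, double t) {
    return srgbToLinear(a_srgb * (1.0 - t) + b_srgb * t);
}
```

Halfway between black and white, the correct order yields 0.5 in linear space, while interpolating first and decoding afterwards yields a much darker value.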

*   Textures are now "optimized" by default. I think it's clear by now that optimized textures greatly reduce memory usage and apparently don't cause slowdowns (they might even be slightly faster due to reduced RAM access?)

*   Fixed uninitialized values generated by Ambient Occlusion sampling

*   IMPORTANT: Initial support for Texture Mipmaps / Ray Differentials. BIG changes to ImageHandlers
     * Modified all ImageHandlers to standardise access and make them more flexible
     * Added new Grayscale internal buffers (optional)
     * Reorganized all Interpolation and GetColor code
     * Added MipMap capability to ImageHandlers
     * Added Trilinear MipMap interpolation based on Ray Differentials.
     * Added EWA MipMap interpolation based on Ray Differentials
     * Heavily modified IBL blur function, to use mipmaps with manually calculated mipmap level instead of the previous dedicated "IBL blur" process

    All these changes are SIGNIFICANT and could cause new bugs... we have to monitor them closely.
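
Selecting a mipmap level from ray differentials can be sketched like this (illustrative names and a simplified footprint measure, not YafaRay's actual implementation):

```cpp
#include <algorithm>
#include <cmath>

// Sketch: the ray differential's footprint on the texture, measured in
// texels of the base level, determines the mipmap level. Trilinear
// filtering would then blend the two integer levels bracketing the
// returned fractional level; EWA instead filters an elliptical footprint.
// du, dv: texture-space extent of the footprint; base_width: texels.
double mipLevel(double du, double dv, int base_width) {
    double footprint = std::max(du, dv) * base_width;  // size in texels
    return std::max(0.0, std::log2(std::max(footprint, 1e-9)));
}
```

A footprint of one texel maps to level 0 (full resolution); each doubling of the footprint moves one level up the pyramid.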

*   Fixed EXR MultiLayer image file saving

*   Angular camera: fixed wrong renders due to incorrect default clipping
    As stated in http://yafaray.org/node/779 there was a problem in the Angular camera rendering. In many cases, the background was shown instead of the surrounding objects.
    I've found the problem in the default clipping plane calculation for the camera, when no clipping is supposed to happen. By default, the far clipping distance is set to -1.f, which normally allows rays to travel without clipping as they are not supposed to go behind the camera.
    However for angular cameras, rays can actually go behind the camera when using wide angles. In those cases, the rays were incorrectly clipped at a distance of 1.f units behind the camera position.
    So, I've set a new default far clipping distance for the angular camera, using a very large negative value. I hope this allows all rays (in front of and behind the camera) to travel without clipping.
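
A minimal sketch of the clipping rule described above, assuming hits behind the camera carry negative distances (the helper and encoding here are illustrative, not YafaRay's actual code):

```cpp
// Sketch: with a negative far clip, a hit is discarded when it lies
// farther behind the camera than the clip plane. The old default of
// -1.f therefore cut rays 1 unit behind the camera; a very large
// negative default effectively disables clipping behind the camera.
bool clippedBehind(double hit_distance, double far_clip) {
    return hit_distance < far_clip;
}
```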

*   Image Texture Interpolation fixes
    Proposed fixes to solve the problems described in http://www.yafaray.org/node/783

    The fixes solve the problem with the strange stripes at top and left of the texture.

    Also, the changes implement extra code to take into account texture edge interpolation differently when:
    * Texture is alone (clip), extended or checkered. In this case, the edges are interpolated against themselves
    * Texture is repeated. In this case the way the edges are interpolated depends on the MirrorX/MirrorY parameters. Depending on these params, the edges are interpolated against the opposite edge (normal) or the same edge (mirrored).
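
    The repeated-texture edge handling with optional mirroring can be sketched as follows (an illustrative helper, not the actual YafaRay code):

```cpp
// Sketch: mapping a texel index that falls outside [0, size) back into
// range. Plain repeat wraps to the opposite edge, so the last texel is
// interpolated against the first; mirroring reflects the index back,
// so an edge is interpolated against itself.
int wrapTexel(int i, int size, bool mirror) {
    if (mirror) {
        int period = 2 * size;
        int m = ((i % period) + period) % period;  // normalize to 0 .. 2*size-1
        return (m < size) ? m : period - 1 - m;    // reflect the upper half
    }
    return ((i % size) + size) % size;             // plain repeat (wrap around)
}
```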

*   Texture mapping: allow MirrorX,MirrorY even when Repeat = 1

*   Added building instructions for several platforms and scenarios

*   Qt support reintroduced and updated for YafaRay v3, but still in a basic state; many features are not yet available in the Qt interface

*   CMake/Swig: modified Ruby interface to avoid Blender crash with Ruby bindings enabled

*   CMake: IMPORTANT change/updates to the building system. Versioning integration with Git. Standardisation of paths. Re-introduction of standalone builds and improvements to runtime search of libraries.
    Several changes have been introduced to the CMake building system to:
    * Integrate versioning with Git for automatic versioning of builds based on the current Git tags.
    * Standardisation of installation paths for the different operating systems. Now the plugins will always be installed in the folder "yafaray-plugins", no matter whether YafaRay is installed as a pure Core release into the bin/lib system folders or as a Blender add-on.
    * Removed the automatic Blender-Exporter git deletion+download every time Core was built as a Blender add-on. This made rebuilds and tests very slow and cumbersome, and using "rm -rf" in the CMake build process is risky and non-portable. Now the developer *must* download the Blender-Exporter manually using git, and the path to the Blender-Exporter code *must* be specified in the UserConfig.txt file. It's a little more inconvenient the first time, but I think it's much more convenient for subsequent rebuilds and tests.
    * Added CMake option for searching for libraries in an alternate location.
    * Removed the findOpenCV CMake module, which was not working well (perhaps old and outdated). Without it, finding the OpenCV files seems easier in general; however, in some cases it might be necessary to set the path to the OpenCV libraries manually.
    * For MacOSX, added option to select the Framework in the UserConfig.txt file for convenience
    * yafaray-xml: better autodetection of the plugins path, but in some cases "-pp" may still be needed

*   Fix bug that caused many extra render passes to be generated in some cases

*   Fix for SPPM sudden brightness change when reaching approx 4,300 million photons
    As reported in http://www.yafaray.org/node/772, when SPPM accumulates a total of approximately 4,300 million photons, the scene brightness changes very suddenly.
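
    A plausible explanation (an assumption on my part; the report does not name the exact variable) is that ~4,300 million is very close to 2^32 = 4,294,967,296, suggesting a 32-bit photon counter wrapping around. A sketch of the wraparound and the 64-bit fix:

```cpp
#include <cstdint>

// A 32-bit accumulator silently wraps past 4,294,967,295, which would
// make any radiance estimate divided by the photon count jump suddenly.
std::uint32_t addPhotons32(std::uint32_t total, std::uint32_t pass) {
    return total + pass;  // wraps around at 2^32
}

// Widening the accumulator to 64 bits avoids the wraparound for any
// realistic photon count.
std::uint64_t addPhotons64(std::uint64_t total, std::uint64_t pass) {
    return total + pass;
}
```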