
VFX Enhance: more image layers #576

Open
zebastian opened this issue Oct 27, 2018 · 55 comments

@zebastian
Collaborator

See the feature request by stilikon here:
https://fractalforums.org/mandelbulber/14/mandelbulber-vfx-feature-ideas/2042

  • AOVs / Enable export of more image layers than Color, Normal, and ZDepth.

For compositing it is crucial to have different layers (AOVs) from the render.
As I understand it, these layers must already be present internally in Mandelbulber anyway.
It would be great to be able to export them separately.

The most important here would be a "World Position" pass. (This would enable us to do complete relighting in compositing.)
Other very useful ones:
Diffuse, Specular, Ambient Occlusion, Emission, Fog

@zebastian zebastian self-assigned this Oct 27, 2018
@zebastian zebastian added this to the 2.16 milestone Oct 27, 2018
@adrianmeyerart
Contributor

Hey guys,
is there any news on this?

This "World Position" image layer would be such a game changer due to the relighting possibilities.

@ghost ghost modified the milestones: 2.16, 2.19 May 13, 2019
@adrianmeyerart
Contributor

?

@zebastian
Collaborator Author

Hi @MeyerAdrian, just to be sure I understand the concept of world position correctly:
You want a new image pass where the color of a pixel encodes its position in 3D space, so for example a pixel at the absolute point [1,3,2] is colored with a specific value that is unique for this triple?
--> Like a globalized z-buffer image?
Do you have some documentation / resources on the best practices for this (which dimension goes in which color channel)?

@adrianmeyerart
Contributor

Hi,
thanks for getting back to me!
Sure...

Yes, I mean another image pass just like the current Normal Pass.
The normal pass is an RGB pass where you take the shading surface normal vector (XYZ) of a pixel and translate it into an RGB value representing the surface direction of the object at that pixel.

A World Position Pass is very similar.
You sample the world position vector (XYZ) at the given shading point and translate that into RGB as well. So for each pixel in the image, you know exactly where in 3D space that pixel belonged in the original 3D scene.
"World" meaning here that during the translation into RGB values you do not convert the position vectors into camera space or anything else; you just leave them as they are, in scene coordinate space.

Combined with the normal pass, this allows you in compositing (e.g. in Nuke) to basically recreate the whole scene in 3D space, relight it, and do all sorts of cool stuff with it.
It also makes it possible, to some extent, to combine the fractals with other regular 3D geometry / renderings.

It looks something like this for a cube:
ppassinternal

I searched for an example implementation in Blender, but they solved it in another way.
Here it is from the Appleseed open-source renderer. I think it's basically just something like this:

```cpp
// Fetch the coordinates of the pixel being shaded...
const Vector2i& pi = pixel_context.get_pixel_coords();

// ...and a float pointer to that pixel's storage in the output tile.
float* out =
    reinterpret_cast<float*>(
        m_tile->pixel(
            pi.x - m_tile_origin_x,
            pi.y - m_tile_origin_y));

// If the ray hit a surface, write the raw world-space hit point into RGB.
if (shading_point.hit_surface())
{
    const Vector3d& p = shading_point.get_point();
    out[0] = static_cast<float>(p[0]);
    out[1] = static_cast<float>(p[1]);
    out[2] = static_cast<float>(p[2]);
}
```

Appleseed Position Pass Github

Thanks a lot so far!

PS.
While checking on this again, I encountered a bug with the current Normal Pass in OpenCL mode.
This would probably be the same issue with a new Position Pass, so it might be good to keep in mind.
I've opened a new issue.
Normal Pass OpenCL Bug

@buddhi1980
Owner

I suppose the only format which can handle this is EXR, which works with floating-point numbers. With PNG it would need some special mapping to an integer format, which would be a dramatic loss of coordinate resolution.

@adrianmeyerart
Contributor

Yes,
.exr 16-bit half or, mostly, 32-bit float is the only useful format in this case. Same for the Surface Normal Pass. Coordinate resolution is very important here.
Software that can make sense of these passes expects these formats anyway.

@zebastian
Collaborator Author

@buddhi1980
If we take the trivial approach and map the x position range (-FLT_MAX / 2, FLT_MAX / 2) to the R range (0,1), then the resolution in the pixels "around" 0 will be too poor. IMHO we always require a bounding box (Limits), which marks the minimum and maximum color of the domain.
What do you think?
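To illustrate why the trivial mapping fails near the origin, here is a quick sketch (`naiveMapToR` is a hypothetical function for illustration only, not proposed code):

```cpp
#include <cassert>
#include <cfloat>

// Hypothetical mapping, for illustration only: squeeze x from
// (-FLT_MAX / 2, FLT_MAX / 2) linearly into the R range (0, 1).
// Float has only ~7 significant digits, so adding a tiny offset to
// 0.5f is swallowed entirely, collapsing nearby scene points.
float naiveMapToR(float x) { return x / FLT_MAX + 0.5f; }
```

With this mapping, scene points even 1000 units apart near the origin collapse to the identical R value, which is exactly the resolution problem described above.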
Quick example of the pass:
settings_specular

@adrianmeyerart
Contributor

adrianmeyerart commented Jun 13, 2019

Hey,
not sure if I understand correctly...
But it's actually very important not to normalize anything to bounding limits or the like.
It should really be the raw 32-bit float position values.
So if a pixel of the fractal belongs to a surface point with the XYZ position (-4.30, 1.32, -32.02), for example, the RGB values of the pixel should be exactly the same, including negative values and values far above or below 1.

In the sample image I posted above, you can see nicely that there is a black area in one corner. That is the "-x, -y" corner, so the values for x and y are negative, below 0, showing up as black when displayed as a normal sRGB image. But in compositing these negative values get picked up correctly.
Whereas in the "+x, -y" corner everything seems to be pure red, which is because the R (X) values go far beyond 1, depending on the scene scale.

Should be exactly the same with the Surface Normal.
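The raw-value convention described above could be sketched like this (illustrative names, not Mandelbulber's actual API):

```cpp
#include <array>
#include <cassert>

// Illustrative sketch (names are hypothetical, not Mandelbulber's API):
// a world position pass writes the raw scene-space hit point into RGB,
// with no remapping to [0, 1] and no clamping of negative values.
struct Vec3 { double x, y, z; };

std::array<float, 3> worldPositionPixel(const Vec3 &hitPoint, bool hitSurface)
{
	if (!hitSurface)
		return {0.0f, 0.0f, 0.0f}; // background: no surface hit, write zeros

	// Raw scene coordinates; values may be negative or far beyond 1.
	return {static_cast<float>(hitPoint.x), static_cast<float>(hitPoint.y),
		static_cast<float>(hitPoint.z)};
}
```

A point like (-4.30, 1.32, -32.02) thus lands in the pixel unchanged, which is what compositing packages expect.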

@zebastian
Collaborator Author

Hi Adrian, OK, I see.
This makes storing easier, since I can use the position directly as RGB.

I am not sure the image libraries will behave correctly when the pixels are out of range, though. But we will find out...

@adrianmeyerart
Contributor

Hey,
yes, that should luckily actually make it easier.

It only works with .EXR, of course, not with .JPG or .PNG, because those can't store negative floating-point values, just 0-255 for 8-bit or 0-65535 for 16-bit .PNG.
But .EXR is perfectly suited for things like that and supports it natively; in CG, all renderers render passes like this with negative values.

While you figure this out, could you maybe also check the Surface Normal Pass implementation again?
Those values should actually also be in a (-1 to 1) range in 32-bit float,
showing the normal direction of the shaded surface point in world space. So it's just a direction vector in RGB, again not mapped to any other range or coordinate space.
Here values don't go beyond an amplitude of ±1, though, because a normal direction vector is normalized (as the name suggests), so it has an amplitude of 1, whereas position values can be far above or below 1.
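A world-space normal pass along these lines could be sketched as (illustrative only, not Mandelbulber code):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Illustrative sketch (not Mandelbulber code): a world-space normal pass
// stores the normalized direction vector directly as RGB in [-1, 1],
// with no remap into [0, 1] and no camera-space transform.
std::array<float, 3> worldNormalPixel(double nx, double ny, double nz)
{
	// Re-normalize to guard against numeric drift; the length must be 1.
	const double len = std::sqrt(nx * nx + ny * ny + nz * nz);
	return {static_cast<float>(nx / len), static_cast<float>(ny / len),
		static_cast<float>(nz / len)};
}
```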

It should look something like the upper right picture, and the Position Pass like the lower right.

Right now Mandelbulber does some kind of object normals in a 0-1 range, but still with that weird blue value far above 1.
standard_normals_sheet
Also attaching the original .EXR.
standard_util_passes.zip

We had a discussion about that a while ago.
But maybe if it is implemented the same way as the Position Pass, the problems will be gone.

Thanks so much, really appreciate!!!

@adrianmeyerart
Contributor

PS.
I think it's probably best to disable the feature for non-.exr output.
You could remap the floats to 0-255 sRGB and clamp for .jpg and .png, but that's more work and pretty much useless anyway.

@zebastian
Collaborator Author

@MeyerAdrian @buddhi1980
I added full support for the world position; see the new image channels p.X, p.Y, p.Z
Screenshot from 2019-06-15 13-40-46
Interestingly, PNG / JPG / TIFF all handle "out of bounds" pixels without crashing; see the PNG result:
settings_world

@adrianmeyerart
Contributor

Awesome, thanks!!
Could you maybe attach a .zip with the .exr so I can have a look at it?

The background looks a bit funky :-P
Guess it would be cleaner if it were black, but if that's hard to do, I hope it doesn't cause any issues.

Do you think you could revisit the Surface Normal Pass with that approach again, as I mentioned above?

Thanks a lot! Looking forward to testing this

@zebastian
Collaborator Author

sure, here it is:
settings.zip

@adrianmeyerart
Contributor

adrianmeyerart commented Jun 15, 2019

thanks a lot!
looks great, and the values seem to be just as expected (-:

you exported as 16-bit half float, right? Just to make sure, could you also test 32-bit full float?
And could you maybe also test this with stereo equirectangular mode?
Because that's the main thing I need this to work for.
(hope @buddhi1980 is going to make Position and Normal also work with OpenCL, otherwise these large VR renders will be veery slow :-P)

the funky artifacts are then just in the .png and .jpg etc., so I guess that's fine (-;
checking the normals again would be awesome. Or should we start a separate thread for this?

Thanks for implementing!

@ghost

ghost commented Jun 15, 2019 via email

@adrianmeyerart
Contributor

Omnidirectional Stereoscopic! Working on some VR? 🍄⚡️

Sorry don't really understand the question? (-;

@zebastian
Collaborator Author

@MeyerAdrian here is a mandelbulb in equirectangular with all channels enabled and set to 32-bit:
https://we.tl/t-CQCTYAHEMu (was too big for github)
```
channels (type chlist):
A, 32-bit floating-point, sampling 1 1, plinear
B, 32-bit floating-point, sampling 1 1, plinear
G, 32-bit floating-point, sampling 1 1, plinear
R, 32-bit floating-point, sampling 1 1, plinear
Z, 32-bit floating-point, sampling 1 1, plinear
d.B, 32-bit floating-point, sampling 1 1, plinear
d.G, 32-bit floating-point, sampling 1 1, plinear
d.R, 32-bit floating-point, sampling 1 1, plinear
n.X, 32-bit floating-point, sampling 1 1, plinear
n.Y, 32-bit floating-point, sampling 1 1, plinear
n.Z, 32-bit floating-point, sampling 1 1, plinear
p.X, 32-bit floating-point, sampling 1 1, plinear
p.Y, 32-bit floating-point, sampling 1 1, plinear
p.Z, 32-bit floating-point, sampling 1 1, plinear
s.X, 32-bit floating-point, sampling 1 1, plinear
s.Y, 32-bit floating-point, sampling 1 1, plinear
s.Z, 32-bit floating-point, sampling 1 1, plinear
compression (type compression): zip, multi-scanline blocks
dataWindow (type box2i): (0 0) - (1199 1199)
```

@zebastian
Collaborator Author

@buddhi1980 I have two questions:

  1. Is diffuse the color channel? If so, I can reimplement it to get a float-precision color channel...
  2. What about the surface normals: should I change the mapping to what Adrian suggested, or is the other format better? (since the other format may also be a production standard)

@adrianmeyerart
Contributor

adrianmeyerart commented Jun 15, 2019

32Bit equirectangluar looks perfect!
Awesome! Thanks so much, such a gamechanger for me.

@buddhi1980 I have two questions:

  1. Is diffuse the color channel? If so, I can reimplement it to get a float-precision color channel...

Ah, would this maybe fix the issue where the color channel looks so different in 32-bit EXR as opposed to 8-bit JPG, as I mentioned here? That would be awesome.
https://fractalforums.org/mandelbulber/14/mandelbulber-2-14-32-bit-exr-bugs/2016
debug_linear_exr_s2_15
debug_linear_exr_s2_15_2

  2. What about the surface normals: should I change the mapping to what Adrian suggested, or is the other format better? (since the other format may also be a production standard)

The production standard is definitely the way I suggested.
All apps like Nuke, After Effects, etc. expect it that way.
But you might also just add it as an extra layer, so Object Space Normals (which you already have) and World Space Normals, so nobody will miss it (-;

@buddhi1980
Owner

@zebastian

  1. Diffuse is just the color of the surface (without any shading). Currently it is integer. It doesn't make sense to store it at floating-point resolution, because the benefit would be low and memory usage would grow.
    This channel is currently used to apply the SSAO effect.
  2. About surface normals, we can add an option to select between modes. Currently normal vectors are oriented in screen space (that's why z is always > 0). In the second mode they can be global.
    By the way, I have to check why Adrian observed values greater than 1.0.

@buddhi1980
Owner

@zebastian, a general remark about image channels. As I see in the code, they are stored in 3 buffers at the same time. This consumes a lot of memory. It would be enough to store them in a float-accuracy buffer and convert to 8-bit or 16-bit only when the buffer is being saved. The conversion is very fast.
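The convert-on-save scheme suggested here could be sketched like this (`toEightBit` is a hypothetical helper, not Mandelbulber's actual code):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Sketch of the suggested scheme (hypothetical helper): keep one float
// buffer per channel internally and convert to 8-bit only at save time.
std::vector<std::uint8_t> toEightBit(const std::vector<float> &channel)
{
	std::vector<std::uint8_t> out(channel.size());
	for (std::size_t i = 0; i < channel.size(); ++i)
	{
		// Clamp to [0, 1] and scale; out-of-range HDR values saturate.
		const float c = std::clamp(channel[i], 0.0f, 1.0f);
		out[i] = static_cast<std::uint8_t>(c * 255.0f + 0.5f);
	}
	return out;
}
```

The conversion is a single linear pass over the buffer, so it is cheap compared to keeping three parallel buffers resident.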

@adrianmeyerart
Contributor

@buddhi1980

  2. About surface normals, we can add an option to select between modes. Currently normal vectors are oriented in screen space (that's why z is always > 0). In the second mode they can be global.
    By the way, I have to check why Adrian observed values greater than 1.0.

That reminds me that the current mode should probably be called "Screen Space Normals" (not Object Space as I wrote earlier, that's misleading) and the new mode I suggested "World Space Normals".
Cool, it would be great if you could check why the Z (blue) values go up to 5 or so.

@buddhi1980
Owner

@MeyerAdrian
About normal maps and the blue range (from 0 to 1.0). I have corrected two things in the code:

  • added a final normalization of the normal vector to ensure that its length is 1 (it can differ because of numeric errors)
  • added clamping of values when the image is saved

About the appearance of colors in floating-point EXR formats:
In Mandelbulber 2.14 there was a bug, and images in EXR format were saved from the 16-bit integer buffer instead of the floating-point one. Now it saves the image in full dynamic range. The brightness of many pixels can be much higher than 1.0. How an EXR image looks depends very strongly on the program used to display it. The viewer has to reinterpret the very high dynamic range into the RGB range 0-255. That can be done by clamping or by a linear/logarithmic conversion. Load the same image into different programs like GIMP 2.0, exrview... You will see that it looks different.
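The two display strategies mentioned (clamping versus a compressive conversion) can be sketched as, for example (illustrative helpers, not any particular viewer's code):

```cpp
#include <algorithm>
#include <cassert>

// Illustrative viewer-side helpers (not any particular program's code):
// two ways to bring HDR float pixels into the displayable [0, 1] range.

// Hard clamping: everything above 1.0 turns into flat white.
float clampDisplay(float v) { return std::min(std::max(v, 0.0f), 1.0f); }

// A simple compressive (Reinhard-style) curve: highlights roll off
// smoothly instead of clipping, so bright detail stays visible.
float reinhardDisplay(float v) { return v < 0.0f ? 0.0f : v / (1.0f + v); }
```

Two viewers applying different curves like these will show visibly different results from the same EXR data.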

@ghost

ghost commented Jun 16, 2019

What software leverages these additional layers? @buddhi1980 are you familiar with the post processing software? What features do these layers enhance?

@adrianmeyerart
Contributor

adrianmeyerart commented Jun 16, 2019

@mancoast

What software leverages these additional layers? @buddhi1980 are you familiar with the post processing software? What features do these layers enhance?

It enables you, for instance, to do relighting in post. Also animated lights, etc., which are much harder to control in Mandelbulber itself.
https://www.youtube.com/watch?v=iIAcZD9GhLY&t=608s

And also, because you have the 3D information from the Position Pass, you can for instance render clouds or other 3D objects in normal CG apps, render them with the fractal point cloud as a "matte holdout" so the fractal occludes the other CG objects where they overlap, and then perfectly combine them in post.

@zebastian
Collaborator Author

@buddhi1980 I will rewrite the code to use only the 32-bit buffers and simplify / generalize the code in general...

@buddhi1980
Owner

@zebastian, we cannot use only 32-bit buffers, because they cannot be used for world coordinates and specular light (which has a very wide dynamic range). It would be best to keep the additional channels only in floating-point format.

@zebastian
Collaborator Author

Sure, I mean float: float == 32-bit

@zebastian
Collaborator Author

OK, generalized the saving of the channels here: fec9749
I can almost get rid of the 16- and 8-bit channels; they are still needed for JPG.
What do you think we should do with those?

@zebastian
Collaborator Author

options (with my opinion):

  • keep the 8-bit buffers in cimage (not so good)
  • remove optional channels from JPG (not so good)
  • create temporary 8-bit buffers on the fly in the save routine in a generalized way (maybe best?)

@buddhi1980
Owner

I prefer third option

@zebastian
Collaborator Author

OK, here it is: 01f1013

@zebastian
Collaborator Author

also cleaned up the code and removed unnecessary buffers: afbb4fa

@adrianmeyerart
Contributor

adrianmeyerart commented Jun 16, 2019

One thing that I realised...
When saving as JPG or PNG with additional image channels toggled, it exports all channels as separate files, which makes total sense because these formats don't have multichannel support.

When exporting as EXR, though, it puts all channels into one image.
That might be handy sometimes, but it makes it impossible to export, for instance, the Color, Alpha and Specular channels as 16-bit and Normal and Position as 32-bit.
(That's the standard workflow in 3D, to keep things lightweight, because you mostly only need the full 32-bit precision for the vector channels. Also it is often handier to have them in separate files, because each file is smaller, and if you don't need all channels in a situation, you don't have to load the huge EXR containing everything all the time.)

So it would be a great little feature to have "Export each Channel as separate File". (-;
What do you think?

@zebastian
Collaborator Author

It would be possible, but I think that's the win of EXR: having all the data in one file.
Please also note: you can mix different precisions for each channel in one file.
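For context on the precision trade-off between 16-bit HALF and 32-bit FLOAT channels, here is a simplified, dependency-free sketch of how a float maps down to EXR's half format (`floatToHalf` is illustrative; it ignores NaN/Inf, denormals and rounding, unlike a real implementation):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Simplified sketch: EXR's 16-bit "half" keeps a float's sign, a 5-bit
// exponent and a 10-bit mantissa (truncated from 23 bits). Real
// converters also handle NaN/Inf, denormals and rounding.
std::uint16_t floatToHalf(float f)
{
	std::uint32_t bits;
	std::memcpy(&bits, &f, sizeof bits); // reinterpret the float's bit pattern
	const std::uint16_t sign = static_cast<std::uint16_t>((bits >> 16) & 0x8000u);
	const std::int32_t exponent =
		static_cast<std::int32_t>((bits >> 23) & 0xFFu) - 127 + 15; // rebias
	const std::uint16_t mantissa = static_cast<std::uint16_t>((bits >> 13) & 0x3FFu);
	if (exponent <= 0) return sign;            // underflow -> signed zero
	if (exponent >= 31) return sign | 0x7C00u; // overflow -> infinity
	return sign | static_cast<std::uint16_t>(exponent << 10) | mantissa;
}
```

The 10-bit mantissa is why half is fine for color but too coarse for position or normal channels, where full 32-bit float is the safer choice.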

@adrianmeyerart
Contributor

Oh sorry, I thought it wasn't possible to have mixed precisions, but you're right! That works. Pretty cool EXR feature.

That definitely makes this less important.
Still, the feature would be handy in some situations, I think. When you're rendering a 4096×4096 px VR image sequence with all layers enabled, that leads to huge single EXR file sizes.
So it is much faster to have them in separate lightweight files in this case and pull the extra layers in only when needed.

Another advantage would be to work around the color issues with linear EXR described above.
So you could export the main image as 16-bit sRGB PNG, for instance, and the extra layers as 32-bit EXR.

@adrianmeyerart
Contributor

adrianmeyerart commented Jun 19, 2019

A little offtopic here, but I don't know if you're still watching this thread; I have a quick question about the image metadata:
b6a6c1b

@adrianmeyerart
Contributor

Hey guys,
any news here?
Best
Adrian

@adrianmeyerart
Contributor

adrianmeyerart commented Jul 30, 2019

Hey,
you might have seen that I started working on a little Mandelbulber VFX toolset.
https://github.com/MeyerAdrian/mandelbulber_vfx_tools

I would love to continue with it, expand it a bit, and share it...

For this I am still really hoping to get the features we discussed in here into Mandelbulber.
I really appreciate your work on Mandelbulber; it's really cool software that's allowing me to do crazy stuff that would be very hard to achieve with only traditional VFX tools.
So don't get me wrong, I'm just really counting on these couple of things, as I can't continue working on the tools without them.

Thanks!!

To summarize:

  • World Position image layer (already done, nice!)
  • World Normal image layer (2.19 still only has camera-space normals; it would be great to add that as an option!)
  • Have both of these image layers working with OpenCL (and equirectangular stereo).
    Having to render these passes separately in CPU mode would cost an enormous amount of extra rendering power. It would just be such a pity, because OpenCL is implemented so well by now and basically everything else works perfectly!

Less Priority Stuff

  • Know what the camera FOV value means in degrees (b6a6c1b)
  • Know how to calculate camera translate XYZ and rotate XYZ from main_camera_xyz, main_target_xyz, main_camera_top_xyz
  • AA in OpenCL (in non-MC mode)
  • Investigate that strange sRGB / linear behaviour we discussed above
  • Have that "Connect Fractal Detail Level" bug fixed

@adrianmeyerart
Contributor

BTW. 2.19 looks amazing so far!!

@ghost

ghost commented Jul 30, 2019 via email

@adrianmeyerart
Contributor

adrianmeyerart commented Jul 30, 2019

Hey,
answered here.
#698 (comment)
So let me know if all of you think this should be included in the repo, and if so, how exactly.

Best!
Adrian

@adrianmeyerart
Contributor

Hey guys,
any news on that?
Really craving the World Normal Pass and OpenCL support for the World Normal and World Position passes.


@buddhi1980
Owner

I'm really craving for more hands writing code

@adrianmeyerart
Contributor

adrianmeyerart commented Sep 9, 2019

I believe you there.
A pity that I have no clue about C++ :-/ Would love to help

@adrianmeyerart
Contributor

adrianmeyerart commented Sep 11, 2019

Hey,
could you maybe point me in the right direction / to the source locations for the World Normal image layer and the OpenCL World Normal and World Position features?

I contacted a colleague of mine who knows C++ and a bit of OpenCL; we will try to have a look at it.

Thanks!

PS.
Some information about the building process etc. would also be cool!

@buddhi1980
Owner

Adding more image layers, especially for OpenCL, is quite complex. It is difficult to explain in a few sentences.

About the building process, here are notes on how to start: https://github.com/buddhi1980/mandelbulber2#easy-preparation-for-development
In general you need Debian or Ubuntu Linux, and you will mostly work in the QtCreator IDE.

@adrianmeyerart
Contributor

Thanks,
I feared the OpenCL part would be complex.
Could you point me to the already implemented screen-space Normal Pass, though?
I guess it should be doable to just port it to world space and add the pass.

And just so that we can have a look, even though it will probably be too complex, why not point me to, let's say, the OpenCL code for the zDepth Pass.

@buddhi1980
Owner

The zDepth pass is always rendered (when the fractal is calculated; there is no special code for it) because it is needed for many effects and the UI. The normal pass is an optional channel. Optional channels are not yet implemented in the OpenCL code, and it will be quite difficult (though doable). It is not easy because OpenCL cannot use dynamic arrays, so many tricks have to be used. On the other hand, I don't want to render this layer every time, because it consumes precious GPU memory.
Adding World Normal in the CPU code will be easier, because you can look at how the screen-space Normal layer is added.
Data is written to the image layers here:
https://github.com/buddhi1980/mandelbulber2/blob/master/mandelbulber2/src/render_worker.cpp#L445
Data for the optional image layers is stored here:
https://github.com/buddhi1980/mandelbulber2/blob/master/mandelbulber2/src/cimage.hpp#L333
Saving of image layers is in this file:
https://github.com/buddhi1980/mandelbulber2/blob/master/mandelbulber2/src/file_image.cpp

From these points you should be able to follow more parts of the code.

@adrianmeyerart
Contributor

Alright thanks!
Will have a look

@adrianmeyerart
Contributor

Hey,
still no chance of getting P and N working with OpenCL?

@adrianmeyerart
Contributor

adrianmeyerart commented Mar 29, 2020

hey,
it would be very, very handy to have the option of a separate "luminosity" image layer.
(because this is a particular layer where you often do a lot of post processing, glows etc.)

if possible, it would also be great to have that for "global illumination" as well (it would store just the MC GI contribution), and a "reflection" layer. Or does that get included in the "Specular" layer?
A separate "volumetric" layer would also be awesome.

thanks!

PS.
These layers, just like the "diffuse" and "specular" image layers, also don't work with OpenCL.
Wouldn't it be possible to work around this by internally using 3 float layers or something similar?
As this would only be optional, the memory overhead for having these additional layers in OpenCL would be no problem: if you want these layers you pay the price, if not, not.

@adrianmeyerart
Contributor

also just noticed that the "diffuse" layer is an unshaded flat diffuse color.
This is not really useful. It would be great if this could be the shaded diffuse component of the render (i.e. including shadows).
