
Is pt suitable for simple real-time rendering? Can this be implemented? #11

Closed
Splizard opened this issue Feb 3, 2016 · 10 comments

Splizard commented Feb 3, 2016

It would be great to have a simple real-time rendering method on the CPU, in pure Go, that could be used to preview a model before rendering. Is this feasible? By real time, I mean rendering to an image in a decent fraction of a second.


fogleman commented Feb 3, 2016

Not really. Most renders can take quite a while. Most of the 2560x1440px examples on the README took 30+ minutes, for example.

@fogleman fogleman closed this as completed Feb 3, 2016
@mattgodbolt

Even using GPU resources and a very much simplified path tracer, it takes a while to render with path tracing: this is the best example of "interactive" rendering I can find. It's a lot to expect a pure-CPU implementation to keep up.

@joeblew99

I'm wondering if cycle-based rendering would help? That's where the precision is lowered.

The other way is to allow tile-based rendering. At least that allows speeding things up by throwing more servers at the problem.


fogleman commented Feb 5, 2016

"Throwing more servers at it" is already easily doable. Each server can produce its own image and then the images can be combined. Since each server starts with a different random seed, the sampling will be just as if a single machine did 2x, 3x, etc the work.

This is what the combine script in the repo is for. :)

@joeblew99

Tile-based is different from what you do now? To render one image 50 times faster, can I use 50 servers?

I saw the combine script and was wondering...



fogleman commented Feb 5, 2016

@joeblew99 See my slides on path tracing: https://speakerdeck.com/fogleman/path-tracing

Specifically the section on Noise. Maybe that will help you understand.

Here's the video if you want to watch: https://www.youtube.com/watch?v=x4oQsQ76OHY

@joeblew99

Yes, I saw it, thanks. I see what it is. This is all super, but can I please get a straight answer to the original question?


@joeblew99

It's just that I have used many rendering engines and have yet to see anyone solve the shared-memory aspects of distributed rendering for photonic ray tracing.



fogleman commented Feb 5, 2016

@joeblew99

To render one image 50 times faster, can I use 50 servers?

Yes.

Tiles based is different from what you do now?

It is different.

If you wanted to run 256 samples per pixel, you could:

  • Run 256 on 1 machine
  • Run 128 on 2 machines
  • Run 64 on 4 machines

Etc.

Then you literally average the images and get a result equivalent to the full 256 samples.
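As a sanity check on that claim, a small Go sketch (with hypothetical per-pixel sample values) showing that averaging equal-sized batches of samples gives exactly the pooled mean:

```go
package main

import "fmt"

// mean returns the average of a slice of per-pixel radiance samples.
func mean(xs []float64) float64 {
	sum := 0.0
	for _, x := range xs {
		sum += x
	}
	return sum / float64(len(xs))
}

func main() {
	// 8 hypothetical samples for one pixel. Splitting them evenly
	// across two "machines" and averaging the two partial results
	// gives exactly the single-machine answer.
	samples := []float64{0.25, 0.75, 0.5, 1.0, 0.125, 0.375, 0.625, 0.875}
	single := mean(samples)                              // 1 machine, 8 samples
	split := (mean(samples[:4]) + mean(samples[4:])) / 2 // 2 machines, 4 each
	fmt.Println(single == split) // prints true
}
```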

@joeblew99

I would like to try it. I'll have a look and see if the code has some sort of render director.

