
Implementation of noise in GLSL #4222

Closed
MarkuBu opened this issue Jun 15, 2016 · 9 comments

@MarkuBu
Contributor

commented Jun 15, 2016

One of the most heavily used functions in Minetest is Perlin noise. What about porting the noise functions to GLSL? This should give a big speed boost, and I think the implementation wouldn't be very hard if we used this:

https://github.com/ashima/webgl-noise

I was able to compile the benchmark and the demo and both work.

I would do it myself, but I can't.

@kahrl

Contributor

commented Jun 15, 2016

It's difficult to send data back to the CPU from GLSL shaders; they're designed to output to the screen. Something like OpenCL or CUDA would most likely be required.

@MarkuBu

Contributor Author

commented Jun 15, 2016

That should be possible too. This is just an example from a quick search:

https://github.com/skeeto/perlin-noise/blob/opencl/src/com/nullprogram/noise/perlin3d.cl
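
For context, a minimal OpenCL round trip would look roughly like the sketch below (my illustration, not Minetest code, and assuming an OpenCL runtime is available). The kernel body is just a placeholder hash standing in for a real Perlin/simplex kernel such as the one linked above; the point is the shape of the pipeline, in particular the clEnqueueReadBuffer call that brings the generated buffer back to the CPU, which is the cost later comments focus on.

```cpp
// Minimal OpenCL sketch: generate a 2D grid of pseudo-noise on the GPU and
// read it back to the CPU. The kernel is a placeholder hash, not real Perlin
// noise; a port of the linked perlin3d.cl would replace it.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

static const char *kSource = R"CLC(
__kernel void noise2d(__global float *out, const float scale) {
    const uint x = get_global_id(0);
    const uint y = get_global_id(1);
    const uint w = get_global_size(0);
    // Placeholder: integer hash mapped to [0, 1). Real Perlin/simplex noise
    // (gradients + interpolation) would go here.
    uint h = x * 374761393u + y * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    out[y * w + x] = (h & 0xffffffu) / 16777216.0f * scale;
}
)CLC";

int main() {
    const size_t W = 512, H = 512;            // output resolution
    std::vector<float> result(W * H);

    cl_platform_id platform; cl_device_id device; cl_int err;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, &err);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSource, nullptr, &err);
    clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel kern = clCreateKernel(prog, "noise2d", &err);

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY,
                                result.size() * sizeof(float), nullptr, &err);
    float scale = 1.0f;
    clSetKernelArg(kern, 0, sizeof(buf), &buf);
    clSetKernelArg(kern, 1, sizeof(scale), &scale);

    size_t global[2] = {W, H};
    clEnqueueNDRangeKernel(q, kern, 2, nullptr, global, nullptr, 0, nullptr, nullptr);

    // This read-back is the step the rest of the thread is worried about:
    // the whole noise buffer has to travel from GPU memory to system memory.
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, result.size() * sizeof(float),
                        result.data(), 0, nullptr, nullptr);

    printf("first sample: %f\n", result[0]);
    return 0;
}
```

Build against the system OpenCL headers and ICD loader (e.g. g++ noise_cl.cpp -lOpenCL); error handling is omitted for brevity.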

@asl97

Contributor

commented Jun 16, 2016

If this is added as OpenCL (I would think CUDA would be out of the question, since it is only supported by Nvidia GPUs), what's the plan for supporting devices whose GPUs don't support OpenCL?

Bundling the Portable Computing Language (pocl)?

@Calinou

Member

commented Jun 16, 2016

What's the plan for supporting devices whose GPUs don't support OpenCL?

I think the plan would be to keep a CPU fallback and to use the GPU where it is available (e.g. while playing singleplayer or hosting a listen server on a desktop or laptop). But is it really worth it?

Bundling the Portable Computing Language (pocl)?

What about Windows/OS X/Android support?

@MarkuBu

Contributor Author

commented Jun 16, 2016

But is it really worth it?

I think so, because calculating noise takes time and it is used a lot in Minetest. But I don't know whether it could be implemented efficiently enough to be worth it; I'm not a noise expert.

And of course there should be a fallback for systems without a GPU.

@kwolekr

Contributor

commented Jul 4, 2016

This is a bad idea unless you are using the Perlin noise exclusively for visual effects:

  • Transferring data to and from the GPU would take a lot of time and probably negate any computation savings that would result from this.
  • Noise already uses a highly-optimized algorithm for bulk generation that minimizes the amount of time spent on generating the noise itself, so the only thing that could be optimized would be the intermediate step to interpolate noise for creating the gradients.
  • The piecemeal manner in which shaders operate makes it impossible to re-implement that optimized algorithm, so the much slower point-polled method of computing noise would be required instead (see the sketch after this list).
  • This is not supported on all GPUs, so it would bloat the code by needing to keep the original CPU-based Perlin noise algorithms around as a backup.
  • Perlin noise is currently generated exclusively on the server side, and servers don't usually have powerful GPUs, if any at all, so the benefit wouldn't be as great as imagined.
  • GPU-based noise adds an additional, unnecessarily large layer of complexity.
  • Effort is better spent on just about any other part of this project. Speeding noise up even further is a very low priority task.
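
To make the bulk-vs-point-polled distinction above concrete, here is a simplified sketch (plain 2D value noise, my illustration, not Minetest's actual noise code): the point-polled version recomputes the lattice hashes and interpolation for every single sample, which is what per-invocation shader execution forces, while the bulk version hashes each lattice point once and reuses it across all the samples that fall between lattice points.

```cpp
// Simplified illustration of "point-polled" vs. "bulk" noise generation.
// This is plain 2D value noise, not Minetest's actual implementation.
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

// Hash an integer lattice point to a pseudo-random value in [0, 1).
static float latticeValue(int x, int y, uint32_t seed) {
    uint32_t h = seed;
    h ^= (uint32_t)x * 0x9E3779B9u;
    h ^= (uint32_t)y * 0x85EBCA6Bu;
    h = (h ^ (h >> 16)) * 0x7FEB352Du;
    return (h & 0xFFFFFFu) / 16777216.0f;
}

static float lerp(float a, float b, float t) { return a + t * (b - a); }

// Point-polled: everything (lattice hashes + interpolation) is recomputed
// for every sample.  This is the style a shader invocation forces.
float noisePoint(float x, float y, uint32_t seed) {
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
    float tx = x - x0, ty = y - y0;
    float v00 = latticeValue(x0,     y0,     seed);
    float v10 = latticeValue(x0 + 1, y0,     seed);
    float v01 = latticeValue(x0,     y0 + 1, seed);
    float v11 = latticeValue(x0 + 1, y0 + 1, seed);
    return lerp(lerp(v00, v10, tx), lerp(v01, v11, tx), ty);
}

// Bulk: hash each lattice point once, then interpolate many output samples
// between neighbouring lattice values.  With `spread` samples per lattice
// cell, each hash is reused spread*spread times instead of being recomputed.
void noiseBulk(std::vector<float> &out, int sx, int sy, int spread, uint32_t seed) {
    int lx = sx / spread + 2, ly = sy / spread + 2;   // lattice resolution
    std::vector<float> lattice((size_t)lx * ly);
    for (int j = 0; j < ly; j++)
        for (int i = 0; i < lx; i++)
            lattice[(size_t)j * lx + i] = latticeValue(i, j, seed);

    out.resize((size_t)sx * sy);
    for (int y = 0; y < sy; y++) {
        int y0 = y / spread;
        float ty = (float)(y % spread) / spread;
        for (int x = 0; x < sx; x++) {
            int x0 = x / spread;
            float tx = (float)(x % spread) / spread;
            float a = lerp(lattice[(size_t)y0 * lx + x0],
                           lattice[(size_t)y0 * lx + x0 + 1], tx);
            float b = lerp(lattice[(size_t)(y0 + 1) * lx + x0],
                           lattice[(size_t)(y0 + 1) * lx + x0 + 1], tx);
            out[(size_t)y * sx + x] = lerp(a, b, ty);
        }
    }
}

int main() {
    std::vector<float> grid;
    noiseBulk(grid, 80, 80, 16, 1337);   // 80x80 samples, one lattice point every 16
    printf("bulk  (5,7): %f\n", grid[7 * 80 + 5]);
    printf("point (5,7): %f\n", noisePoint(5 / 16.0f, 7 / 16.0f, 1337));
    return 0;
}
```

Both functions produce the same values for corresponding coordinates; the difference is only how often the lattice values are recomputed, which is exactly the cross-sample reuse that per-pixel shader execution cannot express.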

Feel free to code it if you'd like, but I won't approve merging such a patch to upstream for the reasons listed above.

@HybridDog

Contributor

commented Aug 17, 2016

There's shader Perlin and simplex noise here: https://github.com/pioneerspacesim/pioneer/blob/master/data/shaders/opengl/noise.glsl

Transferring data to and from the GPU would take a lot of time and probably negate any computation savings that would result from this.

If you compressed it (e.g. with zlib), it wouldn't need to transfer so much, would it?

Effort is better spent on just about any other part of this project.

Which parts do you mean? Avoiding the use of a single meshnode, resurrecting the texture atlas, and adding dynamic lightmap generation?
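
For what it's worth, here is a small sketch of how one could measure how compressible such a noise buffer actually is with zlib (my illustration; the buffer here is a synthetic smooth function, not real mapgen output). Note that zlib itself runs on the CPU, so compression would only shrink the GPU-to-CPU transfer if the compression itself happened on the GPU, which plain zlib doesn't do.

```cpp
// Measure how well a buffer of smooth, noise-like floats compresses with zlib.
// Build with: g++ zlib_test.cpp -lz
#include <cmath>
#include <cstdio>
#include <vector>
#include <zlib.h>

int main() {
    // Synthetic stand-in for a noise buffer: a smooth 2D function, 512x512 floats.
    const int W = 512, H = 512;
    std::vector<float> noise((size_t)W * H);
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            noise[(size_t)y * W + x] = 0.5f + 0.5f * std::sin(x * 0.05f) * std::cos(y * 0.07f);

    const uLong srcLen = noise.size() * sizeof(float);
    uLongf dstLen = compressBound(srcLen);
    std::vector<Bytef> packed(dstLen);
    int ret = compress(packed.data(), &dstLen,
                       reinterpret_cast<const Bytef *>(noise.data()), srcLen);
    if (ret != Z_OK) { printf("compress failed: %d\n", ret); return 1; }

    printf("%lu -> %lu bytes (%.1f%% of original)\n",
           (unsigned long)srcLen, (unsigned long)dstLen, 100.0 * dstLen / srcLen);
    return 0;
}
```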

@sofar

Member

commented Sep 7, 2016

GPUs are optimized for taking large amounts of input data (textures, vertices, etc.) and outputting it to the display. They are not optimized for sending large amounts of data back to the CPU.

This works well if you're doing e.g. hash brute-forcing or similar calculations, since that requires very little input and output and massive amounts of independent work. GPUs are good at that.

For noise generation, however, you're inputting little data and outputting massive amounts of it. The inputs are x/y/z coordinates and Perlin parameters; the output can be millions of bytes of data.
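
To put a rough number on that (an illustration, assuming Minetest's default 80×80×80-node mapchunk): a single 3D noise buffer of 32-bit floats for one mapchunk is already about 2 MB, and a mapgen typically fills several noise buffers (2D and 3D) per chunk.

```cpp
// Back-of-the-envelope size of one 3D noise buffer for a default 80^3 mapchunk.
#include <cstdio>

int main() {
    const long long side = 80;                                    // default mapchunk edge, in nodes
    const long long samples = side * side * side;                 // 512,000 samples
    const long long bytes = samples * (long long)sizeof(float);   // 2,048,000 bytes, ~2 MB
    printf("%lld samples -> %lld bytes per 3D noise buffer\n", samples, bytes);
    return 0;
}
```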

This doesn't make much sense for servers to do on the GPU: there is likely no GPU, and even if there is one, there is likely no GPU access.

This doesn't make much sense for singleplayer setups, because the GPU will be busy rendering frames.

So, @kwolekr is spot-on about this not making much sense.

Also, mapgen is already done in two independent server threads, so it's already parallelized. Perhaps that could be improved upon, but doing it on the GPU isn't the right idea.

@paramat

Member

commented Sep 8, 2016

👎
