
Best-effort rendering when GPU does not support full dataset #4424

Merged · 20 commits merged into master on Feb 17, 2020

Conversation


@philippotto philippotto commented Jan 31, 2020

Before, a client whose GPU didn't support enough data textures to render all layers of a dataset crashed when opening that dataset. This PR improves this in two ways:

  1. The more accurate value MAX_TEXTURE_IMAGE_UNITS is queried to check the client's compatibility with the current dataset.
  2. If the GPU's textures do not suffice, the user is notified that only N layers can be rendered simultaneously. That error is shown if and only if more than N layers are enabled in the settings.
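The capability check described above can be sketched as follows. This is a hypothetical illustration, not the actual webKnossos code; the function names and the `texturesPerLayer` parameter are made up for this sketch. In a browser, the texture-unit limit would come from `gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS)`.

```typescript
// Hypothetical sketch of the compatibility check (not the actual
// webKnossos implementation). Each enabled layer binds a fixed
// number of data textures, so the number of simultaneously
// renderable layers N is bounded by the GPU's texture-unit limit.
function computeRenderableLayerCount(
  maxTextureImageUnits: number,
  texturesPerLayer: number,
): number {
  // At most floor(maxTextureImageUnits / texturesPerLayer) layers fit.
  return Math.floor(maxTextureImageUnits / texturesPerLayer);
}

function shouldWarnUser(
  enabledLayerCount: number,
  renderableLayerCount: number,
): boolean {
  // The warning is shown if and only if more layers are enabled
  // than can be rendered simultaneously.
  return enabledLayerCount > renderableLayerCount;
}
```

For example, with a limit of 8 texture units and 2 textures per layer, only 4 layers fit, so enabling a fifth layer would trigger the notification.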

Regardless of how many layers are activated, the shaders are compiled so that exactly N layers are always accessed. If the user enables a layer whose data texture wasn't on the GPU before, the shader is recompiled accordingly. The set of N layers which should be available on the GPU is determined by tracking how recently each (currently disabled) layer was used, so that the least-recently used ones are evicted first. That way, fast toggling of a layer should always be possible without the performance hit that follows from the dynamic shader recompilation.
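The LRU bookkeeping described above could look roughly like this. It is a minimal sketch under assumed names (`LayerLruCache`, `use`, `boundLayers` are invented for illustration), not the actual implementation:

```typescript
// Minimal sketch of LRU-based layer retention (hypothetical names,
// not the actual webKnossos code). The N most-recently-used layers
// stay bound on the GPU; enabling a layer that is not bound evicts
// the least-recently-used one and requires a shader recompilation.
class LayerLruCache {
  // Layers ordered from least- to most-recently used.
  private layers: string[] = [];

  constructor(private capacity: number) {}

  // Marks a layer as used. Returns true if a recompilation is
  // needed because the layer was not already bound.
  use(layerName: string): boolean {
    const index = this.layers.indexOf(layerName);
    const wasBound = index !== -1;
    if (wasBound) {
      this.layers.splice(index, 1); // move to most-recently-used slot
    }
    this.layers.push(layerName);
    if (this.layers.length > this.capacity) {
      this.layers.shift(); // evict the least-recently-used layer
    }
    return !wasBound;
  }

  boundLayers(): string[] {
    return [...this.layers];
  }
}
```

With capacity N, toggling among up to N layers never forces a recompile; only bringing in an (N+1)-th layer does, which matches the fast-toggling behavior described above.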

Todo:

  • clean up
  • factor segmentation layers in
  • throttle recompilation

URL of deployed dev instance (used for testing):

Steps to test:

  • On my machine, I verified that everything works as usual, since my GPU supports up to 32 simultaneous textures. I also monkey-patched the value to 8 to test the dynamic recompilation with a new dataset (see nine_float_layer_dataset).
  • I also tested the branch on one MacBook.
  • Would be good to test it on Windows and another medium-spec'ed Linux machine (@daniel-wer ? :))

Issues:


@philippotto philippotto self-assigned this Jan 31, 2020
@philippotto philippotto changed the title [WIP] Best-effort rendering when GPU does not support full dataset Best-effort rendering when GPU does not support full dataset Feb 3, 2020

@daniel-wer daniel-wer left a comment


Really great stuff! 🥇

Thanks for making it very easy to test this.
I hit the limit using the nine-float-dataset on my machine and toggling the layers worked. The good thing is that the fast-toggling works very well due to your LRU implementation, also Chrome seems to be good at caching past shader compilations. We'll probably have to live with the fact that the initial compile time is rather high and users won't really know what's going on for now.

I was wondering, however, why I am able to open the nine-float-dataset on the master without any restrictions but wK complains on this branch. As I understand it, this could only happen if there was one layer with more textures than the others, but I don't think that's the case here? Do you know why this happens?

@philippotto (Member Author)

I was wondering, however, why I am able to open the nine-float-dataset on the master without any restrictions but wK complains on this branch. As I understand it, this could only happen if there was one layer with more textures than the others, but I don't think that's the case here? Do you know why this happens?

Hm, this is a good question. We should clarify this before merging. Let's talk about this next week!


@daniel-wer daniel-wer left a comment


👍

@philippotto (Member Author)

For the record: Daniel's laptop supports 32 textures, which is why the dataset can be rendered on the current master. The MESA workaround had the effect that the dataset doesn't render completely on this branch. However, we think that this is fine, since the performance with MESA and 32 textures is quite low anyway. So, falling back to 16 is reasonable.

@bulldozer-boy bulldozer-boy bot merged commit 8d662a2 into master Feb 17, 2020
@bulldozer-boy bulldozer-boy bot deleted the better-gpu-error-msg-graceful-rendering branch February 17, 2020 10:37

Successfully merging this pull request may close these issues.

Better error message if there are too many layers for the GPU