
Always aim to compute the full (ad-hoc) mesh #6352

Merged
merged 3 commits into from Jul 27, 2022

Conversation

philippotto
Member

@philippotto philippotto commented Jul 26, 2022

This PR removes the hard mesh chunk limit and instead introduces a soft throttle mechanism to avoid putting full load on the mesh generation for a single segment (e.g., a huge segment could take a very long time to load and should not cause tremendous load on client + backend).
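A soft throttle of this kind could be sketched as follows. This is a minimal illustration only, not the actual implementation from this PR; all names and constants (`BATCH_SIZE`, `DELAY_PER_BATCH_MS`, `throttleDelayForBatch`, `loadMeshChunks`) are assumptions for the example:

```typescript
// Hypothetical soft-throttle sketch: instead of a hard chunk limit,
// the delay between successive chunk batches grows with the number of
// batches already requested for the current segment. Early batches run
// at full speed; later batches are slowed to limit client + backend load.
const BATCH_SIZE = 32;          // chunks requested per batch (assumed)
const DELAY_PER_BATCH_MS = 50;  // throttle grows linearly with progress (assumed)
const MAX_DELAY_MS = 1000;      // cap so huge segments still finish (assumed)

function throttleDelayForBatch(batchIndex: number): number {
  // Linear ramp-up, capped at MAX_DELAY_MS.
  return Math.min(batchIndex * DELAY_PER_BATCH_MS, MAX_DELAY_MS);
}

async function loadMeshChunks(
  chunkIds: number[],
  fetchChunk: (id: number) => Promise<void>,
): Promise<void> {
  for (let batch = 0; batch * BATCH_SIZE < chunkIds.length; batch++) {
    const slice = chunkIds.slice(batch * BATCH_SIZE, (batch + 1) * BATCH_SIZE);
    await Promise.all(slice.map(fetchChunk));
    const delay = throttleDelayForBatch(batch);
    if (delay > 0) {
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

The point of the ramp-up is that small segments finish at full speed, while a huge segment never issues requests faster than one batch per `MAX_DELAY_MS` once the cap is reached.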

URL of deployed dev instance (used for testing):

Steps to test:

Issues:


(Please delete unneeded items, merge only when none are left open)

@philippotto philippotto self-assigned this Jul 26, 2022
Member

@daniel-wer daniel-wer left a comment


LGTM. Let's see whether we can tune these values in the future to speed up the ad-hoc meshing. The current config is rather conservative, but I can see why.

Maybe it could work better to let the server decide when to throttle and by how much. For example, the server could have an ad-hoc meshing worker queue and depending on the number of users using the feature in parallel, the requests would be faster or slower. This way, users could enjoy maximum speed if the server load is low.

@philippotto
Member Author

Maybe it could work better to let the server decide when to throttle and by how much. For example, the server could have an ad-hoc meshing worker queue and depending on the number of users using the feature in parallel, the requests would be faster or slower. This way, users could enjoy maximum speed if the server load is low.

Yes, this is probably a good idea. The back-end might even just block a request until the pool has a free slot. That way, the front-end wouldn't even need to handle another return value. Let's keep this in mind for the future :)
I'm curious to see how this PR will change things.
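The server-side idea discussed here (block each request until the meshing pool has a free slot, so the front-end needs no protocol change) could be sketched with a simple async semaphore. This is an illustrative sketch, not code from the webknossos backend; `MeshingPool` and `handleMeshRequest` are hypothetical names:

```typescript
// Hypothetical server-side pool: requests for ad-hoc mesh chunks block
// until a worker slot is free, so load is bounded by poolSize and users
// get maximum speed whenever the server is idle.
class MeshingPool {
  private inUse = 0;
  private waiters: Array<() => void> = [];

  constructor(private readonly poolSize: number) {}

  // Resolves immediately if a slot is free; otherwise the request waits
  // until another job releases its slot.
  async acquire(): Promise<void> {
    if (this.inUse < this.poolSize) {
      this.inUse++;
      return;
    }
    await new Promise<void>((resolve) => this.waiters.push(resolve));
  }

  release(): void {
    const next = this.waiters.shift();
    if (next) {
      next(); // hand the slot directly to the next waiting request
    } else {
      this.inUse--;
    }
  }
}

async function handleMeshRequest(
  pool: MeshingPool,
  computeMesh: () => Promise<Uint8Array>,
): Promise<Uint8Array> {
  await pool.acquire();
  try {
    return await computeMesh();
  } finally {
    pool.release();
  }
}
```

With this shape, the client just sees a slower response under load rather than an error or an extra "throttled" return value.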

@philippotto philippotto enabled auto-merge (squash) July 27, 2022 12:46
@philippotto philippotto merged commit 16fb22c into master Jul 27, 2022
@philippotto philippotto deleted the full-mesh branch July 27, 2022 13:04
Development

Successfully merging this pull request may close these issues.

Improve isosurface loading for rendering a single cell