Higher auto-refinement limits #285
Open
tzanio wants to merge 2 commits into master from higher-auto-ref
I don't think we have a need to increase `auto_ref_max` -- 16 is more than sufficient for any practical case. For extreme cases it can be increased manually with the keys. Also, I think increasing `auto_ref_max_surf_elem` to $5\times 10^6$ is too excessive. An increase of 25x, from $2\times 10^4$ to $5\times 10^5$, is more reasonable; however, even that may cause undesired, noticeable slowdowns, especially for lower-order meshes (e.g. linear meshes) -- note that the refinement is applied independently of degree, so that curvature from the non-linear terms in $Q_1$ meshes/fields can be captured.

Another option is to keep the current value and instead issue a warning in the terminal (we may want to add a feature where terminal messages are shown for a brief moment in the main window and then fade away) when the auto-selected refinement factor is less than the max of the order of the mesh and the order of the field.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I think what we want, ideally, is that these are set to the maximum values for what a "reasonable" mesh may need, and then for GLVis to select the actual auto-refinement values based on the order of the mesh (and solution). For example, `auto_ref_max` should be at least `max{ element_order, gf_order }`, maybe scaled by a factor of 2?
Yes, the maximum of some reasonable value and a value based on the grid function + mesh order 👍, but maybe it could be capped by some higher number? To give the user a chance to change the value and not burn off the computer right away 🔥 😄
For me, a "reasonable" auto-selected refinement factor is mainly driven by the time it takes to display the initial mesh. If possible, displaying the initial mesh should not take longer than ~0.1-0.3 sec on relatively new hardware. If we roughly say that the time to display some surface is proportional to the number of point evaluations we need to perform, then we need to choose an upper bound for `auto_ref_max_surf_elem` which gives the desired time limit. Of course, achieving this limit may not be possible when the number of drawn surface elements is too big.

In the methods `VisualizationSceneSolution::GetAutoRefineFactor()` and `VisualizationSceneSolution3d::GetAutoRefineFactor()`, the number of point evaluations is computed as `ne*(ref+1)*(ref+1)`, where `ne` is the number of surface elements in the mesh (i.e. the number of elements for 2D meshes, embedded in 2D or 3D, or the number of boundary elements for 3D meshes). So the question is: how many point evaluations can we do in the allowed ~0.1-0.3 sec -- we can debate what exact value to aim for in this range. The evaluation speed per point will depend to some extent on the orders of the mesh and the field, so to make things more concrete, let's say we should measure the evaluation times for $Q_2$ meshes/fields.

Does this sound like a "reasonable" approach to determine the upper limit `auto_ref_max_surf_elem`?
Regardless of how we do the auto-selection of the refinement level, users will need to be aware that for big meshes they may not get any element subdivisions.
There could be, if `order_ref` gives a number lower than 20k, no? 🤔
The `order_ref` is just an initial value for `ref` before the `while` loop (when the `if` is true). So if `order_ref < 16 && ne*(order_ref+1)*(order_ref+1) <= 100000`, you will get more refinements than `order_ref`, and never fewer refinements than before.
The problem, for me, with @tzanio's suggestion (even when the `while` loop is moved after the `if else` statement) is that it can result in big jumps in speed for small changes in the input. For example, if `ne*(order_ref+1)*(order_ref+1)` is close to but less than 2M, we get `ref=order_ref`, but if we increase `ne` just a little, pushing `ne*(order_ref+1)*(order_ref+1)` beyond 2M, then `ref` can immediately drop down to `ref=1`.
Aha, like this, yes -- in the modified version it would work, but I thought it was more logical to iterate to the same limiting number as in the criterion for `order_ref`, to keep continuity and not jump down, as you say.
What I was proposing would solve it (updated version to have the same numbers):

(which implies `ref == order_ref` if it is in the range and `order_ref < 16`)