Higher auto-refinement limits #285

Open · wants to merge 2 commits into base: master
4 changes: 4 additions & 0 deletions CHANGELOG
@@ -12,6 +12,10 @@
Version 4.2.1 (development)
===========================

- Increased the default auto-refinement limits to a maximum of 32 refinement
levels or 5M elements. Note that you may still need to press 'o' if the mesh
is large and/or appears less curved than it should be.

- Significantly improved memory usage.

- Add support to visualize solutions on 1D elements embedded in 2D and 3D.
4 changes: 2 additions & 2 deletions lib/vsdata.cpp
@@ -1270,8 +1270,8 @@ void VisualizationSceneScalarData::Init()
  arrow_type = arrow_scaling_type = 0;
  scaling = 0;
  drawaxes = colorbar = 0;
- auto_ref_max = 16;
- auto_ref_max_surf_elem = 20000;
+ auto_ref_max = 32;
+ auto_ref_max_surf_elem = 5000000;
Member (comment on lines -1273 to +1274):
I don't think we have a need to increase auto_ref_max -- 16 is more than sufficient for any practical case. For extreme cases it can be increased manually with the keys.

Also, I think increasing auto_ref_max_surf_elem to $5\times 10^6$ is excessive. An increase by 25x, from $2\times 10^4$ to $5\times 10^5$, is more reasonable; however, even that may cause undesired, noticeable slowdowns, especially for lower-order meshes (e.g. linear meshes) -- note that the refinement is applied independently of degree, so that curvature from the non-linear terms in $Q_1$ meshes/fields can be captured.

Another option is to keep the current value and instead issue a warning in the terminal (we may want to add a feature where terminal messages are shown for a brief moment in the main window and then fade away) when the auto-selected refinement factor is less than the max of the orders of the mesh and the order of the field.
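The warning idea in the last paragraph could be prototyped as a small predicate; the function name and the check are my illustration, not existing GLVis code:

```cpp
#include <algorithm>

// Hypothetical check: does the auto-selected refinement factor fall
// short of the maximum of the mesh order and the field order?
bool NeedsRefinementWarning(int ref, int mesh_order, int field_order)
{
   return ref < std::max(mesh_order, field_order);
}
```

When this returns true, GLVis could print the suggested terminal message (or the proposed fading on-screen message) telling the user to press 'o'.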

Copy link
Member Author

@tzanio tzanio Jun 20, 2024

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I think what we want ideally is that these are set to the maximum values for what a "reasonable" mesh may need, and then for GLVis to select the actual auto-refinement values based on the order of the mesh (and solution).

For example auto_ref_max should be at least max{ element_order, gf_order}, maybe scaled by a factor of 2?
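A minimal sketch of that rule (the function name is hypothetical; the factor of 2 comes from the comment above, and the floor of 16 is the pre-PR default, kept here only as an assumption):

```cpp
#include <algorithm>

// Candidate auto_ref_max: twice the larger of the mesh and
// grid-function orders, but never below the pre-PR default of 16.
int SuggestedAutoRefMax(int element_order, int gf_order)
{
   return std::max(16, 2 * std::max(element_order, gf_order));
}
```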

Contributor:
Yes, maximum of some reasonable value and a value based on the grid function + mesh order 👍 , but it could be capped by some higher number maybe? To give the user a chance to change the value and not burn off the computer right away 🔥 😄

v-dobrev (Member), Jun 20, 2024:

For me, a "reasonable" auto-selected refinement factor is mainly driven by the time it takes to display the initial mesh. If possible, displaying the initial mesh should not take longer than ~0.1-0.3 sec on relatively new hardware. If we roughly say that the time to display some surface is proportional to the number of point evaluations we need to perform, then we need to choose an upper bound for auto_ref_max_surf_elem which gives the desired time limit. Of course, achieving this limit may not be possible when the number of drawn surface elements is too big.

In the methods VisualizationSceneSolution::GetAutoRefineFactor() and VisualizationSceneSolution3d::GetAutoRefineFactor(), the number of point evaluations is computed as ne*(ref+1)*(ref+1), where ne is the number of surface elements in the mesh (i.e. the number of elements for 2D meshes, embedded in 2D or 3D, or the number of boundary elements for 3D meshes). So the question is: how many point evaluations can we do in the allowed ~0.1-0.3 sec -- we can debate what exact value in this range to aim for. The evaluation speed per point will depend to some extent on the orders of the mesh and the field, so to make things more concrete, let's say we should measure the evaluation times for $Q_2$ meshes/fields.

Does this sound like a "reasonable" approach to determine the upper limit auto_ref_max_surf_elem?
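Under this approach, the limit follows directly from a measured evaluation rate and the chosen time budget; the helper below is a sketch, and any concrete rate passed to it would have to come from the proposed $Q_2$ measurements:

```cpp
// How many surface elements can we afford to draw, given an evaluation
// rate (points/sec) and a display-time budget, at refinement factor
// 'ref'? Uses the (ref+1)^2 point evaluations per element from the
// GetAutoRefineFactor() formula above.
long AffordableSurfElems(double evals_per_sec, double budget_sec, int ref)
{
   const double max_evals = evals_per_sec * budget_sec;
   return static_cast<long>(max_evals / ((ref + 1.0) * (ref + 1.0)));
}
```

For example, an assumed rate of 10M points/sec with a 0.2 sec budget allows 2M point evaluations, i.e. 500k surface elements at ref = 1.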

Member:

Regardless of how we do the auto-selection of refinement level, users will need to be aware that for big meshes they may not get any element sub-divisions.

Contributor:

There could be, if order_ref gives a number lower than 20k, no? 🤔

Member:

The order_ref is just an initial value for ref before the while loop (when the if is true). So if order_ref < 16 && ne*(order_ref+1)*(order_ref+1) <= 100000, you will get more refinements than order_ref, and no fewer refinements than before.

Member:

The problem, for me, with @tzanio's suggestion (even when the while loop is moved after the if/else statement) is that it can result in big jumps in speed for a small change in the input. For example, if ne*(order_ref+1)*(order_ref+1) is close to but less than 2M, we get ref=order_ref; if we increase ne just a little, pushing ne*(order_ref+1)*(order_ref+1) beyond 2M, then ref can immediately drop to 1.
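With illustrative numbers (order_ref = 4, so (order_ref+1)^2 = 25, and the 2M / 100k thresholds from this thread), the discontinuity can be made concrete; this is my reading of the criticized scheme, not code from the PR:

```cpp
// Take order_ref outright while the work estimate fits under 2M,
// otherwise fall back to the old loop capped at 100k evaluations.
int SelectRefWithJump(long ne, int order_ref)
{
   const long work = ne * (order_ref + 1L) * (order_ref + 1L);
   if (work <= 2000000L) { return order_ref; }
   int ref = 1;
   while (ref < 16 && ne * (ref + 1L) * (ref + 1L) <= 100000L) { ref++; }
   return ref;
}
```

Here ne = 80000 gives ref = 4 (work exactly 2M), while ne = 80001 gives ref = 1.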

Contributor:

Aha, I see; yes, in the modified version it would work. But I thought it more logical to iterate up to the same limiting number as in the criterion for order_ref, to keep continuity and not jump down, as you say.

najlkin (Contributor), Jul 10, 2024:

What I was proposing would solve it (updated version to have the same numbers):

int ne = mesh->GetNE(), ref = 1;
int autoref_max = std::min(std::max(ne*(order_ref+1)*(order_ref+1), 100000), 2000000);
while (ref < 16 && ne*(ref+1)*(ref+1) <= autoref_max) { ref++; }

(which implies ref==order_ref if it is in the range and order_ref < 16)
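As a self-contained function (with ne standing in for mesh->GetNE()), the clamped variant can be checked to degrade gradually near the threshold instead of jumping:

```cpp
#include <algorithm>

// Clamped selection: the evaluation budget is the work estimate at
// order_ref, clamped to [100000, 2000000]; ref then grows while the
// actual work stays within that budget.
int AutoRefClamped(long ne, int order_ref)
{
   const long budget = std::min(std::max(ne * (order_ref + 1L) * (order_ref + 1L),
                                         100000L), 2000000L);
   int ref = 1;
   while (ref < 16 && ne * (ref + 1L) * (ref + 1L) <= budget) { ref++; }
   return ref;
}
```

With order_ref = 4, increasing ne from 80001 through 500000 to 1000000 steps ref down 4 → 2 → 1, with no cliff at the 2M boundary.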

minv = 0.0;
maxv = 1.0;
logscale = false;