
Conversation

@OndraMichal
Contributor

US-8268:
This PR implements get and set functions for Mesh and Optimization settings, including unit tests:
get_optimization_settings
set_optimization_settings
get_mesh_settings
set_mesh_settings
get_model_info
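A minimal sketch of the intended usage pattern (simplified for illustration: the real functions talk to a running RFEM model over the client, and the parameter names here are only examples, not the merged API):

```python
# Illustrative sketch (NOT the actual RFEM client code) of the
# get/set pattern this PR introduces: a settings object whose
# attributes default to None, and a setter that applies only the
# values the user explicitly assigned.

class MeshSettings:
    """Prototype settings container; None means 'not set by the user'."""
    def __init__(self):
        self.general_target_length_of_fe = None
        self.general_maximum_distance_between_node_and_line = None

# Stand-in for the model's current settings (assumed values).
_model_settings = {
    'general_target_length_of_fe': 0.5,
    'general_maximum_distance_between_node_and_line': 0.001,
}

def set_mesh_settings(settings: MeshSettings) -> None:
    # Apply only the parameters the user actually assigned.
    for key, value in vars(settings).items():
        if value is not None:
            _model_settings[key] = value

def get_mesh_settings() -> dict:
    return dict(_model_settings)

settings = MeshSettings()
settings.general_target_length_of_fe = 0.25  # only this one is set
set_mesh_settings(settings)
```

Unassigned attributes stay None and leave the model's existing values untouched; only the explicitly set target length is overwritten.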

@OndraMichal
Contributor Author

OndraMichal commented Jan 27, 2022

Excuse the changes in:
Examples/Cantilever/Demo1.py, Examples/main.py, RFEM/initModel.py, and UnitTests/test_MeshGenerationStatistics.py.
Those are just minor adjustments.

…Mesh&OptimizationSettings

unit test: 110 passed, 15 skipped in 94.60s (0:01:34)
Contributor

@dogukankaratas dogukankaratas left a comment


One more thing to discuss about Optimization Settings:

It works even without activating the optimization add-on, which could cause licensing problems in the future.

Technically, it works well.

'windsimulation_mesh_config_value_keep_results_if_mesh_deleted': None,
'windsimulation_mesh_config_value_consider_surface_thickness': None,
'windsimulation_mesh_config_value_run_rwind_silent': None}

Contributor


It would be better to explain to users which configuration is controlled by which data type (integer, string, boolean, or enumeration item).

There are two possible ways:

  • Add default RFEM values instead of 'None'.
  • Add DocStrings to explain configurations.

Contributor Author


Well, I cannot add default values: I have to distinguish which parameters are set and which are not. This class is just a prototype to show the user which parameters can be set. Frankly, I'm not able to describe them more eloquently than the name of the parameter itself :-)
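The None-as-"not set" convention described here can be sketched as follows, using the windsimulation keys quoted earlier (the filtering step is illustrative, not the merged code):

```python
# Sketch of why None defaults matter here: only keys the user has
# actually assigned get sent to the model; None entries are skipped.
user_settings = {
    'windsimulation_mesh_config_value_keep_results_if_mesh_deleted': None,
    'windsimulation_mesh_config_value_consider_surface_thickness': True,
    'windsimulation_mesh_config_value_run_rwind_silent': None,
}

# Drop everything the user left untouched.
params_to_apply = {k: v for k, v in user_settings.items() if v is not None}
```

With a real default in place of None there would be no way to tell an untouched parameter from one the user deliberately set to that same value.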

Contributor


Maybe adding the parameter types to the DocStrings can help us document this. That looks like my duty 👍🏻

'''
Model Check Process Object Groups Option Type
'''
CROSS_LINES, CROSS_MEMBERS, DELETE_UNUSED_NODES, UNITE_NODES_AND_DELETE_UNUSED_NODES = range(4)
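The snippet above could also be written as a documented Enum, for example (a suggestion only, not the merged code; the class name follows the snippet, and the member values assume the same range(4) order):

```python
from enum import Enum

class ModelCheckProcessOptionType(Enum):
    '''
    Model Check Process Object Groups Option Type.
    Each option is an enumeration item, not a plain integer or string.
    '''
    CROSS_LINES = 0
    CROSS_MEMBERS = 1
    DELETE_UNUSED_NODES = 2
    UNITE_NODES_AND_DELETE_UNUSED_NODES = 3
```

An Enum keeps the docstring, the member names, and the allowed values in one place, so the type of each configuration option documents itself.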

Contributor


Enumeration items of 'Shape of Finite Elements' should be added.

[screenshot of the 'Shape of Finite Elements' setting in the RFEM mesh dialog]
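A hypothetical sketch of the requested enum (the member names here are placeholders; the actual names should be taken from the RFEM dialog, not from this sketch):

```python
from enum import Enum

class SurfaceMeshShapeOfFiniteElements(Enum):
    '''
    Shape of Finite Elements (enumeration item).
    Member names are illustrative placeholders.
    '''
    TRIANGLES = 0
    QUADRANGLES = 1
```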

Contributor Author


Fixed by adding this enum to enums.py and a unit test.

@pull-request-quantifier-deprecated

This PR has 308 quantified lines of changes. In general, a change size of up to 200 lines is ideal for the best PR experience!


Quantification details

Label      : Large
Size       : +283 -25
Percentile : 70.8%

Total files changed: 9

Change summary by file extension:
.py : +283 -25

Change counts above are quantified counts, based on the PullRequestQuantifier customizations.

Why proper sizing of changes matters

Optimal pull request sizes drive a more predictable PR flow as they strike a
balance between PR complexity and PR review overhead. PRs within the
optimal size (typically small or medium sized PRs) mean:

  • Fast and predictable releases to production:
    • Optimal size changes are more likely to be reviewed faster with fewer
      iterations.
    • Similarity in low PR complexity drives similar review times.
  • Review quality is likely higher as complexity is lower:
    • Bugs are more likely to be detected.
    • Code inconsistencies are more likely to be detected.
  • Knowledge sharing is improved within the participants:
    • Small portions can be assimilated better.
  • Better engineering practices are exercised:
    • Solving big problems by dividing them into well-contained, smaller problems.
    • Exercising separation of concerns within the code changes.

What can I do to optimize my changes

  • Use the PullRequestQuantifier to quantify your PR accurately
    • Create a context profile for your repo using the context generator
    • Exclude files that are not necessary to be reviewed or do not increase the review complexity. Example: Autogenerated code, docs, project IDE setting files, binaries, etc. Check out the Excluded section from your prquantifier.yaml context profile.
    • Understand your typical change complexity, drive towards the desired complexity by adjusting the label mapping in your prquantifier.yaml context profile.
    • Only use the labels that matter to you, see context specification to customize your prquantifier.yaml context profile.
  • Change your engineering behaviors
    • For PRs that fall outside of the desired spectrum, review the details and check if:
      • Your PR could be split in smaller, self-contained PRs instead
      • Your PR only solves one particular issue. (For example, don't refactor and code new features in the same PR).

How to interpret the change counts in git diff output

  • One line was added: +1 -0
  • One line was deleted: +0 -1
  • One line was modified: +1 -1 (git diff doesn't know about modified, it will
    interpret that line like one addition plus one deletion)
  • Change percentiles: Change characteristics (addition, deletion, modification)
    of this PR in relation to all other PRs within the repository.
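The +1 -1 accounting for a modified line can be reproduced outside git, e.g. with Python's difflib, which likewise reports a modified line as one deletion plus one addition:

```python
import difflib

old = ['alpha\n', 'beta\n', 'gamma\n']
new = ['alpha\n', 'BETA\n', 'gamma\n']   # exactly one line modified

diff = list(difflib.unified_diff(old, new, lineterm='\n'))

# Count added/removed lines, skipping the '---'/'+++' file headers.
added = sum(1 for line in diff
            if line.startswith('+') and not line.startswith('+++'))
removed = sum(1 for line in diff
              if line.startswith('-') and not line.startswith('---'))
# The single modified line shows up as one addition plus one deletion.
```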



@OndraMichal OndraMichal merged commit 5e2ed29 into main Mar 10, 2022
@OndraMichal OndraMichal deleted the OndrejMichal-Mesh&OptimizationSettings branch March 10, 2022 06:32

3 participants