
Conversation

@bjoernsteinhagen (Contributor) commented Jul 12, 2022

Current status: 155 passed, 9 skipped

9 skipped:

  • test_DesignOverview.py (s) (Bugfix G-30112)
  • test_ModelCheck.py (s) (Method does not exist: model_check__get_object_groups_operation)
  • test_SectionDialogue.py (s) (Method does not exist: create_section_favorite_list)
  • test_concreteDesign.py (s) (Type does not exist: concrete_durability)
  • test_objectInformation.py (sss) (Type does not exist: array_of_get_center_of_gravity_and_objects_info_elements_type)
  • test_steelEffectiveLengths.py (s) (Bugfix G-30112)
  • test_nodalLoad.py (Assert commented out) (Bugfix G-30467)
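
These skips are applied with pytest's skipif marker (noted later in this thread) rather than by removing the tests. A minimal sketch of the pattern follows; the flag, condition, and reason shown here are illustrative stand-ins, not copied from the actual test files:

import pytest

# Illustrative flag; the real tests gate on the client/program state, not a constant.
RFEM_BUGFIX_G30112_AVAILABLE = False

@pytest.mark.skipif(not RFEM_BUGFIX_G30112_AVAILABLE,
                    reason="Bugfix G-30112: skipped until the fix is released")
def test_design_overview():
    # Body omitted; the real test drives the RFEM model through the web service client.
    assert True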

Björn Steinhagen added 24 commits July 5, 2022 10:47
- All Add-Ons included in test definition
- 7 Add-Ons are currently unavailable in the RFEM GUI; these were commented out in the test definition
- All remaining Add-On assertions pass
- Concentrated varying load distributions updated. The delta_distance parameter is now calculated automatically and no longer needs to be supplied by the user (see the first sketch after this commit list)
- All tests and assertions pass
- Same 'relative distance' parameter updates as for line loads
- A large proportion of function calls covers all combinations of member loads (69 in total); these were all called without assertions. Creating a further 69 assertions (minimum) was not deemed necessary, so select assertions per member load type were added instead
- Bugfix G-30467: Individual Mass Components
- Asserts pertaining to this bug commented out
- nodalLoad.py updated so that mass_global is set within the else branch, which makes more sense since it only applies when individual_mass_components == False (see the second sketch after this commit list)
- Asserts added. No issues or problems found
- Assertions added for unit test file
- Bugs found and fixed for rotary motion load type
- Missing assertions added
- DocStrings within thickness.py for layers needed some additional information
- fictitious_thickness for shape orthotropy needs some attention. Commented out for now
- Assertions added. No issues.
- Assertions added. No issues
- Bug found in Surface.Membrane(). Default was set to TYPE_WITHOUT_THICKNESS. Changed to TYPE_MEMBRANE
- NOTE: Attention needs to be paid to DocStrings.
- Bug in the type assignment in surface.py corrected
- Minor bugs in surfacesetload.py fixed
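
First sketch, on the delta_distance change above: a hypothetical helper showing the kind of auto-calculation meant. The parameter names are assumptions for illustration; the actual load code in the client differs.

def resolve_delta_distance(distance_a, distance_b, load_count):
    # Spacing between equally spaced concentrated loads, derived from the
    # positions of the first and last load instead of a user-supplied value.
    if load_count < 2:
        raise ValueError("need at least two concentrated loads")
    return (distance_b - distance_a) / (load_count - 1)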
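Second sketch, on the individual-mass-components change: a minimal sketch of the corrected branching in nodalLoad.py, with attribute names taken from the commit note rather than verified against the source.

def assign_mass(load, individual_mass_components, mass_global=0.0,
                mass_x=0.0, mass_y=0.0, mass_z=0.0):
    if individual_mass_components:
        # Per-axis masses apply only when individual components are enabled.
        load.mass_x = mass_x
        load.mass_y = mass_y
        load.mass_z = mass_z
    else:
        # mass_global now lives in the else branch: it is only meaningful
        # when individual_mass_components == False.
        load.mass_global = mass_global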
The SpectralAnalysisSettings parameter signed_results_using_dominant_mode doesn't work correctly.
test_DesignOverview and test_steelEffectiveLengths are fixed on the OndrejMichal_testsCorrection branch.

FAILED ..\Sources\RFEM_Python_Client_4\UnitTests\test_DesignOverview.py::test_designOverview - TypeError: 'int' object is not subscriptable
FAILED ..\Sources\RFEM_Python_Client_4\UnitTests\test_SpectralSettings_test.py::test_spectral_analysis_settings - assert False == True
FAILED ..\Sources\RFEM_Python_Client_4\UnitTests\test_steelEffectiveLengths.py::test_steelEffectiveLengths - suds.TypeNotFound: Type not found: 'no'
@pull-request-quantifier-deprecated

This PR has 780 quantified lines of changes. In general, a change size of up to 200 lines is ideal for the best PR experience!


Quantification details

Label      : Extra Large
Size       : +629 -151
Percentile : 92.67%

Total files changed: 34

Change summary by file extension:
.py : +629 -151

Change counts above are quantified counts, based on the PullRequestQuantifier customizations.

Why proper sizing of changes matters

Optimal pull request sizes drive a more predictable PR flow as they strike a
balance between PR complexity and PR review overhead. PRs within the
optimal size (typically small or medium-sized PRs) mean:

  • Fast and predictable releases to production:
    • Optimal size changes are more likely to be reviewed faster with fewer
      iterations.
    • Similarity in low PR complexity drives similar review times.
  • Review quality is likely higher as complexity is lower:
    • Bugs are more likely to be detected.
    • Code inconsistencies are more likely to be detected.
  • Knowledge sharing is improved within the participants:
    • Small portions can be assimilated better.
  • Better engineering practices are exercised:
    • Solving big problems by dividing them into well-contained, smaller problems.
    • Exercising separation of concerns within the code changes.

What can I do to optimize my changes

  • Use the PullRequestQuantifier to quantify your PR accurately
    • Create a context profile for your repo using the context generator
    • Exclude files that do not need review or do not increase the review complexity. Example: autogenerated code, docs, project IDE setting files, binaries, etc. Check out the Excluded section of your prquantifier.yaml context profile.
    • Understand your typical change complexity, drive towards the desired complexity by adjusting the label mapping in your prquantifier.yaml context profile.
    • Only use the labels that matter to you, see context specification to customize your prquantifier.yaml context profile.
  • Change your engineering behaviors
    • For PRs that fall outside of the desired spectrum, review the details and check if:
      • Your PR could be split in smaller, self-contained PRs instead
      • Your PR only solves one particular issue. (For example, don't refactor and code new features in the same PR).

How to interpret the change counts in git diff output

  • One line was added: +1 -0
  • One line was deleted: +0 -1
  • One line was modified: +1 -1 (git diff doesn't track modifications; it
    interprets that line as one addition plus one deletion)
  • Change percentiles: Change characteristics (addition, deletion, modification)
    of this PR in relation to all other PRs within the repository.



@OndraMichal (Contributor) left a comment


The issue with the SpectralAnalysisSettings parameter signed_results_using_dominant_mode is waiting on Olga.


# Review excerpt from the test under discussion; Model comes from RFEM.initModel.
Model.clientModel.service.finish_modification()

assert Model.clientModel.service.get_modal_analysis_settings(1).acting_masses_about_axis_x_enabled == False
Contributor
ModalAnalysisSettings() doesn't work. All asserts are failing.

Contributor Author
Asserts are now passing :-)

@OndraMichal (Contributor)

All tests that were missing asserts are now covered. The only skipped tests are those that pytest ignores (@pytest.mark.skipif).

Björn Steinhagen added 3 commits July 20, 2022 10:25
- Missing blank line, last line of scripts
- Spectral analysis AddOn
- Examining failing unit tests
@bjoernsteinhagen bjoernsteinhagen self-assigned this Jul 20, 2022
@bjoernsteinhagen (Contributor Author)

See main comment at top of pull request regarding status of skipped unit tests.

@OndraMichal (Contributor)

158 passed, 7 skipped in 199.21s version 6.02.0023.119.f2143bd6214

@OndraMichal OndraMichal merged commit 39e3ab5 into main Jul 28, 2022
@OndraMichal OndraMichal deleted the BjoernSteinhagen_UnitsTestsMissingAsserts branch July 28, 2022 05:29