
Conversation

@LukeSeifert
Collaborator

Summary of changes

This PR adds more data pulled in by the data download bash script, along with two new examples that make use of the new data. The addition is part of my proposed work (adding more datasets), though more datasets will still need to be added. The data is added now in order to get an initial estimate of the effects of varying datasets.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • I have read the CONTRIBUTING document.
  • My code follows the code style of this project.
  • My change requires a change to the documentation.
  • I have updated the documentation accordingly.
  • I have added tests to cover my changes.
  • All new and existing tests passed.

Associated Issues and PRs

  • Issue: #

Associated Developers

  • Dev: @

Checklist for Reviewers

Reviewers should use this link to get to the
Review Checklist before they begin their review.

@LukeSeifert LukeSeifert self-assigned this Nov 12, 2025
@LukeSeifert LukeSeifert mentioned this pull request Jan 12, 2026
@LukeSeifert LukeSeifert requested a review from LP1012 January 14, 2026 20:18
@LP1012 LP1012 left a comment

Ok, I LOVE that you have specified the return type of your python functions. Please never stop doing that.

Everything looks good except for one function I saw that didn't have a docstring -- should be a really quick thing to address. I am trusting your values because I am certainly not going to check them :-).

  • Read the PR description
  • Read through all the changes, considering the following questions.
    • Is there anything you can praise about this PR? Start with
      praise.
    • Are variable names brief but descriptive?
    • Are new/changed functions no longer than a paragraph?
    • Do all function parameters have default values where appropriate?
    • Is the code clear and clean? (see Robert C. Martin's
      Clean Code)
    • Is there enough documentation?
    • Does the programming style meet the requirements of the
      repository (PEP8 for Python, the Google C++ Style Guide, etc.)?
    • If a new feature has been added, or a bug fixed, has a test been
      added to confirm good behavior?
    • Does the test actually test the new/changed functionality?
    • Does the test successfully test edge cases?
    • Does the test successfully test corner cases?
    • If the repository has continuous integration: does the PR pass
      the tests?
    • If there is no continuous integration, check out the branch
      locally, build, and run the tests.
    • Do the tests pass on your machine?
    • Is the PR free of random cruft (built files, .swp files,
      etc.)?
    • Do all files in the PR belong in the repository?
    • If the PR deletes files, is this appropriate?
    • If the PR adds files or data, are these new files compatible
      with the repository license?
    • Make a review, leaving kind comments and suggesting changes
      where needed (to resolve the above).
    • Has the author resolved all of the comments and suggestions
      in your review?
  • When you approve of the PR, merge and close it.
  • Does this PR close an issue? If so, be sure to descriptively
    close this issue when the PR is merged.
  • Thank the author for their contribution.

  max_nfev=1e5,
- args=(times, counts, fit_function))
+ args=(times, counts, count_err, fit_function))

Do you have docstrings for this function? I am not seeing anything...

Collaborator Author

Good question! It isn't visible in the difference, but the fit_function variable is set to be either the Grouper._pulse_fit_function or Grouper._saturation_fit_function. Both of those functions do have docstrings (line 127 and line 161; the fit_function is set to one of them in lines 241-247).
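A minimal sketch of the pattern described here, assuming hypothetical bodies for the two fit functions (only the method names `_pulse_fit_function` and `_saturation_fit_function` come from the PR; the math and the selector method are illustrative):

```python
import math

class Grouper:
    """Illustrative stand-in for the real Grouper class."""

    def __init__(self, irradiation="pulse"):
        self.irradiation = irradiation

    def _pulse_fit_function(self, t, a, lam):
        """Single-group count rate after a pulse irradiation (illustrative)."""
        return a * lam * math.exp(-lam * t)

    def _saturation_fit_function(self, t, a, lam):
        """Single-group count rate after a saturation irradiation (illustrative)."""
        return a * math.exp(-lam * t)

    def _select_fit_function(self):
        """Bind fit_function once; the fitter then receives it via ``args``."""
        if self.irradiation == "pulse":
            return self._pulse_fit_function
        return self._saturation_fit_function
```

Because `fit_function` is a bound method picked up front, the call site in the diff only ever sees a callable, which is why no new docstring appears in the diff itself.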


Ah, my bad, I misread the code, thinking this was a new function rather than a function call.

@LukeSeifert LukeSeifert requested a review from LP1012 January 19, 2026 14:55
Contributor

@bryanjp05 bryanjp05 left a comment


The changes look good. I just had a few clarifying questions about various parts of the code.


  NFY_ZIP_NAME="ENDF-B-${ENDF_VERSION}${SEPARATOR}nfy.zip"
- NFY_URL="https://www.nndc.bnl.gov/endf-b7.1/zips/${NFY_ZIP_NAME}"
+ NFY_URL="https://www.nndc.bnl.gov/endf-b${INTEGER_VALUE}.${DIGIT_PART}/zips/${NFY_ZIP_NAME}"
Contributor

Where are INTEGER_VALUE and DIGIT_PART coming from?

Collaborator Author

INTEGER_VALUE comes from line 37 and translates the roman numeral to an integer value. DIGIT_PART comes from line 36 and just extracts the number after the period so that it is generically formatted (enabling use of different ENDF data).
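The same splitting logic can be sketched in Python (the function and variable names here are illustrative, not the script's actual ones; only the "VII" → 7 and ".1" → "1" behavior is taken from the explanation above):

```python
def split_endf_version(version="VII.1"):
    """Split an ENDF/B version tag such as 'VII.1' into its arabic major
    number (INTEGER_VALUE in the script) and the digit after the period
    (DIGIT_PART). Illustrative sketch only."""
    roman_map = {"I": 1, "V": 5, "X": 10}
    roman, _, digit_part = version.partition(".")
    # Subtractive roman-numeral conversion: add each symbol's value,
    # subtracting it instead when a larger symbol follows (e.g. IV = 4).
    integer_value = 0
    for i, ch in enumerate(roman):
        value = roman_map[ch]
        if i + 1 < len(roman) and roman_map[roman[i + 1]] > value:
            integer_value -= value
        else:
            integer_value += value
    return integer_value, digit_part
```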



# JEFF --------------------------------------------------------------------
JEFF_VERSION="3.1.1"
Contributor

How come JEFF only works as version 3.1.1?

Collaborator Author

That's just the only version I've added so far! In a later PR I'll likely be adding more data.

Comment on lines +12 to +16
"unprocessed_data_dir": "/home/luke/github/mosden/mosden/data/unprocessed/",
"processed_data_dir": "/home/luke/github/mosden/examples/pure_endfb71/",
"output_dir": "/home/luke/github/mosden/examples/pure_endfb71/",
"log_level": 20,
"log_file": "/home/luke/github/mosden/examples/pure_endfb71/log.log"
Contributor

Will these directory lines later be changed to a standardized format?

Collaborator Author
@LukeSeifert LukeSeifert Jan 20, 2026

Hmm, this raises a good point. Default arguments are available that auto-fill these parameters with the correct paths if the user does not provide them, so the only reason to include them here is to demonstrate that the parameters exist. I think for now this could be left as-is, but I will create an issue (#40) that will:

  1. Remove hard-coded paths from examples
  2. Add documentation showing users what parameters exist in the input, what the options are, and what those options do
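One way such defaulting could look, as a hypothetical sketch (MoSDeN's actual parameter names and default values may differ):

```python
from pathlib import Path

# Hypothetical defaults; the real package derives its own paths.
DEFAULTS = {
    "unprocessed_data_dir": Path.home() / "mosden" / "data" / "unprocessed",
    "output_dir": Path.cwd() / "output",
    "log_level": 20,
}

def resolve_settings(user_input=None):
    """Return the run settings, filling in any parameter the user
    omitted with the package default so examples need not hard-code
    absolute paths."""
    settings = dict(DEFAULTS)
    settings.update(user_input or {})
    return settings
```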

Contributor

Okay, sounds like a good plan.

- result = linregress(data_val, group_val)
+ try:
+     result = linregress(data_val, group_val)
+     pcc = result.rvalue
Contributor

Just curious as to what pcc stands for?

Collaborator Author

Pearson Correlation Coefficient! I don't like using acronyms for variable names, but pearson_corr_coef was going to be a bit clunky to work with in some places.
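For reference, the guarded computation in the diff can be sketched as a small helper (the function name is illustrative; the try/except mirrors the snippet above, since `scipy.stats.linregress` raises `ValueError` on degenerate input such as identical x values):

```python
import math
from scipy.stats import linregress

def pearson_corr(data_val, group_val):
    """Pearson correlation coefficient (pcc) between two samples."""
    try:
        result = linregress(data_val, group_val)
        pcc = result.rvalue  # rvalue is exactly the Pearson r
    except ValueError:
        # e.g. all x values identical: no regression, so no pcc
        pcc = math.nan
    return pcc
```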

Contributor

I see, makes sense.

data : dict[str, dict[str, float]]
Dictionary containing the processed data.
"""
import openmc.data
Contributor

Why do you import openmc.data in each function instead of just once at the beginning of the file?

Collaborator Author

Good question! The main reason I'm doing it is because MoSDeN can be run entirely without OpenMC. Although importing it once at the start would be easier (since I would only need to import it once), a user running the model that does not include any OpenMC features could get a module import error regarding OpenMC even though they don't need it. The cost for having these imports replicated and rerun ~3 times is also fairly small (see https://stackoverflow.com/questions/29504313/python-what-is-the-cost-of-re-importing-modules).
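The pattern being described — keep OpenMC optional by importing it only inside the functions that need it — can be sketched like this (the function names are illustrative):

```python
def core_workflow():
    """A code path every user can run; no optional imports involved."""
    return "no OpenMC needed here"

def openmc_workflow():
    """A code path only OpenMC users hit; the deferred import means
    everyone else never triggers (or pays for) loading openmc.data."""
    try:
        import openmc.data  # deferred: only this branch needs it
    except ImportError as exc:
        raise ImportError(
            "This feature requires OpenMC; install it to use "
            "OpenMC-backed data processing."
        ) from exc
    return openmc.data
```

Repeating the local import in several functions is cheap: after the first import, Python just returns the cached module from `sys.modules`.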

@LukeSeifert LukeSeifert requested a review from bryanjp05 January 20, 2026 15:43
@LP1012 LP1012 left a comment

Looks good to me. I'll approve on the condition that you address the feedback from the other reviewer.

@bryanjp05 bryanjp05 merged commit c7b6175 into arfc:main Jan 22, 2026
1 check passed
@LukeSeifert LukeSeifert deleted the endf-data branch January 27, 2026 15:22