
bugfixes for multiple groups and scalar variable transforms #542

Merged · 5 commits · Mar 6, 2024

Conversation

peverwhee
Collaborator

Description

Updates to enable more than one scheme group and to enable unit transforms for scalar (non-dimensioned) variables.

Updated Files

Groups:

  • ccpp_suite.py - don't promote loop variables to suite level
  • host_cap.py - fix for iterating over suite part list
  • suite_objects.py - check if variable is at suite level before giving up hope and throwing an error

Scalar transforms:

  • suite_objects.py - don't try to do vertical dimension transform for scalar variable; conditionally write comments for var transform calls
  • var_props.py - don't include parenthesis (e.g. variable1() = 2 * variable2()) for scalar variable transforms

Misc:

  • metavar.py - fix for a bug surely introduced by me while converting to f-strings at some point
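The scalar-transform fix in var_props.py boils down to not emitting index parentheses when the variable has no dimensions. A minimal sketch of that idea (this helper and its names are illustrative, not the actual var_props.py code):

```python
# Hypothetical sketch: building a unit-transform assignment for scalar vs.
# dimensioned variables. For a scalar, the generated Fortran statement must
# omit the parentheses entirely (never 'variable1() = 2 * variable2()').

def transform_stmt(lhs_name, rhs_name, factor, indices=None):
    """Return a Fortran assignment applying a multiplicative unit transform.

    indices: list of index-range strings for a dimensioned variable,
    or None/empty for a scalar (no parentheses are emitted).
    """
    if indices:
        sub = '(' + ','.join(indices) + ')'
    else:
        sub = ''  # scalar case: plain 'lhs = factor*rhs'
    return f"{lhs_name}{sub} = {factor}*{rhs_name}{sub}"

print(transform_stmt('tcels', 'tkelv', '1.0'))          # scalar form
print(transform_stmt('qv_kg', 'qv_g', '1.0E-3',
                     indices=['1:ncol', '1:pver']))     # array form
```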

User interface changes?: No

Testing:

  • tests removed:
  • unit tests:
  • system tests: modified capgen_test to include multiple groups
  • manual testing:

gold2718 (Collaborator) left a comment:

Some questions, suggestions, and concerns

@@ -483,7 +483,7 @@ def write_host_cap(host_model, api, module_name, output_dir, run_env):
     # Look for any loop-variable mismatch
     for suite in api.suites:
         spart_list = suite_part_list(suite, stage)
-        for spart in spart_list:
+        for _, spart in sorted(enumerate(spart_list)):
gold2718 (Collaborator):

What does the sorted(enumerate(spart_list)) buy here? What is wrong with just going through spart_list?
If it is some new sort of 'best practice', why here and at line 533 but not line 566?

peverwhee (Collaborator, Author):
Good question! I had added "sorted" to line 533 on a different branch to keep the use statements in a consistent order, but that broke when multiple groups are used (thereby invalidating the whole point of the change, of course), so that line became "sorted(enumerate(". I then migrated the full change to the other loop for... reasons that made sense at the time.

I'll revert these changes here and bring in the sorted changes (there are more than just this one) in a future PR.
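For the record, the construct under discussion is a no-op for ordering: enumerate yields (index, item) pairs, and sorting those pairs by their (unique) first element just reproduces the original list order at extra cost. A quick self-contained illustration:

```python
# sorted(enumerate(...)) vs. a plain walk: identical iteration order, since
# the enumerate indices are unique and already ascending.
spart_list = ['register', 'init', 'run', 'final']

plain = [spart for spart in spart_list]
via_sorted = [spart for _, spart in sorted(enumerate(spart_list))]

assert plain == via_sorted  # same order either way
```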

peverwhee (Collaborator, Author):
reverted

Comment on lines 1539 to 1547
vdim_name = vert_dim.split(':')[-1]
group_vvar = self.__group.call_list.find_variable(vdim_name)
vname = group_vvar.get_prop_value('local_name')
lindices[vdim] = '1:'+vname
rindices[vdim] = '1:'+vname
if compat_obj.has_vert_transforms:
    rindices[vdim] = vname+':1:-1'
# end if
# end if
gold2718 (Collaborator):
Do vertical dimensions have to begin at 1? Should we use variable starting point if provided?

Suggested change:

Before:
    vdim_name = vert_dim.split(':')[-1]
    group_vvar = self.__group.call_list.find_variable(vdim_name)
    vname = group_vvar.get_prop_value('local_name')
    lindices[vdim] = '1:'+vname
    rindices[vdim] = '1:'+vname
    if compat_obj.has_vert_transforms:
        rindices[vdim] = vname+':1:-1'
    # end if
    # end if

After:
    vdims = vert_dim.split(':')
    vdim_name = vdims[-1]
    group_vvar = self.__group.call_list.find_variable(vdim_name)
    if group_vvar is None:
        raise CCPPError(f"add_var_transform: Cannot find dimension variable, {vdim_name}")
    # end if
    vname = group_vvar.get_prop_value('local_name')
    if len(vdims) == 2:
        sdim_name = vdims[0]
        group_vvar = self.find_variable(sdim_name)
        if group_vvar is None:
            raise CCPPError(f"add_var_transform: Cannot find dimension variable, {sdim_name}")
        # end if
        sname = group_vvar.get_prop_value('local_name')
    else:
        sname = '1'
    # end if
    lindices[vdim] = sname+':'+vname
    if compat_obj.has_vert_transforms:
        rindices[vdim] = vname+':'+sname+':-1'
    else:
        rindices[vdim] = sname+':'+vname
    # end if
    # end if

peverwhee (Collaborator, Author):
smart! changed.
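The accepted suggestion parses a vertical dimension spec of the form "start_name:end_name" (or just "end_name", which implies a lower bound of 1) and reverses the right-hand-side range when a vertical-flip transform is needed. A standalone sketch of that parsing logic, with the framework's variable lookup replaced by a plain dict (function and parameter names here are illustrative, not the real API):

```python
# Sketch of the suggested index-range logic: the lookup of standard names to
# Fortran local names is stubbed out with a dict for illustration.

def vert_indices(vert_dim, local_names, has_vert_transform):
    """Return (lhs_range, rhs_range) index strings for a vertical dimension.

    vert_dim is 'start_std_name:end_std_name' or just 'end_std_name';
    local_names maps standard names to Fortran local names.
    """
    vdims = vert_dim.split(':')
    vname = local_names[vdims[-1]]                       # extent, e.g. pver
    sname = local_names[vdims[0]] if len(vdims) == 2 else '1'
    lhs = f"{sname}:{vname}"
    # A vertical-flip transform walks the source range backwards
    rhs = f"{vname}:{sname}:-1" if has_vert_transform else lhs
    return lhs, rhs

names = {'vertical_layer_dimension': 'pver'}
print(vert_indices('vertical_layer_dimension', names, True))
```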

    if compat_obj.has_vert_transforms:
        rindices[vdim] = vname+':1:-1'
    # end if
# end if
vdim_name = vert_dim.split(':')[-1]
gold2718 (Collaborator):
Isn't this section (1548 - 1554) a repeat of the stuff that was moved inside the if vdim >= 0: (line 1538)? If so, should a test have caught this?

peverwhee (Collaborator, Author):
Good catch - the only code that definitely needed to move inside the if block was vdims = vert_dim.split(':'), so a test wouldn't have caught this. That said, I did add some more testing that would have caught the original bug.

@@ -1614,20 +1624,25 @@ def write(self, outfile, errcode, errmsg, indent):
         for (var, internal_var) in self.__var_debug_checks:
             stmt = self.write_var_debug_check(var, internal_var, cldicts, outfile, errcode, errmsg, indent+1)
         # Write any reverse (pre-Scheme) transforms.
-        outfile.write('! Compute reverse (pre-scheme) transforms', indent+1)
+        if len(self.__reverse_transforms) > 0:
+            outfile.write('! Compute reverse (pre-scheme) transforms', indent+1)
gold2718 (Collaborator):
This could be:

Suggested change:
-            outfile.write('! Compute reverse (pre-scheme) transforms', indent+1)
+            outfile.comment('Compute reverse (pre-scheme) transforms', indent+1)

peverwhee (Collaborator, Author):
changed
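The suggested outfile.comment call presumably just wraps the bang prefix so callers don't hand-build comment strings. A minimal sketch of a writer with such a convenience method (this class is illustrative only; the real outfile object is the framework's Fortran writer):

```python
# Illustrative writer with a comment() convenience method, as suggested.

class SketchWriter:
    INDENT = 3  # spaces per indent level

    def __init__(self):
        self.lines = []

    def write(self, statement, indent):
        self.lines.append(' ' * (self.INDENT * indent) + statement)

    def comment(self, text, indent):
        # Equivalent to write('! ' + text, indent): prepends Fortran's
        # comment character so callers need not remember the '!'.
        self.write('! ' + text, indent)

out = SketchWriter()
out.comment('Compute reverse (pre-scheme) transforms', 1)
print(out.lines[0])  # '   ! Compute reverse (pre-scheme) transforms'
```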

         # Write the scheme call.
         stmt = 'call {}({})'
         outfile.write('', indent+1)
         outfile.write('! Call scheme', indent+1)
         outfile.write(stmt.format(self.subroutine_name, my_args), indent+1)
         # Write any forward (post-Scheme) transforms.
         outfile.write('', indent+1)
-        outfile.write('! Compute forward (post-scheme) transforms', indent+1)
+        if len(self.__forward_transforms) > 0:
+            outfile.write('! Compute forward (post-scheme) transforms', indent+1)
gold2718 (Collaborator):
Also here:

Suggested change:
-            outfile.write('! Compute forward (post-scheme) transforms', indent+1)
+            outfile.comment('Compute forward (post-scheme) transforms', indent+1)

peverwhee (Collaborator, Author):
changed.

@peverwhee peverwhee requested a review from gold2718 March 5, 2024 20:18
gold2718 (Collaborator) left a comment:

Looks good now, thanks!

climbfuji (Collaborator) left a comment:

Thanks for this, looks great!

climbfuji (Collaborator):

I'll go ahead and merge this, then we can update the other open capgen PR (#529) for which the discussion is still ongoing.

@climbfuji climbfuji merged commit ff28d15 into NCAR:feature/capgen Mar 6, 2024
10 checks passed