CI test #1054
Conversation
CodeAnt AI is reviewing your PR.

PR Reviewer Guide 🔍
Here are some key observations to aid the review process:

PR Code Suggestions ✨
No code suggestions found for the PR.

CodeAnt AI finished reviewing your PR.
Pull Request Overview
This PR contains a minor documentation improvement to the m_bubbles module. The change removes the article "the" from the module's brief description comment for better grammatical clarity.
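For context, the change is of this shape (hypothetical wording; the actual comment text is not quoted in this thread):

```fortran
! Hypothetical before/after of the module's brief description comment;
! the real wording in m_bubbles.fpp is not shown in this thread.
! before:
!> @brief The module contains the procedures shared by the bubble models.
! after:
!> @brief The module contains the procedures shared by bubble models.
```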
Note: Other AI code review bot(s) detected
CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough
Updated module wording in src/simulation/m_bubbles.fpp.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Caller as Caller
    participant Solver as s_riemann_solver
    participant Flux as Flux/Source Logic
    participant GPU as GPU/OpenACC
    Caller->>Solver: call s_riemann_solver(dir_idx)
    Note right of Solver: compute idx1 = dir_idx(1)
    Solver->>Flux: build L/R states using idx1
    Flux->>Flux: accumulate nbub_L / nbub_R<br/>compute ptilde_L / ptilde_R
    alt avg_state == 2
        Flux->>Flux: apply ptilde-driven pressure adjustments
    end
    Flux->>GPU: include idx1, nbub_* in private/copyin lists
    Flux->>Solver: return fluxes & source terms
    Solver->>Caller: deliver results
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Files/areas to pay extra attention to:
Pre-merge checks and finishing touches
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
✨ Finishing touches
No issues found across 1 file
Codecov Report
❌ Patch coverage is

Additional details and impacted files

```
@@             Coverage Diff             @@
##           master    #1054      +/-   ##
==========================================
- Coverage   44.35%   44.35%    -0.01%
==========================================
  Files          71       71
  Lines       20589    20587        -2
  Branches     1990     1993        +3
==========================================
- Hits         9133     9132        -1
+ Misses      10311    10310        -1
  Partials     1145     1145
```

☔ View full report in Codecov by Sentry.
CodeAnt AI is running Incremental review
CodeAnt AI Incremental review completed.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/simulation/m_riemann_solvers.fpp (1)
2593-2601: Uninitialized `nbub_L`/`nbub_R` in ME4 bubble-variable flux lead to undefined results

In the `model_eqns == 4` branch, the "Add advection flux for bubble variables" loop uses `nbub_L` and `nbub_R` in the flux computation, but these scalars are never assigned in this branch before use. They are only computed later in the `model_eqns == 2 .and. bubbles_euler` path. This yields undefined values (compiler-dependent garbage) in the bubble-variable flux whenever ME4+bubbles is active.

Please initialize `nbub_L` and `nbub_R` in the ME4 block before the loop (e.g., by computing them from the same bubble-state data that should define number density, or by refactoring the nbub computation from the ME2-bubbles path into a shared helper and calling it here as well). A minimal initialization sketch follows.
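As a minimal sketch (assuming the ME4 block has access to `R0_L`/`R0_R`, `weight`, `nb`, `sgm_eps`, and the same `E_idx + num_fluids` slot used in the ME2 path), the initialization could mirror the ME2+bubbles normalization:

```fortran
! Hypothetical sketch: initialize nbub_L/nbub_R in the ME4 block before the
! bubble-variable advection loop, mirroring the ME2+bubbles normalization.
! Assumes R0_L/R0_R, weight, nb, sgm_eps, and the qK_prim slots are available
! here exactly as they are in the model_eqns == 2 path.
nbub_L = 0._wp
nbub_R = 0._wp
$:GPU_LOOP(parallelism='[seq]')
do i = 1, nb
    nbub_L = nbub_L + (R0_L(i)**3._wp)*weight(i)
    nbub_R = nbub_R + (R0_R(i)**3._wp)*weight(i)
end do
nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/max(nbub_L, sgm_eps)
nbub_R = (3._wp/(4._wp*pi))*qR_prim_rs${XYZ}$_vf(j + 1, k, l, E_idx + num_fluids)/max(nbub_R, sgm_eps)
```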
🧹 Nitpick comments (1)
src/simulation/m_riemann_solvers.fpp (1)
2657-2663: Directional indexing refactor via `idx1`/`idxi` looks correct and consistent

The introduction of `idx1` (normal-velocity index) and `idxi` (component index) in the HLLC bubble branches keeps all uses aligned with `dir_idx(1)` and `dir_idx(i)` while improving clarity of normal vs tangential components. `idx1` is declared `private` in GPU regions and is set from `dir_idx(1)` before first use in each loop, so there is no cross-iteration contamination. Cylindrical/geometric flux updates that were switched to `idx1` also remain consistent with the existing `dir_idx`/`dir_flg` conventions.

No functional issues spotted; this refactor looks safe. A sketch of the pattern follows.

Also applies to: 2868-2896, 2956-2973, 2990-3014, 3050-3086, 3096-3376
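For readers skimming the diff, a hypothetical fragment of the pattern being described (the loop bounds `is1`/`is2`/`is3` and the macro arguments are assumptions, not the PR's exact code):

```fortran
! Hypothetical fragment: cache the sweep's normal-velocity index once per
! iteration and permute component indices through dir_idx.
$:GPU_PARALLEL_LOOP(collapse=3, private='[idx1, idxi, i]')
do l = is3%beg, is3%end
  do k = is2%beg, is2%end
    do j = is1%beg, is1%end
      idx1 = dir_idx(1)        ! normal direction for this sweep
      do i = 1, num_dims
        idxi = dir_idx(i)      ! permuted component index
        ! ... build L/R states and fluxes using idx1/idxi ...
      end do
    end do
  end do
end do
```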
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- .github/workflows/phoenix/submit-bench.sh (1 hunks)
- src/simulation/m_riemann_solvers.fpp (10 hunks)
✅ Files skipped from review due to trivial changes (1)
- .github/workflows/phoenix/submit-bench.sh
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (11)
- GitHub Check: Coverage Test on CodeCov
- GitHub Check: Code Cleanliness Check
- GitHub Check: Github (ubuntu, no-mpi, single, no-debug, false)
- GitHub Check: Github (macos, mpi, debug, false)
- GitHub Check: Github (macos, mpi, no-debug, false)
- GitHub Check: Github (ubuntu, mpi, debug, true)
- GitHub Check: Github (ubuntu, mpi, debug, false)
- GitHub Check: Github (ubuntu, mpi, no-debug, false)
- GitHub Check: Github (ubuntu, mpi, no-debug, true)
- GitHub Check: cubic · AI code reviewer
- GitHub Check: Build & Publish
```diff
      nbub_L = qL_prim_rs${XYZ}$_vf(j, k, l, n_idx)
      nbub_R = qR_prim_rs${XYZ}$_vf(j + 1, k, l, n_idx)
  else
-     nbub_L_denom = 0._wp
-     nbub_R_denom = 0._wp
+     nbub_L = 0._wp
+     nbub_R = 0._wp
      $:GPU_LOOP(parallelism='[seq]')
      do i = 1, nb
-         nbub_L_denom = nbub_L_denom + (R0_L(i)**3._wp)*weight(i)
-         nbub_R_denom = nbub_R_denom + (R0_R(i)**3._wp)*weight(i)
+         nbub_L = nbub_L + (R0_L(i)**3._wp)*weight(i)
+         nbub_R = nbub_R + (R0_R(i)**3._wp)*weight(i)
      end do
-     nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/nbub_L_denom
-     nbub_R = (3._wp/(4._wp*pi))*qR_prim_rs${XYZ}$_vf(j + 1, k, l, E_idx + num_fluids)/nbub_R_denom
+     nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/nbub_L
+     nbub_R = (3._wp/(4._wp*pi))*qR_prim_rs${XYZ}$_vf(j + 1, k, l, E_idx + num_fluids)/nbub_R
  end if
```
Guard division by `nbub_L`/`nbub_R` when reconstructing number density

In the ME2+bubbles `avg_state == 2` path, `nbub_L` and `nbub_R` are computed via a weighted sum of R0^3 and then used as denominators:

```fortran
nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/nbub_L
nbub_R = (3._wp/(4._wp*pi))*qR_prim_rs${XYZ}$_vf(j + 1, k, l, E_idx + num_fluids)/nbub_R
```

If the R0^3 * weight sums are zero or extremely small, this will generate NaNs/Inf and destabilize the solver.

Recommend guarding with `max(..., sgm_eps)` (as done elsewhere in this file for similar normalizations), e.g.:

```diff
- nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/nbub_L
- nbub_R = (3._wp/(4._wp*pi))*qR_prim_rs${XYZ}$_vf(j + 1, k, l, E_idx + num_fluids)/nbub_R
+ nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/max(nbub_L, sgm_eps)
+ nbub_R = (3._wp/(4._wp*pi))*qR_prim_rs${XYZ}$_vf(j + 1, k, l, E_idx + num_fluids)/max(nbub_R, sgm_eps)
```
+ nbub_R = (3._wp/(4._wp*pi))*qR_prim_rs${XYZ}$_vf(j + 1, k, l, E_idx + num_fluids)/max(nbub_R, sgm_eps)🤖 Prompt for AI Agents
In src/simulation/m_riemann_solvers.fpp around lines 2772 to 2785, the code
divides by nbub_L and nbub_R when reconstructing number density in the
avg_state==2 path, which can be zero or tiny; change the denominators to use a
guarded value (e.g. denom_L = max(nbub_L, sgm_eps) and denom_R = max(nbub_R,
sgm_eps) using the same sgm_eps used elsewhere in this file) and divide by those
guarded values so the assignments use denom_L and denom_R to avoid NaN/inf.
1 issue found across 2 files (reviewed changes from recent commits).
Prompt for AI agents (all 1 issues)
Understand the root cause of the following 1 issues and fix them.
```xml
<file name="src/simulation/m_riemann_solvers.fpp">
  <violation number="1" location="src/simulation/m_riemann_solvers.fpp:2783">
    `nbub_L`/`nbub_R` store the weighted \(R_0^3\) sums and can legitimately be zero when no bubbles exist, so dividing by them directly will produce NaN/Inf and destabilize the ME2+bubbles solver. Clamp those denominators with `sgm_eps` (as elsewhere in this file) before performing the normalization.
  </violation>
</file>
```
```diff
- nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/nbub_L_denom
- nbub_R = (3._wp/(4._wp*pi))*qR_prim_rs${XYZ}$_vf(j + 1, k, l, E_idx + num_fluids)/nbub_R_denom
+ nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/nbub_L
```
nbub_L/nbub_R store the weighted (R_0^3) sums and can legitimately be zero when no bubbles exist, so dividing by them directly will produce NaN/Inf and destabilize the ME2+bubbles solver. Clamp those denominators with sgm_eps (as elsewhere in this file) before performing the normalization.
Prompt for AI agents

Address the following comment on src/simulation/m_riemann_solvers.fpp at line 2783:

<comment>`nbub_L`/`nbub_R` store the weighted \(R_0^3\) sums and can legitimately be zero when no bubbles exist, so dividing by them directly will produce NaN/Inf and destabilize the ME2+bubbles solver. Clamp those denominators with `sgm_eps` (as elsewhere in this file) before performing the normalization.</comment>

<file context>
```diff
@@ -2771,15 +2772,16 @@ contains
-            nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/nbub_L_denom
-            nbub_R = (3._wp/(4._wp*pi))*qR_prim_rs${XYZ}$_vf(j + 1, k, l, E_idx + num_fluids)/nbub_R_denom
+
+            nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/nbub_L
+            nbub_R = (3._wp/(4._wp*pi))*qR_prim_rs${XYZ}$_vf(j + 1, k, l, E_idx + num_fluids)/nbub_R
             end if
```
</file context>
Suggested change:

```diff
- nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/nbub_L
+ nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/max(nbub_L, sgm_eps)
```
CodeAnt AI is running Incremental review
CodeAnt AI Incremental review completed.
Actionable comments posted: 0
♻️ Duplicate comments (1)
src/simulation/m_riemann_solvers.fpp (1)
2760-2775: Guard division by `nbub_L`/`nbub_R` when reconstructing bubble number density

In the `avg_state == 2` ME2+bubbles path (non-qbmm, `adv_n == .false.`), `nbub_L` and `nbub_R` are built as weighted sums of `R0^3` and then used directly as denominators:

```fortran
nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/nbub_L
nbub_R = (3._wp/(4._wp*pi))*qR_prim_rs${XYZ}$_vf(j + 1, k, l, E_idx + num_fluids)/nbub_R
```

If the `R0^3 * weight` sums vanish or are extremely small (e.g., no bubbles in a cell), this will generate Inf/NaN and can destabilize the ME2+bubbles solver.

Clamp the denominators with `sgm_eps` (as elsewhere in this file for similar normalizations), e.g.:

```diff
  do i = 1, nb
      nbub_L = nbub_L + (R0_L(i)**3._wp)*weight(i)
      nbub_R = nbub_R + (R0_R(i)**3._wp)*weight(i)
  end do

- nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/nbub_L
- nbub_R = (3._wp/(4._wp*pi))*qR_prim_rs${XYZ}$_vf(j + 1, k, l, E_idx + num_fluids)/nbub_R
+ nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/max(nbub_L, sgm_eps)
+ nbub_R = (3._wp/(4._wp*pi))*qR_prim_rs${XYZ}$_vf(j + 1, k, l, E_idx + num_fluids)/max(nbub_R, sgm_eps)
```

This keeps the normalization consistent with the rest of the module and prevents NaN/Inf when the moment sum is zero.
🧹 Nitpick comments (2)
src/simulation/m_riemann_solvers.fpp (2)
2930-2976: Bubble-pressure correction for `avg_state == 2` is numerically safe

The new `avg_state == 2` block that adjusts `pres_L`/`pres_R` using `PbwR3*bar`, `R3*bar`, and `R3V2*bar` is guarded by checks on `alpha_(L/R)(num_fluids)` and `R3(L/R)bar < small_alf`, so the divisions by `R3Lbar`/`R3Rbar` only occur when those moments are well-resolved. That avoids obvious divide-by-zero issues and keeps the ptilde-style correction well behaved.

If you touch this again, consider pulling the repeated pattern into a small helper to reduce duplication across left/right; a sketch follows.
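Purely as illustration of that refactor, the left/right duplication could be folded into a helper along these lines. The subroutine name, argument list, and the exact correction formula are assumptions; only the guard structure mirrors the block described above, and `small_alf` is assumed to be in module scope:

```fortran
!> Hypothetical helper applying the avg_state == 2 ptilde-style pressure
!! correction for one side (left or right). Not the PR's code; the formula
!! is a placeholder, only the guard structure mirrors the reviewed block.
subroutine s_adjust_bubble_pressure(pres_K, rho_K, alpha_K_nf, PbwR3bar, R3bar, R3V2bar)
  real(wp), intent(inout) :: pres_K                 !! side pressure to adjust
  real(wp), intent(in) :: rho_K, alpha_K_nf         !! density, bubble void fraction
  real(wp), intent(in) :: PbwR3bar, R3bar, R3V2bar  !! bubble moments

  ! Guard on the void fraction and the R3 moment, as in the reviewed block,
  ! so the divisions by R3bar only happen when the moments are well resolved.
  if (alpha_K_nf > small_alf .and. R3bar >= small_alf) then
    pres_K = pres_K - alpha_K_nf*(PbwR3bar/R3bar - rho_K*R3V2bar/R3bar)
  end if
end subroutine s_adjust_bubble_pressure
```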
3334-3551: HLLC 5-equation refactor maintains structure but remains hard to maintain

The refactored 5-equation HLLC path (wave-speed 2 logic, `xi_*` factors, fluxes for mass/momentum/energy/species, and cylindrical `flux_gsrc_rs`) still matches the standard Einfeldt/Toro-style HLLC structure and consistently uses `dir_idx(1)` and `xi_L`/`xi_R` across all fluxes. I don't see new correctness issues here.

That said, this block is extremely long and duplicated in spirit across ME3/ME4/ME2; extracting common helpers for computing `s_L`/`s_R`/`s_S`, `xi_*`, and generic flux assembly would make future changes less error-prone. A sketch of such a helper is below.
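For instance, a minimal hypothetical helper for the shared wave-speed estimates (textbook Davis/Toro forms; the name and flat argument list are invented, and a real version would bundle the inputs into a params struct to respect the repo's ≤ 6-argument rule):

```fortran
!> Hypothetical shared helper: standard HLLC wave-speed estimates.
!! Not code from this PR; the interface is invented for illustration.
subroutine s_compute_hllc_wave_speeds(rho_L, rho_R, vel_L, vel_R, &
                                      pres_L, pres_R, c_L, c_R, s_L, s_R, s_S)
  real(wp), intent(in) :: rho_L, rho_R, vel_L, vel_R, pres_L, pres_R, c_L, c_R
  real(wp), intent(out) :: s_L, s_R, s_S

  ! Davis-type bounding wave speeds
  s_L = min(vel_L - c_L, vel_R - c_R)
  s_R = max(vel_R + c_R, vel_L + c_L)

  ! Standard HLLC contact (star) speed
  s_S = (pres_R - pres_L + rho_L*vel_L*(s_L - vel_L) &
         - rho_R*vel_R*(s_R - vel_R))/(rho_L*(s_L - vel_L) - rho_R*(s_R - vel_R))
end subroutine s_compute_hllc_wave_speeds
```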
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/simulation/m_riemann_solvers.fpp (26 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (18)
- GitHub Check: Code Cleanliness Check
- GitHub Check: cubic · AI code reviewer
- GitHub Check: Github (ubuntu, mpi, debug, false)
- GitHub Check: Github (macos, mpi, debug, false)
- GitHub Check: Github (ubuntu, no-mpi, single, no-debug, false)
- GitHub Check: Github (ubuntu, mpi, no-debug, false)
- GitHub Check: Github (macos, mpi, no-debug, false)
- GitHub Check: Github (ubuntu, mpi, no-debug, true)
- GitHub Check: Github (ubuntu, mpi, debug, true)
- GitHub Check: Self Hosted (gpu, omp, gt)
- GitHub Check: Self Hosted (gpu, acc, gt)
- GitHub Check: Georgia Tech | Phoenix (NVHPC) (gpu)
- GitHub Check: Georgia Tech | Phoenix (NVHPC) (gpu)
- GitHub Check: Oak Ridge | Frontier (CCE) (gpu)
- GitHub Check: Oak Ridge | Frontier (CCE) (gpu)
- GitHub Check: Georgia Tech | Phoenix (NVHPC) (cpu)
- GitHub Check: Coverage Test on CodeCov
- GitHub Check: Build & Publish
🔇 Additional comments (1)
src/simulation/m_riemann_solvers.fpp (1)
1952-1958: Bubble-related locals and GPU private lists look consistent

The new bubble scalars (`nbub_L/R`, `ptilde_L/R`, `PbwR3*`, `R3*`) are correctly declared once and added to the relevant `GPU_PARALLEL_LOOP` private lists in the ME3, ME4, and ME2+bubbles paths, so each thread gets its own copy and there is no shared-state race. I don't see issues with these additions.

Also applies to: 1997-1997, 2405-2405, 2649-2649
CodeAnt AI is running Incremental review
CodeAnt AI Incremental review completed.
Actionable comments posted: 0
♻️ Duplicate comments (1)
src/simulation/m_riemann_solvers.fpp (1)
2651-2837: Guard `nbub_L`/`nbub_R` normalization in ME2+bubbles `avg_state == 2` and confirm new p̃/flux logic

- In the ME2 + `bubbles_euler`/`avg_state == 2` path, `nbub_L`/`nbub_R` are first accumulated as \(\sum_i R_{0,i}^3\,\mathrm{weight}_i\) and then used as denominators:

  ```fortran
  nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/nbub_L
  nbub_R = (3._wp/(4._wp*pi))*qR_prim_rs${XYZ}$_vf(j + 1, k, l, E_idx + num_fluids)/nbub_R
  ```

  If the weighted `R0^3` sum vanishes or becomes tiny (no bubbles or a degenerate distribution), this produces NaN/Inf and can destabilize the ME2+bubbles solver. The same concern was raised in a previous review.

- Recommend clamping the denominators with `sgm_eps` (as done elsewhere for similar normalizations) so that the number-density reconstruction stays bounded, e.g.:

  ```diff
  - nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/nbub_L
  - nbub_R = (3._wp/(4._wp*pi))*qR_prim_rs${XYZ}$_vf(j + 1, k, l, E_idx + num_fluids)/nbub_R
  + nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/max(nbub_L, sgm_eps)
  + nbub_R = (3._wp/(4._wp*pi))*qR_prim_rs${XYZ}$_vf(j + 1, k, l, E_idx + num_fluids)/max(nbub_R, sgm_eps)
  ```

- The new `avg_state == 2` pressure adjustments using `PbwR3*`/`R3*` and `R3V2*`/`R3*` already guard on `alpha_*(num_fluids)` and `R3* < small_alf`, so those divisions look numerically safe.
- The updated bubble-variable advection fluxes that now scale by `nbub_L`/`nbub_R` in ME2 and ME4 are consistent with advecting per-bubble quantities, but once the above normalization is clamped, it would be good to re-run the ME2+bubbles regression cases to confirm no unintended change in effective bubble-number transport.

Also applies to: 2933-3055
🧹 Nitpick comments (1)
src/simulation/m_riemann_solvers.fpp (1)
1954-2355: HLLC ME3 directional-index refactor and new bubble accumulators
- The additions of `nbub_L`, `nbub_R`, `PbwR3Lbar`, `PbwR3Rbar` and the extended GPU `private` list are local to `s_hllc_riemann_solver` and not used in the ME3 branch itself, so they won't change current ME3 behavior. No issues there.
- The rewrites of the elastic-wave contact speed `s_S`, `xi_L`, `xi_R`, `vel_K_Star`, and the associated mass/momentum/energy fluxes to use `vel_L(dir_idx(1))`/`vel_R(dir_idx(1))` and `dir_idx(i)` are algebraically consistent with the existing HLL/HLLC patterns and should preserve correctness across sweep directions.
- In the elastic energy-flux correction (`flux_ene_e`), outer factors now use `vel_L(dir_idx(i))`/`vel_R(dir_idx(i))` but the inner `(s_S - vel_L(i))` and `(s_L - vel_L(i))` terms still index with plain `i`. If `i` is meant to be the permuted component index (via `dir_idx`), consider switching those to `vel_L(dir_idx(1))` or `vel_L(dir_idx(i))` for full directional consistency, or confirm that the current mixed indexing is intentional; see the hypothetical fragment below.
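To make the last bullet concrete, a hypothetical fragment of the flagged pattern (the enclosing expression is invented; only the indexing mismatch is the point):

```fortran
! Hypothetical: the outer velocity factor is permuted through dir_idx, while
! the inner wave-speed differences still use the raw component index i.
flux_ene_e = vel_L(dir_idx(i))*(s_S - vel_L(i))*(s_L - vel_L(i))
! A fully directionally consistent form would read, e.g.:
! flux_ene_e = vel_L(dir_idx(i))*(s_S - vel_L(dir_idx(1)))*(s_L - vel_L(dir_idx(1)))
```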
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/simulation/m_riemann_solvers.fpp(26 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{fpp,f90}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
**/*.{fpp,f90}: Use 2-space indentation; continuation lines align beneath &
Use lower-case keywords and intrinsics (do, end subroutine, etc.)
Name modules with m_ pattern (e.g., m_transport)
Name public subroutines with s_ pattern (e.g., s_compute_flux)
Name public functions with f_ pattern
Keep subroutine size ≤ 500 lines, helper subroutines ≤ 150 lines, functions ≤ 100 lines, files ≤ 1000 lines
Limit routine arguments to ≤ 6; use derived-type params struct if more are needed
Forbid goto statements (except in legacy code), COMMON blocks, and save globals
Every argument must have explicit intent; use dimension/allocatable/pointer as appropriate
Call s_mpi_abort() for errors, never use stop or error stop
**/*.{fpp,f90}: Indent 2 spaces; continuation lines align under `&`
Use lower-case keywords and intrinsics (`do`, `end subroutine`, etc.)
Name modules with `m_<feature>` prefix (e.g., `m_transport`)
Name public subroutines as `s_<verb>_<noun>` (e.g., `s_compute_flux`) and functions as `f_<verb>_<noun>`
Keep private helpers in the module; avoid nested procedures
Enforce size limits: subroutine ≤ 500 lines, helper ≤ 150, function ≤ 100, module/file ≤ 1000
Limit subroutines to ≤ 6 arguments; otherwise pass a derived-type 'params' struct
Avoid `goto` statements (except unavoidable legacy); avoid global state (COMMON, `save`)
Every variable must have `intent(in|out|inout)` specification and appropriate `dimension`/`allocatable`/`pointer`
Use `s_mpi_abort(<msg>)` for error termination instead of `stop`
Use `!>` style documentation for header comments; follow Doxygen Fortran format with `!! @param` and `!! @return` tags
Use `implicit none` statement in all modules
Use `private` declaration followed by explicit `public` exports in modules
Use derived types with pointers for encapsulation (e.g., `pointer, dimension(:,:,:) => null()`)
Use `pure` and `elemental` attributes for side-effect-free functions; combine them for array ...
Files:
src/simulation/m_riemann_solvers.fpp
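To ground several of these conventions in one place, here is a minimal hypothetical module (not part of MFC; names and the `wp` stand-in are invented for illustration):

```fortran
#:include 'macros.fpp'

!> @brief The m_example module illustrates the conventions above: m_ prefix,
!! implicit none, private-by-default, s_ subroutine naming, explicit intents,
!! and !> / !! Doxygen-style docs. Illustrative only, not MFC code.
module m_example

  implicit none

  ! Stand-in for the project's working-precision kind parameter.
  integer, parameter :: wp = kind(1.0d0)

  private; public :: s_compute_sum

contains

  !> Computes the elementwise sum of two fields.
  !! @param a First operand
  !! @param b Second operand
  !! @param c Result array, c = a + b
  subroutine s_compute_sum(a, b, c)
    real(wp), dimension(:), intent(in) :: a, b
    real(wp), dimension(:), intent(out) :: c

    c = a + b
  end subroutine s_compute_sum

end module m_example
```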
src/simulation/**/*.{fpp,f90}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
src/simulation/**/*.{fpp,f90}: Wrap tight GPU loops with !$acc parallel loop gang vector default(present) reduction(...); add collapse(n) when safe; declare loop-local variables with private(...)
Allocate large GPU arrays with managed memory or move them into persistent !$acc enter data regions at start-up
Avoid stop/error stop inside GPU device code
Ensure GPU code compiles with Cray ftn, NVIDIA nvfortran, GNU gfortran, and Intel ifx/ifort compilers
src/simulation/**/*.{fpp,f90}: Mark GPU-callable helpers with `$:GPU_ROUTINE(function_name='...', parallelism='[seq]')` immediately after declaration
Do not use OpenACC or OpenMP directives directly; use Fypp macros from `src/common/include/parallel_macros.fpp` instead
Wrap tight loops with `$:GPU_PARALLEL_FOR(private='[...]', copy='[...]')` macro; add `collapse=n` for safe nested loop merging
Declare loop-local variables with `private='[...]'` in GPU parallel loop macros
Allocate large arrays with `managed` or move them into a persistent `$:GPU_ENTER_DATA(...)` region at start-up
Do not place `stop` or `error stop` inside device code
Files:
src/simulation/m_riemann_solvers.fpp
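A small hypothetical fragment showing these rules together; `q`, `rhs`, the bounds `m`/`n`/`p`, and `tmp` are invented names:

```fortran
! Hypothetical tight loop using the Fypp macro instead of raw OpenACC/OpenMP.
! tmp is loop-local, so it goes in the private list; collapse=3 merges the
! safely-nestable loops.
$:GPU_PARALLEL_FOR(private='[tmp]', collapse=3)
do l = 0, p
  do k = 0, n
    do j = 0, m
      tmp = q(j, k, l)
      rhs(j, k, l) = rhs(j, k, l) + tmp*tmp
    end do
  end do
end do
```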
src/**/*.fpp
📄 CodeRabbit inference engine (.cursor/rules/mfc-agent-rules.mdc)
src/**/*.fpp: Use `.fpp` file extension for Fypp preprocessed files; CMake transpiles them to `.f90`
Start module files with Fypp include for macros: `#:include 'macros.fpp'`
Use the fypp `ASSERT` macro for validating conditions: `@:ASSERT(predicate, message)`
Use fypp macro `@:ALLOCATE(var1, var2)` for device-aware allocation instead of standard Fortran `allocate`
Use fypp macro `@:DEALLOCATE(var1, var2)` for device-aware deallocation instead of standard Fortran `deallocate`
Files:
src/simulation/m_riemann_solvers.fpp
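A short hypothetical fragment combining these Fypp conventions; `buf` and `nx` are invented, and the dimension syntax inside `@:ALLOCATE` is an assumption based on the macro's stated purpose:

```fortran
#:include 'macros.fpp'

! Hypothetical fragment: validate input, then do a device-aware
! allocate/deallocate pair via the Fypp macros.
@:ASSERT(nx > 0, "nx must be positive")
@:ALLOCATE(buf(1:nx))
! ... use buf ...
@:DEALLOCATE(buf)
```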
🧠 Learnings (11)
📚 Learning: 2025-11-24T21:50:16.684Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.684Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Wrap tight GPU loops with !$acc parallel loop gang vector default(present) reduction(...); add collapse(n) when safe; declare loop-local variables with private(...)
Applied to files:
src/simulation/m_riemann_solvers.fpp
📚 Learning: 2025-11-24T21:50:46.879Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.879Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Wrap tight loops with `$:GPU_PARALLEL_FOR(private='[...]', copy='[...]')` macro; add `collapse=n` for safe nested loop merging
Applied to files:
src/simulation/m_riemann_solvers.fpp
📚 Learning: 2025-11-24T21:50:16.684Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.684Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Ensure GPU code compiles with Cray ftn, NVIDIA nvfortran, GNU gfortran, and Intel ifx/ifort compilers
Applied to files:
src/simulation/m_riemann_solvers.fpp
📚 Learning: 2025-11-24T21:50:46.879Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.879Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Mark GPU-callable helpers with `$:GPU_ROUTINE(function_name='...', parallelism='[seq]')` immediately after declaration
Applied to files:
src/simulation/m_riemann_solvers.fpp
📚 Learning: 2025-11-24T21:50:46.879Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.879Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Declare loop-local variables with `private='[...]'` in GPU parallel loop macros
Applied to files:
src/simulation/m_riemann_solvers.fpp
📚 Learning: 2025-11-24T21:50:46.879Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.879Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Do not use OpenACC or OpenMP directives directly; use Fypp macros from `src/common/include/parallel_macros.fpp` instead
Applied to files:
src/simulation/m_riemann_solvers.fpp
📚 Learning: 2025-11-24T21:50:16.684Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.684Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Avoid stop/error stop inside GPU device code
Applied to files:
src/simulation/m_riemann_solvers.fpp
📚 Learning: 2025-11-24T21:50:16.684Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.684Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Allocate large GPU arrays with managed memory or move them into persistent !$acc enter data regions at start-up
Applied to files:
src/simulation/m_riemann_solvers.fpp
📚 Learning: 2025-11-24T21:50:46.879Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.879Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Allocate large arrays with `managed` or move them into a persistent `$:GPU_ENTER_DATA(...)` region at start-up
Applied to files:
src/simulation/m_riemann_solvers.fpp
📚 Learning: 2025-11-24T21:50:46.879Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.879Z
Learning: Applies to **/*.{fpp,f90} : Use `private` declaration followed by explicit `public` exports in modules
Applied to files:
src/simulation/m_riemann_solvers.fpp
📚 Learning: 2025-11-24T21:50:46.879Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.879Z
Learning: Applies to **/*.{fpp,f90} : Favor array operations over explicit loops when possible; use `collapse=N` directive to optimize nested loops
Applied to files:
src/simulation/m_riemann_solvers.fpp
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (20)
- GitHub Check: Georgia Tech | Phoenix (NVHPC) (gpu)
- GitHub Check: Oak Ridge | Frontier (CCE) (gpu)
- GitHub Check: Georgia Tech | Phoenix (NVHPC) (gpu)
- GitHub Check: cubic · AI code reviewer
- GitHub Check: Code Cleanliness Check
- GitHub Check: Github (ubuntu, no-mpi, single, no-debug, false)
- GitHub Check: Github (ubuntu, mpi, no-debug, false)
- GitHub Check: Github (macos, mpi, no-debug, false)
- GitHub Check: Github (macos, mpi, debug, false)
- GitHub Check: Github (ubuntu, mpi, debug, false)
- GitHub Check: Github (ubuntu, mpi, no-debug, true)
- GitHub Check: Self Hosted (cpu, none, frontier)
- GitHub Check: Self Hosted (gpu, omp, gt)
- GitHub Check: Self Hosted (gpu, omp, frontier)
- GitHub Check: Github (ubuntu, mpi, debug, true)
- GitHub Check: Self Hosted (gpu, acc, gt)
- GitHub Check: Self Hosted (cpu, none, gt)
- GitHub Check: Self Hosted (gpu, acc, frontier)
- GitHub Check: Coverage Test on CodeCov
- GitHub Check: Build & Publish
🔇 Additional comments (1)
src/simulation/m_riemann_solvers.fpp (1)
3337-3555: 5‑equation HLLC directional indexing and geometrical source fluxes
- The replacements of raw component indices with `dir_idx(1)`/`dir_idx(i)` and `dir_flg(dir_idx(i))` in wave speeds, mass/momentum/energy fluxes, volume-fraction/chemistry fluxes, and cylindrical/3D geometric source terms appear internally consistent: normal contributions always use `dir_idx(1)`, tangential ones use `dir_idx(2:3)`, and the same pattern is mirrored in `flux_gsrc_rs${XYZ}`.
- Low-Mach corrections and pcorr terms are still multiplied by the appropriate directional flags, so the refactor should preserve existing behavior across sweeps.
User description
Vanilla CI test to see what passes/fails
PR Type
Documentation
Description
Fix documentation comment formatting in m_bubbles.fpp
Remove redundant article "the" from module brief description
Diagram Walkthrough
File Walkthrough
m_bubbles.fpp
Fix module documentation comment formatting (src/simulation/m_bubbles.fpp)
CodeAnt-AI Description
Align Riemann solver direction handling and bubble pressure closure
What Changed
Impact
✅ Accurate directional fluxes
✅ Stable bubble pressure closures
✅ Longer NVHPC benchmarks

💡 Usage Guide
Checking Your Pull Request
Every time you make a pull request, our system automatically looks through it. We check for security issues, mistakes in how you're setting up your infrastructure, and common code problems. We do this to make sure your changes are solid and won't cause any trouble later.
Talking to CodeAnt AI
Got a question or need a hand with something in your pull request? You can easily get in touch with CodeAnt AI right here. Just type the following in a comment on your pull request, and replace "Your question here" with whatever you want to ask:
This lets you have a chat with CodeAnt AI about your pull request, making it easier to understand and improve your code.
Example
Preserve Org Learnings with CodeAnt
You can record team preferences so CodeAnt AI applies them in future reviews. Reply directly to the specific CodeAnt AI suggestion (in the same thread) and replace "Your feedback here" with your input:
This helps CodeAnt AI learn and adapt to your team's coding style and standards.
Example
Retrigger review
Ask CodeAnt AI to review the PR again, by typing:
Check Your Repository Health
To analyze the health of your code repository, visit our dashboard at https://app.codeant.ai. This tool helps you identify potential issues and areas for improvement in your codebase, ensuring your repository maintains high standards of code health.
Summary by CodeRabbit
Documentation
Chores
Refactor
✏️ Tip: You can customize this high-level summary in your review settings.