diff --git a/framework/doc/content/source/multiapps/TransientMultiApp.md b/framework/doc/content/source/multiapps/TransientMultiApp.md index 4206225b922e..3e98d963e727 100644 --- a/framework/doc/content/source/multiapps/TransientMultiApp.md +++ b/framework/doc/content/source/multiapps/TransientMultiApp.md @@ -14,6 +14,45 @@ sub-apps is utilized. The ability to do perform sub-cycling, which allows the su to perform multiple time steps per execution may be enabled using the [!param](/MultiApps/TransientMultiApp/sub_cycling) parameter. +## Time state of TransientMultiApps + +`TransientMultiApps` are "auto-advanced" by default whenever we are not doing +Picard iterations between the master and sub-application. This means that the +`Transient::endStep` and `Transient::postStep` methods of the sub-application's +executioner are called, regardless of whether the sub-application solve fails or +not. The `endStep` method increments the time and also performs +`EXEC_TIMESTEP_END` output. When sub-applications are auto-advanced, their +`endStep` call happens before the master application's `endStep` call. This has +the important benefit that when master application output occurs, the +sub-application's and master application's time states are the same, which +enables MOOSE restart/recovery capability. + +## Handling sub-application solve failures + +As noted above, the default behavior when running `TransientMultiApps` is that +their time state is incremented, i.e. they are "auto-advanced", regardless of +whether their solve is actually successful. This is undesirable behavior, but we +believe that the syncing of master and sub-application states, under normal +operation, to enable correct [checkpoint](/Checkpoint.md) output is a good +trade-off. Given the constraints of the chosen design, there are still multiple ways to turn a failed +sub-application solve from a warning into an exception that will force corrective +behavior in either the sub- or master-application: +1. 
The user can set `auto_advance = false` in the `Executioner` block of the + master application. This will cause the master application to immediately cut + its time-step when the sub-application fails. **However**, setting this + parameter to `false` also eliminates the possibility of doing restart/recover + because the master and sub will be out of sync if/when checkpoint output occurs. +2. The user can set `catch_up = true` in the `TransientMultiApp` block. This + will cause the sub-application to try to catch up to the master application + after a failed sub-app solve. If catch-up is unsuccessful, then MOOSE + registers this as a true failure of the solve, and the master dt will *then* + get cut. This option has the advantage of keeping the master and sub + transient states in sync, enabling accurate restart/recover data. + +In general, if the user wants sub-application failed solves to be treated as +exceptions, we recommend option 2 over option 1. + ## Example Input File Syntax The following input file shows the creation of a TransientMultiApp object with the time step diff --git a/framework/doc/content/source/postprocessors/TimePostprocessor.md b/framework/doc/content/source/postprocessors/TimePostprocessor.md new file mode 100644 index 000000000000..7fb330705281 --- /dev/null +++ b/framework/doc/content/source/postprocessors/TimePostprocessor.md @@ -0,0 +1,9 @@ +# TimePostprocessor + +!syntax description /Postprocessors/TimePostprocessor + +!syntax parameters /Postprocessors/TimePostprocessor + +!syntax inputs /Postprocessors/TimePostprocessor + +!syntax children /Postprocessors/TimePostprocessor diff --git a/framework/include/executioners/PicardSolve.h b/framework/include/executioners/PicardSolve.h index a2aeb8eaaec8..8b7ec92abbff 100644 --- a/framework/include/executioners/PicardSolve.h +++ b/framework/include/executioners/PicardSolve.h @@ -76,6 +76,11 @@ class PicardSolve : public SolveObject _picard_self_relaxed_variables = vars; } + /** + * 
Whether sub-applications are automatically advanced no matter what happens during their solves + */ + bool autoAdvance() const; + protected: /** * Perform one Picard iteration or a full solve. @@ -162,4 +167,12 @@ class PicardSolve : public SolveObject Real _previous_entering_time; const std::string _solve_message; + + /// Whether the user has set the auto_advance parameter for handling advancement of + /// sub-applications in multi-app contexts + const bool _auto_advance_set_by_user; + + /// The value of auto_advance set by the user for handling advancement of sub-applications in + /// multi-app contexts + const bool _auto_advance_user_value; }; diff --git a/framework/include/multiapps/MultiApp.h b/framework/include/multiapps/MultiApp.h index 47ee6ee91b11..b119894d6ef5 100644 --- a/framework/include/multiapps/MultiApp.h +++ b/framework/include/multiapps/MultiApp.h @@ -117,9 +117,11 @@ class MultiApp : public MooseObject, * Calls multi-apps executioners' endStep and postStep methods which creates output and advances * time (not the time step; see incrementTStep()) among other things. This method is only called * for Picard calculations because for loosely coupled calculations the executioners' endStep and - * postStep methods are called from solveStep(). + * postStep methods are called from solveStep(). 
This may be called with the optional flag \p + * recurse_through_multiapp_levels which may be useful if this method is being called for the + * *final* time of program execution */ - virtual void finishStep() {} + virtual void finishStep(bool /*recurse_through_multiapp_levels*/ = false) {} /** * Save off the state of every Sub App diff --git a/framework/include/multiapps/TransientMultiApp.h b/framework/include/multiapps/TransientMultiApp.h index d8e3bb9d271e..4c5dab7e210f 100644 --- a/framework/include/multiapps/TransientMultiApp.h +++ b/framework/include/multiapps/TransientMultiApp.h @@ -41,7 +41,7 @@ class TransientMultiApp : public MultiApp virtual void incrementTStep(Real target_time) override; - virtual void finishStep() override; + virtual void finishStep(bool recurse_through_multiapp_levels = false) override; virtual bool needsRestoration() override; diff --git a/framework/include/postprocessors/TimePostprocessor.h b/framework/include/postprocessors/TimePostprocessor.h new file mode 100644 index 000000000000..0db229e2d59a --- /dev/null +++ b/framework/include/postprocessors/TimePostprocessor.h @@ -0,0 +1,31 @@ +//* This file is part of the MOOSE framework +//* https://www.mooseframework.org +//* +//* All rights reserved, see COPYRIGHT for full restrictions +//* https://github.com/idaholab/moose/blob/master/COPYRIGHT +//* +//* Licensed under LGPL 2.1, please see LICENSE for details +//* https://www.gnu.org/licenses/lgpl-2.1.html + +#pragma once + +#include "GeneralPostprocessor.h" + +/** + * Postprocessor that returns the current time + */ +class TimePostprocessor : public GeneralPostprocessor +{ +public: + static InputParameters validParams(); + + TimePostprocessor(const InputParameters & parameters); + + void initialize() override {} + void execute() override {} + + Real getValue() override; + +protected: + const FEProblemBase & _feproblem; +}; diff --git a/framework/include/problems/FEProblemBase.h b/framework/include/problems/FEProblemBase.h index 
3a4f6bc8cd3f..fcd5ffe82cb2 100644 --- a/framework/include/problems/FEProblemBase.h +++ b/framework/include/problems/FEProblemBase.h @@ -1076,9 +1076,10 @@ class FEProblemBase : public SubProblem, public Restartable } /** - * Finish the MultiApp time step (endStep, postStep) associated with the ExecFlagType + * Finish the MultiApp time step (endStep, postStep) associated with the ExecFlagType. Optionally + * recurse through all multi-app levels */ - void finishMultiAppStep(ExecFlagType type); + void finishMultiAppStep(ExecFlagType type, bool recurse_through_multiapp_levels = false); /** * Backup the MultiApps associated with the ExecFlagType diff --git a/framework/src/executioners/PicardSolve.C b/framework/src/executioners/PicardSolve.C index 76873c022370..b441161d528d 100644 --- a/framework/src/executioners/PicardSolve.C +++ b/framework/src/executioners/PicardSolve.C @@ -99,6 +99,9 @@ PicardSolve::validParams() params.addParam<bool>("update_xfem_at_timestep_begin", false, "Should XFEM update the mesh at the beginning of the timestep"); + params.addParam<bool>("auto_advance", + "Whether to automatically advance sub-applications regardless of whether " + "their solve converges."); return params; } @@ -130,7 +133,9 @@ PicardSolve::PicardSolve(Executioner * ex) _xfem_update_count(0), _xfem_repeat_step(false), _previous_entering_time(_problem.time() - 1), - _solve_message(_problem.shouldSolve() ? "Solve Converged!" : "Solve Skipped!") + _solve_message(_problem.shouldSolve() ? "Solve Converged!" : "Solve Skipped!"), + _auto_advance_set_by_user(isParamValid("auto_advance")), + _auto_advance_user_value(_auto_advance_set_by_user ? 
getParam<bool>("auto_advance") : true) { if (_relax_factor != 1.0) // Store a copy of the previous solution here @@ -375,6 +380,20 @@ PicardSolve::solve() return converged; } +bool +PicardSolve::autoAdvance() const +{ + bool auto_advance = !(_has_picard_its && _problem.isTransient()); + + if (dynamic_cast<EigenExecutionerBase *>(&_executioner) && _has_picard_its) + auto_advance = true; + + if (_auto_advance_set_by_user) + auto_advance = _auto_advance_user_value; + + return auto_advance; +} + bool PicardSolve::solveStep(Real begin_norm_old, Real & begin_norm, @@ -383,10 +402,7 @@ PicardSolve::solveStep(Real begin_norm_old, bool relax, const std::set<dof_id_type> & relaxed_dofs) { - bool auto_advance = !(_has_picard_its && _problem.isTransient()); - - if (dynamic_cast<EigenExecutionerBase *>(&_executioner) && _has_picard_its) - auto_advance = true; + bool auto_advance = autoAdvance(); _executioner.preSolve(); diff --git a/framework/src/executioners/Transient.C b/framework/src/executioners/Transient.C index b6c60a5d9cba..c96fa7cd232e 100644 --- a/framework/src/executioners/Transient.C +++ b/framework/src/executioners/Transient.C @@ -314,10 +314,18 @@ Transient::execute() if (lastSolveConverged()) { _t_step++; - if (_picard_solve.hasPicardIteration()) + + /* + * Call the multi-app executioners endStep and + * postStep methods when doing Picard or when not automatically advancing sub-applications for + * some other reason. We do not perform these calls for loose-coupling/auto-advancement + * problems because Transient::endStep and Transient::postStep get called from + * TransientMultiApp::solveStep in that case. 
+ */ + if (!_picard_solve.autoAdvance()) { - _problem.finishMultiAppStep(EXEC_TIMESTEP_BEGIN); - _problem.finishMultiAppStep(EXEC_TIMESTEP_END); + _problem.finishMultiAppStep(EXEC_TIMESTEP_BEGIN, /*recurse_through_multiapp_levels=*/true); + _problem.finishMultiAppStep(EXEC_TIMESTEP_END, /*recurse_through_multiapp_levels=*/true); } } @@ -365,11 +373,12 @@ Transient::incrementStepOrReject() /* * Call the multi-app executioners endStep and - * postStep methods when doing Picard. We do not perform these calls for - * loose coupling because Transient::endStep and Transient::postStep get - * called from TransientMultiApp::solveStep in that case. + * postStep methods when doing Picard or when not automatically advancing sub-applications for + * some other reason. We do not perform these calls for loose-coupling/auto-advancement + * problems because Transient::endStep and Transient::postStep get called from + * TransientMultiApp::solveStep in that case. */ - if (_picard_solve.hasPicardIteration()) + if (!_picard_solve.autoAdvance()) { _problem.finishMultiAppStep(EXEC_TIMESTEP_BEGIN); _problem.finishMultiAppStep(EXEC_TIMESTEP_END); diff --git a/framework/src/multiapps/TransientMultiApp.C b/framework/src/multiapps/TransientMultiApp.C index fbee4d0936af..cd434102dc5d 100644 --- a/framework/src/multiapps/TransientMultiApp.C +++ b/framework/src/multiapps/TransientMultiApp.C @@ -535,7 +535,7 @@ TransientMultiApp::incrementTStep(Real target_time) } void -TransientMultiApp::finishStep() +TransientMultiApp::finishStep(bool recurse_through_multiapp_levels) { if (!_sub_cycling) { @@ -544,6 +544,13 @@ TransientMultiApp::finishStep() Transient * ex = _transient_executioners[i]; ex->endStep(); ex->postStep(); + if (recurse_through_multiapp_levels) + { + ex->feProblem().finishMultiAppStep(EXEC_TIMESTEP_BEGIN, + /*recurse_through_multiapp_levels=*/true); + ex->feProblem().finishMultiAppStep(EXEC_TIMESTEP_END, + /*recurse_through_multiapp_levels=*/true); + } } } } @@ -585,7 +592,8 @@ 
TransientMultiApp::computeDT() return smallest_dt; } -void TransientMultiApp::resetApp( +void +TransientMultiApp::resetApp( unsigned int global_app, Real /*time*/) // FIXME: Note that we are passing in time but also grabbing it below { @@ -611,7 +619,8 @@ void TransientMultiApp::resetApp( } } -void TransientMultiApp::setupApp(unsigned int i, Real /*time*/) // FIXME: Should we be passing time? +void +TransientMultiApp::setupApp(unsigned int i, Real /*time*/) // FIXME: Should we be passing time? { auto & app = _apps[i]; Transient * ex = dynamic_cast<Transient *>(app->getExecutioner()); diff --git a/framework/src/postprocessors/TimePostprocessor.C b/framework/src/postprocessors/TimePostprocessor.C new file mode 100644 index 000000000000..3ac3ff2e96ed --- /dev/null +++ b/framework/src/postprocessors/TimePostprocessor.C @@ -0,0 +1,32 @@ +//* This file is part of the MOOSE framework +//* https://www.mooseframework.org +//* +//* All rights reserved, see COPYRIGHT for full restrictions +//* https://github.com/idaholab/moose/blob/master/COPYRIGHT +//* +//* Licensed under LGPL 2.1, please see LICENSE for details +//* https://www.gnu.org/licenses/lgpl-2.1.html + +#include "TimePostprocessor.h" +#include "FEProblem.h" + +registerMooseObject("MooseApp", TimePostprocessor); + +InputParameters +TimePostprocessor::validParams() +{ + InputParameters params = GeneralPostprocessor::validParams(); + params.addClassDescription("Reports the current time"); + return params; +} + +TimePostprocessor::TimePostprocessor(const InputParameters & parameters) + : GeneralPostprocessor(parameters), _feproblem(dynamic_cast<FEProblemBase &>(_subproblem)) +{ +} + +Real +TimePostprocessor::getValue() +{ + return _feproblem.time(); +} diff --git a/framework/src/problems/FEProblemBase.C b/framework/src/problems/FEProblemBase.C index 6784c55ee955..065b48f62cd9 100644 --- a/framework/src/problems/FEProblemBase.C +++ b/framework/src/problems/FEProblemBase.C @@ -4300,7 +4300,7 @@ FEProblemBase::incrementMultiAppTStep(ExecFlagType type) 
} void -FEProblemBase::finishMultiAppStep(ExecFlagType type) +FEProblemBase::finishMultiAppStep(ExecFlagType type, bool recurse_through_multiapp_levels) { const auto & multi_apps = _multi_apps[type].getActiveObjects(); @@ -4310,7 +4310,7 @@ FEProblemBase::finishMultiAppStep(ExecFlagType type) << std::endl; for (const auto & multi_app : multi_apps) - multi_app->finishStep(); + multi_app->finishStep(recurse_through_multiapp_levels); MooseUtils::parallelBarrierNotify(_communicator, _parallel_barrier_messaging); diff --git a/test/tests/multiapps/picard_multilevel/multilevel_dt_rejection/master.i b/test/tests/multiapps/picard_multilevel/multilevel_dt_rejection/master.i new file mode 100644 index 000000000000..4f1356fa2f34 --- /dev/null +++ b/test/tests/multiapps/picard_multilevel/multilevel_dt_rejection/master.i @@ -0,0 +1,111 @@ +[Mesh] + type = GeneratedMesh + dim = 2 + nx = 10 + ny = 10 + parallel_type = replicated +[] + +[Variables] + [./u] + [../] +[] + +[AuxVariables] + [./v] + [../] +[] + +[AuxKernels] + [./set_v] + type = FunctionAux + variable = v + function = 't' + [../] +[] + +[Kernels] + [./diff] + type = CoefDiffusion + variable = u + coef = 0.1 + [../] + [./coupled_force] + type = CoupledForce + variable = u + v = v + [../] + [./time] + type = TimeDerivative + variable = u + [../] +[] + +[BCs] + [./left] + type = DirichletBC + variable = u + boundary = left + value = 0 + [../] + [./right] + type = DirichletBC + variable = u + boundary = right + value = 1 + [../] +[] + +[Executioner] + type = Transient + solve_type = PJFNK + num_steps = 2 + petsc_options_iname = '-pc_type -pc_hypre_type' + petsc_options_value = 'hypre boomeramg' + picard_max_its = 1 + auto_advance = false +[] + +[MultiApps] + [./sub1] + type = TransientMultiApp + positions = '0 0 0' + input_files = picard_sub.i + execute_on = 'timestep_end' + [../] +[] + +[Transfers] + [./u_to_v2] + type = MultiAppNearestNodeTransfer + direction = to_multiapp + multi_app = sub1 + source_variable = u + 
variable = v2 + [../] + [time_to_sub] + type = MultiAppPostprocessorTransfer + from_postprocessor = time + to_postprocessor = master_time + direction = to_multiapp + multi_app = sub1 + [] + [dt_to_sub] + type = MultiAppPostprocessorTransfer + from_postprocessor = dt + to_postprocessor = master_dt + direction = to_multiapp + multi_app = sub1 + [] +[] + +[Postprocessors] + [time] + type = TimePostprocessor + execute_on = 'timestep_end' + [] + [dt] + type = TimestepSize + execute_on = 'timestep_end' + [] +[] diff --git a/test/tests/multiapps/picard_multilevel/multilevel_dt_rejection/picard_sub.i b/test/tests/multiapps/picard_multilevel/multilevel_dt_rejection/picard_sub.i new file mode 100644 index 000000000000..05c89dca5a87 --- /dev/null +++ b/test/tests/multiapps/picard_multilevel/multilevel_dt_rejection/picard_sub.i @@ -0,0 +1,161 @@ +[Mesh] + type = GeneratedMesh + dim = 2 + nx = 10 + ny = 10 +[] + +[Variables] + [./v] + [../] +[] + +[AuxVariables] + [./v2] + [../] + [./v3] + [../] + [./w] + [../] +[] + +[AuxKernels] + [./set_w] + type = NormalizationAux + variable = w + source_variable = v + normal_factor = 0.1 + [../] +[] + +[Kernels] + [./diff_v] + type = Diffusion + variable = v + [../] + [./coupled_force] + type = CoupledForce + variable = v + v = v2 + [../] + [./coupled_force2] + type = CoupledForce + variable = v + v = v3 + [../] + [./td_v] + type = TimeDerivative + variable = v + [../] +[] + +[BCs] + [./left_v] + type = FunctionDirichletBC + variable = v + boundary = left + function = func + [../] + [./right_v] + type = DirichletBC + variable = v + boundary = right + value = 0 + [../] +[] + +[Functions] + [func] + type = ParsedFunction + value = 'if(t < 2.5, 1, 1 / t)' + [] +[] + +[Postprocessors] + [./picard_its] + type = NumPicardIterations + execute_on = 'initial timestep_end' + [../] + [master_time] + type = Receiver + execute_on = 'timestep_end' + [] + [master_dt] + type = Receiver + execute_on = 'timestep_end' + [] + [time] + type = TimePostprocessor 
+ execute_on = 'timestep_end' + [] + [dt] + type = TimestepSize + execute_on = 'timestep_end' + [] +[] + +[Executioner] + type = Transient + solve_type = PJFNK + petsc_options_iname = '-pc_type -pc_hypre_type' + petsc_options_value = 'hypre boomeramg' + picard_max_its = 2 # deliberately make it fail at 2 to test the time step rejection behavior + nl_rel_tol = 1e-5 # loose enough to force multiple Picard iterations on this example + l_tol = 1e-5 # loose enough to force multiple Picard iterations on this example + picard_rel_tol = 1e-8 + num_steps = 2 +[] + +[MultiApps] + [./sub2] + type = TransientMultiApp + positions = '0 0 0' + input_files = picard_sub2.i + execute_on = timestep_end + [../] +[] + +[Transfers] + [./v_to_v3] + type = MultiAppNearestNodeTransfer + direction = from_multiapp + multi_app = sub2 + source_variable = v + variable = v3 + [../] + [./w] + type = MultiAppNearestNodeTransfer + direction = to_multiapp + multi_app = sub2 + source_variable = w + variable = w + [../] + [time_to_sub] + type = MultiAppPostprocessorTransfer + from_postprocessor = time + to_postprocessor = sub_time + direction = to_multiapp + multi_app = sub2 + [] + [dt_to_sub] + type = MultiAppPostprocessorTransfer + from_postprocessor = dt + to_postprocessor = sub_dt + direction = to_multiapp + multi_app = sub2 + [] + [master_time_to_sub] + type = MultiAppPostprocessorTransfer + from_postprocessor = time + to_postprocessor = master_time + direction = to_multiapp + multi_app = sub2 + [] + [master_dt_to_sub] + type = MultiAppPostprocessorTransfer + from_postprocessor = dt + to_postprocessor = master_dt + direction = to_multiapp + multi_app = sub2 + [] +[] diff --git a/test/tests/multiapps/picard_multilevel/multilevel_dt_rejection/picard_sub2.i b/test/tests/multiapps/picard_multilevel/multilevel_dt_rejection/picard_sub2.i new file mode 100644 index 000000000000..f320d211a4f1 --- /dev/null +++ b/test/tests/multiapps/picard_multilevel/multilevel_dt_rejection/picard_sub2.i @@ -0,0 +1,83 @@ 
+[Mesh] + type = GeneratedMesh + dim = 2 + nx = 10 + ny = 10 +[] + +[Variables] + [./v] + [../] +[] + +[AuxVariables] + [./w] + [../] +[] + +[Kernels] + [./diff_v] + type = Diffusion + variable = v + [../] + [./td_v] + type = TimeDerivative + variable = v + [../] +[] + +[BCs] + [./left_v] + type = DirichletBC + variable = v + boundary = left + value = 1 + [../] + [./right_v] + type = DirichletBC + variable = v + boundary = right + value = 0 + [../] +[] + +[Executioner] + type = Transient + solve_type = PJFNK + petsc_options_iname = '-pc_type -pc_hypre_type' + petsc_options_value = 'hypre boomeramg' + nl_rel_tol = 1e-5 # loose enough to force multiple Picard iterations on this example + l_tol = 1e-5 # loose enough to force multiple Picard iterations on this example + num_steps = 2 +[] + +[Postprocessors] + [master_time] + type = Receiver + execute_on = 'timestep_end' + [] + [master_dt] + type = Receiver + execute_on = 'timestep_end' + [] + [sub_time] + type = Receiver + execute_on = 'timestep_end' + [] + [sub_dt] + type = Receiver + execute_on = 'timestep_end' + [] + [time] + type = TimePostprocessor + execute_on = 'timestep_end' + [] + [dt] + type = TimestepSize + execute_on = 'timestep_end' + [] +[] + +[Outputs] + csv = true +[] diff --git a/test/tests/multiapps/picard_multilevel/multilevel_dt_rejection/test_multilevel.py b/test/tests/multiapps/picard_multilevel/multilevel_dt_rejection/test_multilevel.py new file mode 100644 index 000000000000..3ec9c6ad437b --- /dev/null +++ b/test/tests/multiapps/picard_multilevel/multilevel_dt_rejection/test_multilevel.py @@ -0,0 +1,35 @@ +#!/usr/bin/env python3 +#* This file is part of the MOOSE framework +#* https://www.mooseframework.org +#* +#* All rights reserved, see COPYRIGHT for full restrictions +#* https://github.com/idaholab/moose/blob/master/COPYRIGHT +#* +#* Licensed under LGPL 2.1, please see LICENSE for details +#* https://www.gnu.org/licenses/lgpl-2.1.html + +import numpy as np +import unittest + +class 
TestMultiLevel(unittest.TestCase): + def test(self): + data = np.genfromtxt('master_out_sub10_sub20.csv', dtype=float, delimiter=',', names=True) + + # We should have two time steps plus the initial state + self.assertEqual(len(data), 3) + + # master, sub, and sub-sub times should all be equivalent + for i in range(len(data)): + self.assertEqual(data['sub_time'][i], data['master_time'][i]) + self.assertEqual(data['sub_time'][i], data['time'][i]) + + # master, sub, and sub-sub dts should all be equivalent + for i in range(len(data['sub_dt'])): + self.assertEqual(data['sub_dt'][i], data['dt'][i]) + self.assertEqual(data['sub_dt'][i], data['master_dt'][i]) + + # The time at the second step should not equal the time-step size + self.assertNotEqual(data['time'][2], data['dt'][2]) + +if __name__ == '__main__': + unittest.main(__name__, verbosity=2) diff --git a/test/tests/multiapps/picard_multilevel/multilevel_dt_rejection/tests b/test/tests/multiapps/picard_multilevel/multilevel_dt_rejection/tests new file mode 100644 index 000000000000..9b2ddeef7f4a --- /dev/null +++ b/test/tests/multiapps/picard_multilevel/multilevel_dt_rejection/tests @@ -0,0 +1,20 @@ +[Tests] + issues = '#15166' + design = 'TransientMultiApp.md' + [run] + type = RunApp + input = master.i + allow_warnings = True + expect_out = 'sub1 failed to converge' + max_buffer_size = -1 + requirement = 'The system shall be able to run multiple timesteps of a multi-level multi-app simulation, handling the case when Picard coupling between two levels fails to converge.' + [] + [python] + prereq = 'run' + type = PythonUnitTest + input = 'test_multilevel.py' + test_case = 'TestMultiLevel' + requirement = 'The system shall be able to uniformly cut the time-step across levels of a multi-app solve, even when there is no Picard coupling between two levels.' + required_python_packages = 'numpy' + [] +[]
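The advancement precedence this PR introduces in `PicardSolve::autoAdvance()` has three layers: sub-applications advance by default, doing Picard iterations on a transient problem suppresses advancement, a `dynamic_cast` check on the executioner type can re-enable it, and an `auto_advance` value explicitly set by the user overrides everything. A minimal standalone sketch of that decision order (the function and flag names here are invented for illustration; this is not MOOSE API):

```python
def auto_advance(has_picard_its, is_transient, special_executioner=False, user_value=None):
    """Sketch of the decision order in PicardSolve::autoAdvance()."""
    # Default: auto-advance unless doing Picard iterations on a transient problem
    advance = not (has_picard_its and is_transient)

    # The C++ source special-cases one executioner type via dynamic_cast:
    # such executioners auto-advance even when doing Picard iterations
    if special_executioner and has_picard_its:
        advance = True

    # An auto_advance value explicitly set in the input file always wins
    if user_value is not None:
        advance = user_value

    return advance

# A transient Picard master holds its sub-apps back by default...
print(auto_advance(has_picard_its=True, is_transient=True))                   # False
# ...but the user can force advancement (or, as in master.i above, forbid it)
print(auto_advance(has_picard_its=True, is_transient=True, user_value=True))  # True
```

This is also why `Transient::execute` calls `finishMultiAppStep` only when `autoAdvance()` returns false: in the auto-advance case the sub-applications were already advanced from `TransientMultiApp::solveStep`, and calling it again would advance them twice.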