Error when running df0DFoam (pytorchIntegrator in CH4)  #314

@WangS195

Description

Thanks to all members of the DeepFlame team for their contributions; this is a very meaningful and interesting effort! I am learning the setup and solution methods of DeepFlame. However, I have run into some difficulties and would like to ask for your help, for which I am very grateful!

Issue description: when I run the df0DFoam (pytorchIntegrator in CH4) case from the examples folder with the CVODE model, everything works. However, when I use the DNN model, the following error appears in the log.mpirun file:


(All four MPI ranks print the same fatal error and backtrace; one de-interleaved copy is shown.)

```
CanteraError thrown by ThermoPhase::setState_HPorUV (HP):
No convergence in 500 iterations
Target Enthalpy = 1587618.5354423951
Current Pressure = 101325.0
Starting Temperature = 1699.9877983508209
Current Temperature = 1.1837492622930317e-229
Current Enthalpy = 15133764.643765664
Current Delta T = -2.3674985245860633e-229

--> FOAM FATAL ERROR:
From function void Foam::dfChemistryModel::correctThermo() [with ThermoType = Foam::basicThermo]
in file /home/w/deepflame-dev/src/dfChemistryModel/lnInclude/dfChemistryModel.C at line 586.

FOAM parallel run aborting

#0  Foam::error::printStack(Foam::Ostream&) at ??:?
#1  Foam::error::abort() at ??:?
#2  ? in "/home/w/deepflame-dev/platforms/linux64GccDPInt32Opt/bin/df0DFoam"
#3  ? in "/home/w/deepflame-dev/platforms/linux64GccDPInt32Opt/bin/df0DFoam"
#4  __libc_start_main in "/lib/x86_64-linux-gnu/libc.so.6"
#5  ? in "/home/w/deepflame-dev/platforms/linux64GccDPInt32Opt/bin/df0DFoam"

MPI_ABORT was invoked on rank 3 in communicator MPI COMMUNICATOR 3 SPLIT FROM 0
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.

PMIX ERROR: UNREACHABLE in file ../../../src/server/pmix_server.c at line 2193
3 more processes have sent help message help-mpi-api.txt / mpi-abort
Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
```
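For context on the error message: `setState_HPorUV (HP)` is a Newton-style iteration on temperature that, at fixed pressure, adjusts T until the mixture enthalpy matches the target. If the chemistry integrator (here, the DNN) hands back an inconsistent thermodynamic state, the target enthalpy can become unreachable and T collapses toward zero, as in the log above. A minimal sketch of that failure mode (plain Python with a toy quadratic enthalpy model; the coefficients `a`, `b` and the floor `T_min` are hypothetical, not DeepFlame or Cantera code):

```python
def solve_T_from_h(h_target, T0, a=1000.0, b=0.2,
                   T_min=1e-3, max_iter=500, tol=1e-6):
    """Toy HP state solve: Newton-iterate T until h(T) matches h_target.

    h(T) = a*T + 0.5*b*T**2 is a hypothetical enthalpy model, so
    cp(T) = dh/dT = a + b*T. T is floored at T_min to stay positive.
    """
    T = T0
    for _ in range(max_iter):
        h = a * T + 0.5 * b * T * T   # current enthalpy at temperature T
        dh_dT = a + b * T             # cp(T), the Newton derivative
        dT = (h_target - h) / dh_dT   # Newton temperature step
        T = max(T + dT, T_min)        # clamp to a physical temperature
        if abs(dT) < tol:
            return T
    raise RuntimeError(f"No convergence in {max_iter} iterations")


# Reachable target: h(1700) = 1000*1700 + 0.1*1700**2 = 1,989,000
print(solve_T_from_h(1_989_000.0, T0=300.0))   # converges near 1700 K

# Unreachable target (as when the returned state is inconsistent):
try:
    solve_T_from_h(-1.0e6, T0=1700.0)
except RuntimeError as e:
    print(e)   # "No convergence in 500 iterations"
```

With a reachable target the iteration converges in a handful of steps; with an unreachable one the step never shrinks below the tolerance and the solver gives up with the same "No convergence in 500 iterations" message seen in the log.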


I downloaded the DNN model, copied "HE04_Hydrogen_ESH2_GMS_sub_20221101" into "/deepflame-dev/mechanisms", and then turned on the TorchSettings as below:


```
TorchSettings
{
    torch on;
    GPU off;
    log on;
    torchModel "HE04_Hydrogen_ESH2_GMS_sub_20221101";
    coresPerNode 4;
}
```

When I run ./Allrun, the program terminates and shows the error above. Could you help me solve this problem, and point out what/where the problem is? Thank you a lot!

By the way, when I run the df0DFoam/H2/pytorchIntegrator/ case, everything works fine.
