Flowfield is not "clean" in 3D test case #20

Open
eeethon opened this issue Nov 22, 2021 · 12 comments

@eeethon

eeethon commented Nov 22, 2021

Hey guys. I recently ran the test cases of this code and ran into some problems. When I run the 2D cases, the results are qualitatively OK. However, for 3D cases, e.g. "ONERA M6", the flowfield is qualitatively abnormal: there are zones with abnormal variable values far away from the wing, where the flow should be close to the inflow state. The test cases were downloaded from the link given in "README_TESTS.txt" and were run directly with the provided mesh and configuration file. Could there be a problem with freestream preservation or the boundary conditions? Does anyone have any ideas?

@TakisCFD
Collaborator

I have just tried them now without any issues.
Please make sure that the directory in which you run the code contains only the ucns3d_p executable newly compiled for your system, UCNS3D.DAT, and the GRID.* files.
Otherwise you are using the existing RESTART.dat files with the wrong flags in the UCNS3D.DAT file.
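As an example, a clean run directory for the ONERA case would look roughly like this before launching (the RESTART.dat removal is only needed if leftovers from a previous run are present, and the grid file may be GRID.msh or GRID.bin depending on the test case):

    cd TEST/3D/RANS/ONERA_M6     # test-case directory inside the downloaded archive
    rm -f RESTART.dat            # discard any restart file from an earlier run
    ls                           # should list only: GRID.*  UCNS3D.DAT  ucns3d_p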

@eeethon
Author

eeethon commented Nov 22, 2021

Yes, I know the effect of the restart file, and every time I run in a directory containing only the grid file and the configuration file. I ran the DDES case of a cylinder, and the result is normal. Then I ran the DDES of ONERA M6, and the flowfield looks normal. I wonder if there are errors in the parameters of the configuration files in the original test cases.

@TakisCFD
Collaborator

I have rerun both the DDES and ONERA test problems with the provided DAT files, without any issues, so it is not clear what the root of your problem could be.

@eeethon
Author

eeethon commented Nov 26, 2021

The test case was downloaded from "https://doi.org/10.5281/zenodo.3375432", and the ONERA M6 case that I tested is located in "TEST/3D/RANS/ONERA_M6". I confirm that running the code with the DAT file in this directory gives a non-physical result (a strange high-speed flow region under the wing), but when I change "CODE CONFIGURATION" from the default value 0 (which is what the ONERA M6 DAT file sets) to 9, the result looks normal. The problem occurs when I compile the code and simply run the ONERA case without any modification of the original DAT file in that directory, so I wonder if you modified something in the original DAT file in "TEST/3D/RANS/ONERA_M6". I think I have now solved the problem, but there seems to be a parameter error in the DAT file of the ONERA M6 case, and one cannot get a physical result by simply running it as provided.
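For anyone hitting the same thing, a quick way to locate the entry before editing it (this assumes the parameter is labelled exactly "CODE CONFIGURATION" inside UCNS3D.DAT, as quoted above; check the line grep reports before changing anything):

    grep -n "CODE CONFIGURATION" UCNS3D.DAT   # report the line holding the parameter
    # then open UCNS3D.DAT in a text editor and change the value there from 0 to 9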

@SRkumar97

Hello,
Although this thread is a bit old, I'm facing a similar issue with the current version of the test case.
I am trying to run the ONERA M6 3D RANS case from the TESTS directory.

First of all, I replaced the existing ucns3d_p executable in the test folder with the executable compiled from the src folder. I then ran the case in two ways:

  1. Keeping all the files in the ONERA M6 folder as they are, and replacing only the executable, I ran the case. However, errors with the messages "array index out of bounds" and "SIGSEGV" both appear, despite the stack size having been set to unlimited (ulimit -s unlimited) before beginning the run.

  2. Given that only ucns3d_p, UCNS3D.DAT and grid.msh (or grid.bin) are needed to run a case, I retained only these 3 files and ran the case. I am facing only one error message this time: the array index out of bounds. Here too, I set the stack size to unlimited as in the case above (the launch commands are sketched below).

It's not clear why this case won't run.
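For reference, a minimal sketch of raising the limit and launching the case (the process count is a placeholder; the OMP_STACKSIZE line is just an additional thing I could try if the build uses OpenMP threads, and its value is only an example):

    ulimit -s unlimited          # remove the shell stack-size limit
    export OMP_STACKSIZE=512M    # per-thread stack, only relevant if OpenMP threading is enabled (example value)
    mpirun -np 8 ./ucns3d_p      # typical MPI launch; replace 8 with the desired number of processes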

Attached are screenshots of the errors from both runs: error_run1 (case 1) and error_run2 (case 2).

@TakisCFD
Collaborator

Make sure that you have only three files: grid.msh, UCNS3D.DAT and your ucns3d_p.
You can attach your UCNS3D.DAT, but I have not encountered any of the issues above with the current version of the code.

@SRkumar97

Thanks, yes, I made sure to keep only these 3 files before beginning the run when I tried the second time. The array index out of bounds error appears when I run with these 3 files.

Attached is the UCNS3D.DAT which I used; it is attached as a plain text file due to GitHub limitations on upload type.

UCNS3D_DAT.txt

@SRkumar97

SRkumar97 commented Jun 5, 2023

Hello sir,
I tried running the same case on an HPC cluster; the same Intel Fortran and MKL libraries were used, and the case was run with np=32.
When run on the cluster, the residual.dat file is written every 10 iterations and is generated up to iteration 4530.

But the run terminates at that point with almost the same error messages: this time there are no array-bound problems, but the simulation gets killed somewhere around iteration 4530. I had even increased the wall clock time to a value much higher than the default 82000 seconds.

RESTARTING 0 0.000000000000000E+000 0

                    ParMETIS Initiated                               


                    ParMETIS Operation Completed                     

UCNS3D Running

BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
RANK 0 PID 2415383 RUNNING AT scn5-10g
KILLED BY SIGNAL: 9 (Killed)

The last three lines above repeat for every process in the parallel job.

@TakisCFD
Collaborator

TakisCFD commented Jun 8, 2023

Please don't copy the entire output message from the code with large fonts, for better readability.
Did you start your simulation from scratch (no restart files etc.)? What settings did you use, and which mesh?

Please attach the UCNS3D.DAT file used and the history.txt file.
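Also, a kill by signal 9 partway through a run usually means the job scheduler or the kernel OOM killer stopped the processes, rather than the solver itself crashing, so it is worth checking the memory use on the compute node. A sketch of what to look at (the sacct line only applies if the cluster uses Slurm, and <jobid> is a placeholder):

    dmesg -T | grep -i -E 'out of memory|killed process'      # kernel OOM-killer messages on the node
    sacct -j <jobid> --format=JobID,MaxRSS,State,ExitCode     # per-step memory high-water mark (Slurm only)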

@SRkumar97

Sorry for that mishap.

Yes, I ran the case with just 3 files in the directory - the grid.msh, ucns3d_p and UCNS3D.DAT files.

There were no history or restart files in the directory when the run was initiated.

Attached is the UCNS3D.DAT file for your reference, along with the history.txt file, which was generated automatically during the run.
history.txt
UCNS3D.TXT

Thanks

@SRkumar97

Hi! Can someone please help with this thread?
I am facing the same termination error with exit code 9, but for a flat-plate boundary-layer iLES case run on a cluster.

The issue occurs when np exceeds 66 or 67 out of the 128 cores in the node (single-thread run). It is very strange that the termination happens only beyond a certain np.

However, I am able to use all 128 cores of the single node when I run a much coarser mesh.

Any insights or advice on how to get the fine-mesh case to also run smoothly on all 128 procs?

@TakisCFD
Collaborator

TakisCFD commented Sep 4, 2023

You need a clear specification of thread placement over cores. Otherwise, when using more than 64, if the placement is not handled correctly by default on your cluster, the processes may all be assigned to one socket, possibly overpopulating that CPU and eventually running out of memory.
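For example, pinning can be made explicit at launch; the lines below are a sketch assuming Intel MPI (a guess based on the Intel toolchain mentioned earlier), with the process count as a placeholder. With Open MPI the equivalent would be to pass --bind-to core --map-by core to mpirun.

    export I_MPI_PIN=1              # enable process pinning (Intel MPI)
    export I_MPI_PIN_DOMAIN=core    # pin one MPI rank per physical core
    export I_MPI_DEBUG=4            # print the resulting pinning map at startup
    mpirun -np 128 ./ucns3d_p       # replace 128 with the number of ranks actually used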
