mpirun InductionHeating examples #10
Comments
Regarding the case_layer-conductivity.sif: Output without mpi:
Output with mpi:
|
Hi, this might not have been properly parallelized yet as it is a very new feature. So this is what you get from partition 0. If you set Simulation::Max Output Partition = 32, for example, you should be able to see whether something is created in the other partitions. |
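For reference, a minimal sketch of what that switch would look like in the Simulation section of the case .sif; only the keyword line comes from the comment above, the block layout is just illustrative:
Simulation
  ! report output from up to 32 partitions instead of only partition 0
  Max Output Partition = 32
End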
Hi Peter, indeed, the IntersectionBCs are allocated correctly.
Nevertheless, I think there is a problem with this feature, as I don't see any current in the result. I just attach the complete log, maybe you can read something from it: layercond_mpi_fulloutput.log. Have there been any recent changes to that function? I used a solver from October 7th to run the case. |
Hi @raback, I tried to run the case again with a newer solver. The problem persists, but I get a more elaborate error message:
Here are the complete logs: layercond_mpi_allranks.log. Do you have any idea how I could fix that problem? EDIT: I use a solver built from this commit: ElmerCSC/elmerfem@d3b6930 |
I posted this issue here: http://www.elmerfem.org/forum/viewtopic.php?t=7916 |
I seem to have issues with this even in serial. Does it work for you? Trying to narrow down where it broke down. Should add the feature set to the tests. |
You're right, I didn't try that again. With an older version (ElmerCSC/elmerfem@0ff29c7, https://github.com/nemocrys/opencgs/blob/main/Docker/Dockerfile) it works well in serial but fails with mpi (as reported above: #10 (comment)). |
Update: I tried again with ElmerCSC/elmerfem@893863e (13.03.2023)
|
Hi
Nice!
I also tried these:
* case_layer-conductivity.sif: gives a trivial result with mpi (output here: https://github.com/ElmerCSC/elmer-elmag/files/11001715/layer-cond.log) but works without mpi
Seems something is not implemented for parallel usage? Or maybe it's just buggy?
* case_circuit.sif: does not converge (neither with nor without mpi)
These worked for me in the MGDynamics-Solver:
Linear System Complex = Logical True
Linear System Solver = Iterative
Linear System Iterative Method = GCR
Linear System ILU Order = 0
Linear System preconditioning = Circuit
Linear System Convergence Tolerance = 1.e-6
Linear System Max Iterations = 6000
Linear System Residual Output = 10
both in serial and in a 4-task mpi run.
Br, Juha
|
Linear System Iterative Method = GCR
"BiCGstabL" also works, and is faster.
|
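For illustration, a hedged variant of the iterative-method line quoted just above, swapping GCR for the BiCGStabl method mentioned in the previous comment; the polynomial-degree line and its value 4 are an assumed example, not taken from the thread:
  ! alternative Krylov method; reportedly faster than GCR here
  Linear System Iterative Method = BiCGStabl
  ! example value for the l parameter of BiCGStab(l)
  BiCGStabl Polynomial Degree = 4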
I believe that in the plain circuit case the only issue was Linear System Complex = Logical False, which should be Linear System Complex = Logical True. The harmonic cases in the CircuitBuilder examples tend to have this setting because it applies to MacOS (which is where I built the examples). If you're working on Linux, you should set it to Linear System Complex = Logical True. Somehow on MacOS you still get the results for the imaginary part; I don't remember the reason... it's been a long time since I created those examples. However, I just tested the harmonic 3D open coil example, and that switch needs to be kept as False on MacOS and the results are correct. BR, Jon |
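To make Jon's point concrete, a minimal sketch of how the relevant line would read on Linux inside the MGDynamics solver section; the solver number and block layout are only illustrative:
Solver 2  ! the MGDynamics solver block; the number is illustrative
  ! on Linux, solve the harmonic circuit-coupled system with complex arithmetic
  Linear System Complex = Logical True
End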
Hi Jonathan 😀
Well, enabling the incomplete LU on top of the "Circuit" preconditioner really made the case converge.
But sure, we should use complex iterators for complex valued systems whenever possible.
Br, Juha
|
About the complex vs real iterators applied to complex valued systems: way back when, I arranged the linear system within elmersolver such that it can be thrown to both complex and real valued linear system solvers - not all solvers have complex arithmetic versions, notably most of the "direct" solvers. The complex iterative solvers are just better at solving complex valued systems than those using real arithmetic, in terms of the number of iterations or at times even in whether they succeed at all. Nice weekend everybody!
|
Amazing, good job! You are the man Juha! Have a great weekend! |
Linear System ILU Order = 0
I'd recommend still adding the above line (not much help otherwise) + you can then make the convergence tolerance stricter and still get results in a reasonable number of iterations (~100 iterations or so). |
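Put together, Juha's recommendation would look roughly like this inside the MGDynamics solver block; the 1.e-8 value is only an assumed example of a stricter tolerance, not a number from the thread:
  Linear System Preconditioning = Circuit
  ! incomplete LU of order 0 on top of the Circuit preconditioner
  Linear System ILU Order = 0
  ! example of a tolerance stricter than the 1.e-6 used earlier
  Linear System Convergence Tolerance = 1.e-8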
Thanks a lot for your comments! I changed the settings for case_circuits.sif, see #14. We've now got
@juharu I don't know what the reason is for case_layer-conductivity failing with mpi. I tried the test cases for the impedance BC with mpi, and they work well. Would it make sense to try different solver settings? |
Hi Arved
Thanks for all the good work!
Would it make sense to try different solver settings?
I don't think so, to me it seems that this just isn't implemented in parallel (I might be wrong of course).
Some of us should have a look, I guess.
Br, Juha
|
I just committed to github "devel" a few small fixes that seemingly allow "case_layer_conductivity.sif" to pass also in parallel. |
Also "case_coil-solver.sif" speeds up considerably with the above change, as probably does "case_scaled-conductivity.sif". |
Thanks a lot, Juha! I just compiled & tried again, works well also on my computer :) |
I just tried to run the InductionHeating cases with mpi on 4 cores:
@raback are there any known limitations of the applied solvers?
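For context, a hedged sketch of how a 4-task run of one of these cases is typically set up; the mesh directory name MESH is only a placeholder and the exact flags can differ between Elmer versions:
# partition the ElmerSolver mesh into 4 pieces with Metis
ElmerGrid 2 2 MESH -metis 4
# run on 4 MPI tasks (newer builds accept the sif name directly; older ones read ELMERSOLVER_STARTINFO)
mpirun -np 4 ElmerSolver_mpi case.sif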