
Reach successful XSuite <-> FLUKA exchange #31

Merged
14 commits merged into xsuite:FLUKA on Sep 18, 2023

Conversation


@ghugo83 commented Sep 5, 2023

This PR includes the following fixes:

  • fluka_init_max_uid should also be called from XSuite, like in SixTrack. Otherwise, the server complains (from fluka_coupling/fluka/source_pre.f), and breaks the connection.
  • pyfluka_set_synch_part should also be called from XSuite, like in SixTrack. Otherwise, the server complains (from fluka_coupling/fluka/source_pre.f), and breaks the connection.
  • The reference particle info should be set, like in SixTrack; it is now passed from xpart. Otherwise, the magnetic rigidity is accidentally null, hence all particles are cut: 0 particles received.
  • It was necessary to set a timeout value (at least on my local install); otherwise, I was getting a timeout error in the FLUKA output file (lhc_run3_30cm001.out).
  • The I/O buffers from Python / C / Fortran were not in sync, so printouts did not appear in actual runtime order, which made debugging confusing. I added a few flushes; this is definitely needed for debugging (for production, it can be further wrapped into a debug mode).
  • Shift of 1 in particle indexing, for partID and parentID. This led to crashes in FLUKA runs, from mgdraw.f.
  • Shift of 1 in turn index.
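The null-rigidity failure mode above follows directly from the definition Bρ = p / (|q|·c). A minimal sketch of the physics (not XSuite's actual API; the function name and units are illustrative):

```python
# Sketch of magnetic rigidity: Brho [T*m] = p0c [eV] / (|q| * c).
# If the reference particle is never set, p0c stays 0 and Brho is 0,
# so every particle fails the rigidity cut: 0 particles received.
C_LIGHT = 299_792_458.0  # speed of light, m/s

def magnetic_rigidity(p0c_eV: float, charge: int) -> float:
    """Rigidity in T*m from reference momentum p0c (eV) and charge (units of e)."""
    if charge == 0:
        raise ValueError("charge must be non-zero")
    return p0c_eV / (abs(charge) * C_LIGHT)

print(magnetic_rigidity(6.8e12, 1))  # LHC Run 3 proton, ~22682 T*m
print(magnetic_rigidity(0.0, 1))     # unset reference particle -> 0.0
```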
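The two shift-of-1 items come from Python/XSuite numbering particles from 0 while Fortran/FlukaIO expects ids starting at 1; passing a raw 0 as partID/parentID is what upset mgdraw.f. A toy illustration with hypothetical helper names (not the actual coupling code):

```python
# Convert between 0-based Python ids and 1-based FlukaIO ids.

def to_fluka_id(python_id: int) -> int:
    # shift on the way out so the smallest id sent is 1, never 0
    return python_id + 1

def from_fluka_id(fluka_id: int) -> int:
    # shift back on reception to recover the 0-based Python id
    return fluka_id - 1

ids = [0, 1, 19_999]
sent = [to_fluka_id(i) for i in ids]
assert min(sent) >= 1                           # no illegal 0 id crosses the wire
assert [from_fluka_id(i) for i in sent] == ids  # round trip is lossless
```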

Less important:

  • Fixed a crash in the Python version check.
  • Used npart instead of n_alloc as the parameter to pyfluka_init_max_uid (like in SixTrack). Under the hood, pyfluka_init_max_uid only sets a particle id: the minimum id for new particles to start from (no memory allocation). When inspecting the received particles, this avoids having the new particle ids start from e.g. 50000 (the default n_alloc); instead, they simply start from npart + 1.
  • Instead, kept n_alloc (instead of npart) for fluka_mod_init. Here, the situation is different, as under the hood, there is memory allocation (I presume this was done to avoid having to recompile, after changing the number of particles in example.py).
  • Also set e0 and eof etc. (mod_common), as is done in SixTrack (otherwise, they stay at their default value of 0).
  • Tried to keep the implementation as close as possible to the SixTrack one for now, to be able to easily compare printouts / results.
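The npart-versus-n_alloc point boils down to which number seeds the maximum uid. A toy illustration of the id bookkeeping (the real pyfluka_init_max_uid signature may differ; names here are illustrative):

```python
# Per the description above, pyfluka_init_max_uid only records the highest
# uid already in use; it allocates no memory. New particles created by
# FLUKA are then numbered from that value + 1 upwards.

def first_new_particle_id(max_uid: int) -> int:
    # secondaries produced by FLUKA start at max_uid + 1
    return max_uid + 1

npart = 20_000    # particles actually tracked
n_alloc = 50_000  # allocation size (default), only relevant to fluka_mod_init

print(first_new_particle_id(npart))    # 20001: ids follow the tracked bunch
print(first_new_particle_id(n_alloc))  # 50001: ugly gap in the id sequence
```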

To be investigated, for peace of mind:

  • Check whether the collimator index has a shift by 1 or not.
  • Not fully confident about the floating-point precision used here and there.
  • Quickly observed that the spin passed from SixTrack to FLUKA for protons is 0 (at least at the level of the connection, hence in the info eventually received by FLUKA).
  • The value of the proton mass0 in XSuite is slightly different from the one in SixTrack (at least in fluka.log). SixTrack seems to eventually use a third value (taken from initial.dat).

Perspective:

  • Run exactly the same particle distribution as SixTrack, compare the lhc_run3_30cm001.out files, and look at the results.
  • Extend to multi-turn.
  • Ideally, one could even try to get identical results. This is not fully trivial, though: there are precision discrepancies between the different languages, and the starting state of the FLUKA random engine would have to be set.

Should make @freddieknets happy.
Still a lot of further debugging to be done down the line.

…0 for both sending particles and receiving particles. Deactivate server launch from xsuite (debugging purposes) + Uncomment functions from SixTrack and fix related types in common_modules + Modifications in pyfluka that solve the issues (follow what is done in SixTrack main_cr)
…thon engine is shutting down, but explicitly from example.py. This allows a complete and clean server & connection closure. + The "did you compile?" message is only relevant the first time pyfluka is imported.
…s set in main_cr.f90). Instead, in XSuite, these variables were not set, leading to an erroneous BRH0 (0) and hence to all particles being cut. Can now receive particles. TO DO: while those values in common_modules.f90 are not presently used, they will be once the kernel_element in mod_fluka.f90 is uncommented, so maybe they should be set in mod_common as well, instead of only being passed as parameters. TO DO 2: the next crash to explore happens with total particles >= 500, a segfault in mgdraw.
…and from 1 in FlukaIO. Hence, a 0 index was wrongly passed as particle/parentId, causing the issue with IDPIDP and IDPGEN (from mdstck.f) in mgdraw.f. Sending and receiving now fully work, even with 20 000 particles.
…and from 1 in FlukaIO. Hence, a 0 index was wrongly passed as particle/parentId, causing the issue with IDPIDP and IDPGEN (from mdstck.f) in mgdraw.f
…mory allocation done under the hood, contrary to fluka_mod_init). This makes the ids of received particles start from npart + 1 instead of, inelegantly, n_alloc + 1
@freddieknets freddieknets merged commit 2f2863f into xsuite:FLUKA Sep 18, 2023