Fix heft generation after alphas PR: more fundamentally, remove reading of param_card.dat? #439
Comments
… does not build yet, see madgraph5#439 I will merge this anyway as standalone cudacpp for SM physics works fine (there is one exception, uudd fails, see madgraph5#440 - I will fix that a posteriori). Note also that alphas from madevent are still not integrated, and so ggtt.mad fails to build for instance, madgraph5#441. This will be the next big thing.
I clarify my own comment and change the title: the parameters come from param_card.dat, not from run_card.dat. In generating ggtt.mad I launch the following steps, which normally are part of 'cd Source; make':
The fact that I do this at 'generation' time is only in my own scripts, because I want to commit to the repo some ggtt.mad code that includes all the generated files. For a normal end user, I do not suggest changing the way of working: the param_card.inc (Fortran) would be created when doing make from Source. What I am suggesting here is that the cudacpp plugin should also have a similar mechanism: as part of the make process in the src directory, we should have a mechanism to create param_card.h (and run_card.h?), which are pieces of code that we can then use inside cpp/cuda. This would remove the need to read the param_card.dat file at runtime. Note incidentally that the .inc files created for Fortran are not good for C++, because I would prefer to have a C++-style 'constexpr <param_name> = <param_value>;', while the Fortran .inc is only '<param_name> = <param_value>' (and even the format of the value may be Fortran specific: is D valid in C? I do not remember).
should become
(or maybe with fptype instead of double).
Incidentally again, note that the constexpr above ALREADY IS in the cudacpp generated code.
However:
Note incidentally that removing the runtime reading would remove all complications about locating the param_card.dat file itself (see issue #194 about MG5AMC_CARD_PATH).
Incidentally again, the approach I suggest here would solve yet another (real? imagined?) problem.
In the parameter calculations, an mdl_complexi was used in the past; after the alphas merge, I hardcoded this to (0,1). I have no idea if peculiar EFT models assume a value of complex i which is not (0,1) as an artefact of the model. I find it unlikely, but maybe it is a parametrization? Anyway, in the approach above, mdl_complexi, like all other parameters, could come from the param_card.h that ultimately comes from param_card.dat. (I do not see any way to change complexi in https://github.com/madgraph5/madgraph4gpu/blob/605291c3f527c3d1380f56fe96287ef401e23a41/epochX/cudacpp/gg_tt.mad/Cards/param_card.dat, anyway.)
So here we can really assume that mdl_complexi will always be cxmake(0,1).
…ixes the build (ONLY for HRDCOD=1) madgraph5#439 The build without HRDCOD=1, however, is still failing.
Hi Olivier, thanks. For completeness I checked the code: this comes from models/import_ufo.py.
So it is indeed hardcoded; I will hardcode it in a better way (in a better place) for cuda/cpp then.
… - ok but large differences in performance
…oat vectors ONLY TO NON-SM PROCESSES
I am about to merge PR #446 that will provide a first partial (but functioning) patch for EFT models with running alphas. A few comments about what this patch does, with respect to some of my earlier comments above.
Anyway, the immediate issue of this #439, namely that EFT was not at all working with running alphas, is now fixed (in one particular case), and I will merge it. As for the next steps, rather than continue here, I will open a new issue which will hopefully clarify a bit better what is remaining - and in any case will be lower priority.
The default HRDCOD=0 build presently fails:

ccache g++ -O3 -std=c++17 -I. -fPIC -Wall -Wshadow -Wextra -ffast-math -fopenmp -march=skylake-avx512 -mprefer-vector-width=256 -DMGONGPU_FPTYPE_DOUBLE -DMGONGPU_FPTYPE2_DOUBLE -c Parameters_MSSM_SLHA2.cc -o Parameters_MSSM_SLHA2.o
In file included from Parameters_MSSM_SLHA2.cc:8:
Parameters_MSSM_SLHA2.h:19:2: error: #error This non-SM physics process only supports MGONGPU_HARDCODE_PARAM builds (madgraph5#439): please run "make HRDCOD=1"
   19 | #error This non-SM physics process only supports MGONGPU_HARDCODE_PARAM builds (madgraph5#439): please run "make HRDCOD=1"
      | ^~~~~
In file included from Parameters_MSSM_SLHA2.cc:8:
Parameters_MSSM_SLHA2.h: In function ‘const Parameters_MSSM_SLHA2_dependentCouplings::DependentCouplings_sv Parameters_MSSM_SLHA2_dependentCouplings::computeDependentCouplings_fromG(const fptype_sv&)’:
Parameters_MSSM_SLHA2.h:806:56: error: conversion from ‘fptype_sv’ {aka ‘__vector(4) double’} to non-scalar type ‘const mgOnGpu::cxsmpl<double>’ requested
  806 | constexpr cxsmpl<double> mdl_G__exp__2 = ( ( G ) * ( G ) );
      |                                          ~~~~~~~~^~~~~~~~~
Parameters_MSSM_SLHA2.h:809:31: error: ‘mdl_I51x11’ was not declared in this scope
  809 | out.GC_51 = -( cI * G * mdl_I51x11 );
      |                         ^~~~~~~~~~
Parameters_MSSM_SLHA2.cc: In member function ‘void Parameters_MSSM_SLHA2::setIndependentParameters(SLHAReader&)’:
Parameters_MSSM_SLHA2.cc:67:3: error: ‘indices’ was not declared in this scope
   67 | indices[0] = 3;
      | ^~~~~~~
make[1]: *** [cudacpp_src.mk:236: Parameters_MSSM_SLHA2.o] Error 1

The non-default HRDCOD=1 however also fails, the first error being:

ccache g++ -O3 -std=c++17 -I. -fPIC -Wall -Wshadow -Wextra -ffast-math -fopenmp -march=skylake-avx512 -mprefer-vector-width=256 -DMGONGPU_FPTYPE_DOUBLE -DMGONGPU_FPTYPE2_DOUBLE -DMGONGPU_HARDCODE_PARAM -c Parameters_MSSM_SLHA2.cc -o Parameters_MSSM_SLHA2.o
In file included from Parameters_MSSM_SLHA2.cc:8:
Parameters_MSSM_SLHA2.h:380:51: error: call to non-‘constexpr’ function ‘mgOnGpu::cxsmpl<FP> mgOnGpu::conj(const mgOnGpu::cxsmpl<FP>&) [with FP = double]’
  380 | constexpr cxsmpl<double> mdl_conjg__yu3x3 = conj( mdl_yu3x3 );
      |                                             ~~~~^~~~~~~~~~~~~
In file included from Parameters_MSSM_SLHA2.h:13, from Parameters_MSSM_SLHA2.cc:8:
…succeeds but the build fails.

In make HRDCOD=0 (default):

Parameters_SMEFTsim_topU3l_MwScheme_UFO.h:19:2: error: #error This non-SM physics process only supports MGONGPU_HARDCODE_PARAM builds (madgraph5#439): please run "make HRDCOD=1"
   19 | #error This non-SM physics process only supports MGONGPU_HARDCODE_PARAM builds (madgraph5#439): please run "make HRDCOD=1"
      | ^~~~~
Parameters_SMEFTsim_topU3l_MwScheme_UFO.cc:33:47: error: exponent has no digits
   33 | mdl_WH = slha.get_block_entry( "decay", 25, 4.070000e - 03 );
      |                                             ^~~~~~~~~
...
Parameters_SMEFTsim_topU3l_MwScheme_UFO.h:535:43: error: ‘aS’ was not declared in this scope
  535 | const fptype_sv mdl_gHgg2 = ( -7. * aS ) / ( 720. * M_PI );
      |                                     ^~
Parameters_SMEFTsim_topU3l_MwScheme_UFO.h:541:64: error: conversion from ‘fptype_sv’ {aka ‘__vector(4) double’} to non-scalar type ‘const mgOnGpu::cxsmpl<double>’ requested
  541 | constexpr cxsmpl<double> mdl_G__exp__3 = ( ( G ) * ( G ) * ( G ) );
      |                                          ~~~~~~~~~~~~~~~~^~~~~~~~~
...
Parameters_SMEFTsim_topU3l_MwScheme_UFO.h:542:33: error: ‘mdl_WH’ was not declared in this scope; did you mean ‘mdl_dWH’?
  542 | const fptype_sv mdl_dWH = mdl_WH * ( -0.24161 * mdl_dGf + 0.96644 * mdl_dgw + 0.4832199999999999 * mdl_dkH - 0.11186509426655467 * mdl_dWW + ( 0.36410378449238195 * mdl_cHj3 * mdl_vevhat__exp__2 ) / mdl_LambdaSMEFT__exp__2 + ( 0.17608307708657747 * mdl_cHl3 * mdl_vevhat__exp__2 ) / mdl_LambdaSMEFT__exp__2 + ( 0.1636 * mdl_cHG * mdl_MT__exp__2 * mdl_vevhat__exp__2 ) / ( mdl_LambdaSMEFT__exp__2 * ( -0.5 * mdl_gHgg2 * mdl_MH__exp__2 + mdl_gHgg1 * mdl_MT__exp__2 ) ) + ( mdl_cHW * ( -0.35937785117066967 * mdl_gHaa * mdl_gHza + 0.006164 * mdl_cth * mdl_gHaa * mdl_sth + 0.00454 * mdl_gHza * mdl_sth__exp__2 ) * mdl_vevhat__exp__2 ) / ( mdl_gHaa * mdl_gHza * mdl_LambdaSMEFT__exp__2 ) + ( mdl_cHWB * ( -0.00454 * mdl_cth * mdl_gHza * mdl_sth + mdl_gHaa * ( -0.0030819999999999997 + 0.006163999999999999 * mdl_sth__exp__2 ) ) * mdl_vevhat__exp__2 ) / ( mdl_gHaa * mdl_gHza * mdl_LambdaSMEFT__exp__2 ) + ( mdl_cHB * ( -0.006163999999999999 * mdl_cth * mdl_gHaa * mdl_sth - 0.00454 * mdl_gHza * ( -1. + mdl_sth__exp__2 ) ) * mdl_vevhat__exp__2 ) / ( mdl_gHaa * mdl_gHza * mdl_LambdaSMEFT__exp__2 ) + mdl_dWHc + mdl_dWHb + mdl_dWHta );
      | ^~~~~~
      | mdl_dWH
...

In make HRDCOD=1:

Parameters_SMEFTsim_topU3l_MwScheme_UFO.h:118:29: error: exponent has no digits
  118 | constexpr double mdl_WH = 4.070000e - 03;
      |                           ^~~~~~~~~
Parameters_SMEFTsim_topU3l_MwScheme_UFO.h:381:56: in ‘constexpr’ expansion of ‘Parameters_SMEFTsim_topU3l_MwScheme_UFO::constexpr_pow(2.0e+0, 2.5e-1)’
Parameters_SMEFTsim_topU3l_MwScheme_UFO.h:110:5: error: call to non-‘constexpr’ function ‘void __assert_fail(const char*, const char*, unsigned int, const char*)’
  110 | assert( static_cast<double>( iexp ) == exp ); // NB would fail at compile time with "error: call to non-‘constexpr’ function ‘void __assert_fail'"
      | ^~~~~~
...
Parameters_SMEFTsim_topU3l_MwScheme_UFO.h:385:72: error: ‘ABS’ was not declared in this scope
  385 | constexpr double mdl_propCorr = ABS( mdl_linearPropCorrections ) / ( ABS( mdl_linearPropCorrections ) + mdl_nb__10__exp___m_40 );
      |                                 ^~~
Parameters_SMEFTsim_topU3l_MwScheme_UFO.h:398:49: error: call to non-‘constexpr’ function ‘mgOnGpu::cxsmpl<FP> mgOnGpu::conj(const mgOnGpu::cxsmpl<FP>&) [with FP = double]’
  398 | constexpr cxsmpl<double> mdl_conjg__cbH = conj( mdl_cbH );
      |                                           ~~~~^~~~~~~~~~~
...
…he fixes for madgraph5#701

Now launching fails with a new build error (in cuda) (this was later filed as madgraph5#730 and fixed in a later commit of branch fpe):

HRDCOD=1 tlau/lauX.sh -CPP nobm_pp_ttW

ccache /usr/local/cuda-12.0/bin/nvcc -Xcompiler -fPIC -c -x cu Parameters_sm_no_b_mass.cc -o Parameters_sm_no_b_mass_cu.o
In file included from Parameters_sm_no_b_mass.cc:15:
Parameters_sm_no_b_mass.h:26:2: error: #error This non-SM physics process only supports MGONGPU_HARDCODE_PARAM builds (madgraph5#439): please run "make HRDCOD=1"
   26 | #error This non-SM physics process only supports MGONGPU_HARDCODE_PARAM builds (madgraph5#439): please run "make HRDCOD=1"
      | ^~~~~

Since I want to use CPP only, I retry disabling also CUDA:

CUDA_HOME=none HRDCOD=1 tlau/lauX.sh -CPP nobm_pp_ttW

And... this fixes the IEEE division by zero, but unfortunately it still finds other IEEE exceptions!

INFO: Running Survey
Creating Jobs
Working on SubProcesses
INFO: P1_gu_ttxwpd
INFO: Building madevent in madevent_interface.py with 'CPP' matrix elements
INFO: P1_gd_ttxwmu
Note: The following floating-point exceptions are signalling: IEEE_INVALID_FLAG IEEE_UNDERFLOW_FLAG IEEE_DENORMAL
Note: The following floating-point exceptions are signalling: IEEE_INVALID_FLAG IEEE_UNDERFLOW_FLAG IEEE_DENORMAL
Note: The following floating-point exceptions are signalling: IEEE_INVALID_FLAG IEEE_UNDERFLOW_FLAG IEEE_DENORMAL

In summary: the IEEE_DIVIDE_BY_ZERO part of madgraph5#701 has been fixed, but not the other FPEs... There are THREE IEEE FPEs still pending in pp_ttW.mad: IEEE_INVALID_FLAG, IEEE_UNDERFLOW_FLAG, IEEE_DENORMAL.
… does not build yet, see madgraph5/madgraph4gpu#439 I will merge this anyway as standalone cudacpp for SM physics works fine (there is one exception, uudd fails, see madgraph5/madgraph4gpu#440 - I will fix that a posteriori). Note also that alphas from madevent are still not integrated, and so ggtt.mad fails to build for instance, madgraph5/madgraph4gpu#441. This will be the next big thing.
…andling of float vectors ONLY TO NON-SM PROCESSES
…pply old changes on top

'make HRDCOD=0' fails with:

ccache /cvmfs/sft.cern.ch/lcg/releases/gcc/12.1.0-57c96/x86_64-centos9/bin/g++ -O3 -std=c++17 -I. -fPIC -Wall -Wshadow -Wextra -ffast-math -fopenmp -march=skylake-avx512 -mprefer-vector-width=256 -DMGONGPU_FPTYPE_DOUBLE -DMGONGPU_FPTYPE2_DOUBLE -fPIC -c Parameters_MSSM_SLHA2.cc -o Parameters_MSSM_SLHA2.o
In file included from Parameters_MSSM_SLHA2.cc:15:
Parameters_MSSM_SLHA2.h:26:2: error: #error This non-SM physics process only supports MGONGPU_HARDCODE_PARAM builds (madgraph5#439): please run "make HRDCOD=1"
   26 | #error This non-SM physics process only supports MGONGPU_HARDCODE_PARAM builds (madgraph5#439): please run "make HRDCOD=1"
      | ^~~~~

'make HRDCOD=1' fails with:

ccache /cvmfs/sft.cern.ch/lcg/releases/gcc/12.1.0-57c96/x86_64-centos9/bin/g++ -O3 -std=c++17 -I. -fPIC -Wall -Wshadow -Wextra -ffast-math -fopenmp -march=skylake-avx512 -mprefer-vector-width=256 -DMGONGPU_FPTYPE_DOUBLE -DMGONGPU_FPTYPE2_DOUBLE -DMGONGPU_HARDCODE_PARAM -fPIC -c Parameters_MSSM_SLHA2.cc -o Parameters_MSSM_SLHA2.o
In file included from Parameters_MSSM_SLHA2.cc:15:
Parameters_MSSM_SLHA2.h:403:53: error: call to non-'constexpr' function 'mgOnGpu::cxsmpl<FP> mgOnGpu::conj(const cxsmpl<FP>&) [with FP = double]'
  403 | constexpr cxsmpl<double> mdl_conjg__yu3x3 = conj( mdl_yu3x3 );
      |                                             ~~~~^~~~~~~~~~~~~
…ctors of floats madgraph5#439 - this is not necessary here, but it IS necessary for EFT (see later tests for heft_gg_h.sa)
… of floats madgraph5#439 - builds fail for FPTYPE=f, will revert
In particular:
- HRDCOD=0 builds still always fail (in a way similar to SMEFT builds)
- HRDCOD=1 FPTYPE=d,m builds succeed, with or without the special handling of vector floats
- HRDCOD=1 FPTYPE=f builds succeed with the special handling of vector floats but fail without it
…cial handling for vectors of floats Revert "[susy] in heft_gg_h.sa, try to remove the special handling of vectors of floats madgraph5#439 - builds fail for FPTYPE=f, will revert" This reverts commit 8450ffb.
… of vectors of floats madgraph5#439 (which now applies also to SM processes)
…or the special handling of vectors of floats madgraph5#439 (which now applies also to SM processes)
Hi @oliviermattelaer @roiser I am finally about to merge the alphas PR #434.
As one last check, I wanted to check that heft models also work fine - they do not.
I have fixed the simple stuff (like _sm vs _heft suffixes), but there is one fundamental issue: the code to compute the dependent couplings depends itself on several parameters that I had not exported there. See below a full example.
The problem here is quite fundamental: normally these parameters are available through the Parameters_sm instance, which is somehow available through the CPPProcess instance (things that I think we should review anyway). Anyway, the computation is now part of a device function (host device inline const DependentCouplings_sv computeDependentCouplings_fromG) which can run in a kernel. If all those other parameters must be available in a kernel, one must copy them to device memory, especially if they are read at runtime from a param_card file. The alternative is that they are taken from somewhere hardcoded in the code, where they are put at generation time (this is what I did in the HARDCODED ifdef branch, so you see that in the full files I attach these are available).
My question is,
The latter solution would considerably simplify things.
This is certainly not a priority, but I raise the issue here for reference. I will merge the alphas PR as is, without support for heft...
Thanks!
Andrea
PS Example from my autogenerated code