Nexus: Quantum Package interface #1093
The purpose of this PR is to create a Nexus interface to Quantum Package. This interface will enable routine selected CI calculations with QMCPACK.
Saved for a later PR, when the HDF5 route is fully available.
The following is a working example for an H2O molecule:
Contents of H2O.xyz:
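(The file attached to the original PR is not reproduced here; a minimal H2O.xyz in standard XYZ format — atom count, comment line, then one `symbol x y z` line per atom in Angstrom — looks like the following. The geometry below is a generic illustration, not necessarily the one used.)

```
3
H2O molecule
O   0.000000   0.000000   0.000000
H   0.000000   0.757160   0.586260
H   0.000000  -0.757160   0.586260
```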
I can run this on Cooley or Theta at ALCF, as these are the two machines I have access to with a Quantum Package install.
The Quantum Package configuration file needs to be sourced prior to running the script:
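For example (the install path below is hypothetical):

```sh
source /path/to/quantum_package/quantum_package.rc
```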
The Nexus script performs steps equivalent to the following to create the input for Quantum Package and run the HF in a split node configuration:
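As a rough sketch (the basis set choice is illustrative; the command names are those referenced elsewhere in this PR), the equivalent manual steps are:

```sh
# create the ezfio input directory from the xyz file (basis choice illustrative)
qp_create_ezfio_from_xyz H2O.xyz -b cc-pvtz
# validate and complete the input tree
qp_edit -c H2O.ezfio
# run the SCF; on multiple nodes, master and slave instances of qp_run
# are launched separately (the split is handled by Nexus, see below)
qp_run scf H2O.ezfio
```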
Quantum Package's approach to input is somewhat unusual. The "input file" is actually a directory tree, with each input variable cached in a separate ASCII file.
This directory/file structure also represents the internal state of the program, and so subsequent runs, e.g. to perform CI singles or selected CI, require updating input variables stored in these files. Following a run, some of these files/variables change value.
Below is the result of running "tree" on an ezfio input directory:
The nesting corresponds roughly to what Fortran namelists look like, and so in practice the fundamental input structure is not more complicated than e.g. Quantum Espresso's (though working with it is).
The Nexus spec corresponding to this allowed input structure, including types, is found in quantum_package_input.py assigned to the variable "input_specification".
The actual variables (the terminal files in the tree above) have unique names, so in Nexus one is allowed to generate the input by supplying nothing more than a single flat list of input variable names and values to "generate_quantum_package". Nexus' internal representation of the input follows the directory tree, however, with directories labeled as "Section"s and each section containing its keyword/value pairs.
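The routing this enables can be illustrated with a toy sketch (not Nexus code): given a specification that maps each unique variable name to its section, a flat set of keywords can be placed unambiguously.

```python
# toy illustration (not Nexus code): because every variable name is
# unique across sections, a flat list of name/value pairs can be
# routed to the correct section automatically
spec = {  # hypothetical fragment of an input specification
    'n_det_max'     : 'determinants',
    'elec_alpha_num': 'electrons',
    'direct'        : 'integrals_bielec',
}

def route_variables(spec, **kwargs):
    sections = {}
    for name, value in kwargs.items():
        if name not in spec:
            raise ValueError('unrecognized variable: ' + name)
        sections.setdefault(spec[name], {})[name] = value
    return sections
```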
Input writing is done by first calling "qp_create_ezfio_from_xyz" and "qp_edit -c" if the ezfio directory does not exist, then if/once it exists all variables are written to the corresponding files.
In practice, the Nexus inputs contain the deltas for the input required at each step; for HF, for example, the Nexus input file is (print qp.input):
The named sections "ao_basis", "determinants", "electrons", "integrals_bielec", and "integrals_monoelec" correspond directly to ezfio directories and variables. Additionally, "structure" is present to represent the internal state of the ezfio file regarding the atomic structure and "run_control" is added to represent the action-style inputs required to run the code.
Input generation and input representation/read-write are handled by generate_quantum_package_input() and QuantumPackageInput, respectively, in quantum_package_input.py. Any action required to update the input state (calling qp_create_ezfio/qp_edit/qp_set_frozen_core.py and writing variables to files) is handled by QuantumPackageInput.
Actions relating to major simulation functions, like SCF/CIS/FCI (running "qp_run ...") are handled by the QuantumPackage class in quantum_package.py.
The need to run commands in a split fashion (master/slave) when using multiple nodes, or more traditionally in a unified fashion when run on a single node, required extensions to Nexus' job framework, represented by the Job class in machines.py. This is handled primarily by the "split_nodes" function in the Job class, which is used to create unified or split application run commands in the "app_command" function of the QuantumPackage class. Additional extensions are also present there to run CIS in a loop, iteratively generating natural orbitals to form a better starting point for selected CI.
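The idea behind the split can be illustrated with a toy sketch (the real logic lives in Job's "split_nodes" and is more general; the function and command strings below are hypothetical):

```python
# toy illustration of master/slave node splitting (not the actual
# Nexus implementation): return (node_count, command) pairs
def split_run_commands(app_master, app_slave, nodes, master_nodes=1):
    if nodes < 2:
        # single node: one unified run command
        return [(nodes, app_master)]
    # multiple nodes: one master allocation, remainder for slaves
    return [(master_nodes, app_master),
            (nodes - master_nodes, app_slave)]
```

On a single node this yields one unified command; on N nodes it yields a master command on one node and a slave command on the remaining N-1.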
No tests existed for the Machine and Job classes prior to this point, and the extensions required for job splitting presented the possibility that some functionality would not work as expected. I therefore added tests for the Machine and Job classes (including an extensive check performed by the "check_job_idempotency" function, previously resident in machines.py) to the ntest script. Tests there now include explicit checks that simple job launch commands (including for various node and thread counts) are created correctly for all currently supported machines. Tests were then added for the new job-splitting facility.