fix(deps): update dependency aiida-core to >=2.7.2,<=2.7.2 (#88)
Merged
harryswift01 merged 1 commit into main (Jan 22, 2026)
Conversation
Collaborator
Pull Request Test Coverage Report for Build 21256210253 (Details)
💛 - Coveralls
harryswift01 approved these changes (Jan 22, 2026)
This PR contains the following updates:
`>=2.4.0,<=2.6.3` → `>=2.7.2,<=2.7.2`

Release Notes
aiidateam/aiida-core (aiida-core)
v2.7.2 Compare Source
This patch release comes with a number of important bug fixes to the CLI, storage, archive, and transport modules.
Most notably, SQLAlchemy operational errors (affecting archive operations and the QB) and various race conditions (affecting mainly the transport plugins) have been resolved.
We strongly recommend upgrading from prior 2.7.x versions.
Fixes
CLI
- `verdi code list` for codes without computer (#7081) [27b52da2f]
- `verdi code show` output (#7073) [32742c0e0]
- `dump` operations (#7046) [fc00f5dec]
- `-n` option for archive import (#7044) [3a7d440e9]
- `-p` option for `dump` endpoints (#7043) [019172c2d]
- `verdi node show` in dump README files (#6971) [5e4da5b4d]

Storage

- `smarter_in` docstring (#7146) [cc0bb483d]
- `IN` clause to avoid parameter limits (#6998) [8d562b44e]
- `add_nodes` with PSQL backend (#6991) [9bccdc816]

Archive

- `UnboundLocalError` in `ZipfileBackendRepository` (#7129) [166d06c25]
- `sqlite_zip` profile deletion despite original AiiDA archive file missing (#6929) [274ce6717]

Engine

- `JobsList` (#7061) [e79f0a44c]

Transport
Configuration
Devops
v2.7.1 Compare Source
Fixes
Docs
- `contains` and `get_creation_statistics` (#6930) [a392f5c5cc2babddb2c5152989db7009bb53b87d]

v2.7.0 Compare Source
Asynchronous SSH connection (#6626)
Previously, when data transfer with a remote computer was active, the responsible transport plugins blocked further program execution until the communication was completed.
This long-standing limitation left significant room for performance improvement.
With the introduction of the new asynchronous SSH transport plugin (`core.ssh_async`), multiple communications with a remote machine can now happen concurrently.

As an added benefit, when configuring a `Computer` with the `core.ssh_async` transport plugin, it is no longer necessary to manually provide all SSH connection details during the execution of `verdi computer configure core.ssh_async`. Instead, one only needs to provide the hostname as it is given in the `~/.ssh/config` file, and the transport plugin then uses the OpenSSH installation of the OS to configure the `Computer` automatically from the system configuration.

🚀 When `core.ssh_async` outperforms `core.ssh`

`core.ssh_async` offers significant performance gains in scenarios where the worker is blocked by heavy transfer tasks, such as uploading, downloading, or copying large files.

Example: Submitting two WorkGraphs/WorkChains with the following logic:
WorkGraph 1 – Heavy I/O operations
WorkGraph 2 – Lightweight task: `touch file`

Measured time until the second WorkGraph is processed (single worker):

- `core.ssh_async`: only 4 seconds! 🚀🚀🚀🚀 A dramatic improvement!
- `core.ssh`: 108 seconds (the second task waits for the first to finish)

⚖️ When `core.ssh_async` and `core.ssh` perform similarly

For mixed workloads involving numerous uploads and downloads—a common real-world use case—the performance gains depend on the specific conditions.
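The blocking-versus-concurrent behaviour described above can be sketched with plain `asyncio` — this is an illustrative stand-in, not AiiDA code, and the sleep durations are hypothetical placeholders for transfer times:

```python
import asyncio
import time


async def heavy_transfer():
    # Stand-in for a large upload/download; with a blocking transport this
    # would monopolize the worker for its full duration.
    await asyncio.sleep(0.5)


async def lightweight_task():
    # Stand-in for a trivial job such as `touch file`.
    await asyncio.sleep(0.05)


async def run_concurrently():
    start = time.perf_counter()
    # With an async transport both "communications" run concurrently,
    # so the lightweight task finishes long before the heavy transfer.
    heavy = asyncio.create_task(heavy_transfer())
    light = asyncio.create_task(lightweight_task())
    await light
    light_done = time.perf_counter() - start
    await heavy
    total = time.perf_counter() - start
    return light_done, total


light_done, total = asyncio.run(run_concurrently())
print(f"light finished after {light_done:.2f}s; all done after {total:.2f}s")
```

With a blocking transport, the lightweight task could only start after the heavy transfer completed, mirroring the 108-second wait reported above.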
Large file transfers (~1 GB):
`core.ssh_async` typically outperforms due to concurrent upload and download streams. In favorable network conditions, this can nearly double the effective bandwidth.
Test case:
Two WorkGraphs: one uploads 1 GB, the other retrieves 1 GB using `RemoteData`.

- `core.ssh_async`: 120 seconds
- `core.ssh`: 204 seconds

Small file transfers (many small files):
Here, the overhead of managing asynchronous operations can outweigh the benefits.
Test case:
25 WorkGraphs, each transferring several ~1 MB files.
- `core.ssh_async`: 105 seconds
- `core.ssh`: 65 seconds

To conclude, which transport plugin is the best bet depends on your specific application: use `core.ssh_async` for workloads involving large file transfers or when you need to prevent I/O operations from blocking other tasks, but stick with `core.ssh` for scenarios dominated by many small file transfers, where the asynchronous overhead may reduce performance.

Extended dumping support for profiles and groups (#6723)
In version `v2.6.0`, AiiDA introduced the ability to dump processes from the database into a human-readable, structured folder format. Building on this feature, support has now been extended to allow dumping of entire groups and profiles, enabling users to retrieve AiiDA data more easily.
This enhancement is part of our broader roadmap to improve AiiDA's usability—especially for new users—who may find it challenging to construct the appropriate queries to extract data from the database manually.
The functionality is accessible via the `verdi` CLI. Since dumping an entire profile can be a resource- and I/O-intensive operation (for large profiles), significant effort has been made to provide flexible options for fine-tuning which nodes are included in the dump. To avoid accidentally triggering dumping operations on large profiles, no data is dumped by default when no filters (e.g., groups, codes, computers, node `mtime`, etc.) are set. If all data of a profile should be dumped, this must be requested explicitly using the `--all` option, or one must select the subset of data to be included in the dump via the provided filter options. Below is a snippet from the command's help output:
Another key feature is the incremental nature of the command, which ensures that the dumping process synchronizes the output folder with the internal state of AiiDA's DB by gradually adding or removing files on successive executions of the command.
This allows for efficient updates without having to overwrite everything, and is in contrast to AiiDA archive creation, which is a one-shot process.
The behavior can further be adjusted using:
- `--dry-run` (`-n`): simulate the dump without writing any files.
- `--overwrite` (`-o`): fully overwrite the target directory if it already exists.

Finally, the command provides various options to customize the output folder structure, for instance, to reflect the group hierarchy of AiiDA's internal DB state, symlink duplicate calculations (e.g., those contained in multiple groups), create dedicated directories for sub-workflows and calculations of top-level workflows, and more.
These enhancements aim to make data export from AiiDA more robust, customizable, and user-friendly.
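For orientation, invocations of the dump functionality described above could look like the following. This is a hedged sketch: the `verdi profile dump` subcommand name and the `--groups` filter flag are assumptions inferred from the release notes; only `--all`, `--dry-run`/`-n`, and `--overwrite`/`-o` are named in the text. Verify against `verdi --help` on your installation:

```shell
# Sketch only -- subcommand and filter-flag names are assumptions.
verdi profile dump --all                        # explicitly request dumping all profile data
verdi profile dump --groups my_group --dry-run  # filter by group, simulate without writing files
verdi profile dump --overwrite                  # fully overwrite an existing target directory
```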
Stashing (#6746, #6772)
With this feature, you can bundle your data into a (compressed) tar archive during stashing by specifying one of the `stash_mode` options `"tar"`, `"tar.bz2"`, `"tar.gz"`, or `"tar.xz"`. When specifying the stashing operation during the setup of your calculation, compression can be configured as follows:
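The original code snippet is not reproduced on this page. As a hedged sketch of what such a configuration could look like: the option keys `stash_mode`, `target_base`, and `source_list` follow AiiDA's stashing interface, but verify them against your aiida-core version; the paths and file names below are hypothetical.

```python
# Hypothetical stash configuration for a CalcJob; on a real builder this
# dictionary would typically be assigned to builder.metadata.options.stash.
stash_options = {
    "stash_mode": "tar.gz",           # one of "tar", "tar.bz2", "tar.gz", "tar.xz" for compression
    "target_base": "/scratch/stash",  # hypothetical destination directory on the remote machine
    "source_list": ["aiida.out", "output_structure.xyz"],  # hypothetical files to bundle
}
print(stash_options["stash_mode"])
```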
In addition, it was historically only possible to enable stashing if it was instructed before running a generic `CalcJob`. This means that the instruction had to be "attached" to the original `CalcJob` before its execution. If a user realized they needed to stash something only after running the calculation, this was not possible.

With `v2.7.0`, we introduce the new `StashCalculation` `CalcJob`, which is able to perform a stashing operation after a calculation has finished—provenance included! The usage is very similar, and for consistency and user-friendliness, we keep the instructions as part of the metadata. The main input is the `remote_folder` output node (an instance of `RemoteData`) of the source calculation to be stashed.

Forcefully killing processes (#6793)
Prior to version `v2.7.0`, the `verdi process kill` command could hang if a connection to the remote computer could not be established. A new `--force` option has been introduced to terminate a process without waiting for a response from the remote machine.

Note: Using `--force` may result in orphaned jobs on the remote system if the remote job cancellation fails.

We also now cancel the old kill action if it is resent by the user.
This allows the user to adapt the parameters of the exponential backoff mechanism (EBM) applied by AiiDA via `verdi config` and then resend the kill command with the new parameters.

Furthermore, the `timeout` and `wait` options were not behaving correctly, so they have been fixed and merged into a single `timeout` option. Passing `--timeout 0` replicates the `--no-wait` functionality, meaning the command does not block until the action has finished, while passing `--timeout inf` (the default, replicating `--wait` without a `timeout`) makes the command block until a response arrives. For more information, see issue #6524.
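The resulting CLI surface can be sketched as follows. Flag semantics are taken from the notes above; the process PK `1234` is a placeholder:

```shell
verdi process kill --force 1234       # terminate without waiting for the remote machine to respond
verdi process kill --timeout 0 1234   # fire and forget (the old --no-wait behaviour)
verdi process kill --timeout 30 1234  # block for at most 30 seconds
verdi process kill 1234               # default --timeout inf: block until a response arrives
```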
Serialization of ORM nodes (#6723)
AiiDA's Python API provides an object relational mapper (ORM) that abstracts the various entities that can be stored inside the provenance graph (via the SQL database) and the relationships between them.
In most use cases, users use this ORM directly in Python to construct new instances of entities and retrieve existing ones, in order to get access to their data and manipulate it.
A shortcoming of the current ORM is that it is not possible to programmatically introspect the schema of each entity: that is to say, what data each entity stores.
This makes it difficult for external applications to provide interfaces to create and/or retrieve entity instances.
It also makes it difficult to take the data outside of the Python environment since the data would have to be serialized.
However, without a well-defined schema, doing this without an ad-hoc solution is practically impossible.
With the implementation of a pydantic `Model` for each entity, we now allow external applications to programmatically determine the schema of all AiiDA ORM entities and automatically (de)serialize entity instances to and from other data formats, e.g., JSON. An example of how this is done for an AiiDA integer node:
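The original example snippet is not reproduced on this page. As a hedged reconstruction of the schema-introspection part: the `Model` attribute is mentioned in these release notes, and `model_json_schema()` is the standard pydantic v2 method, but the exact (de)serialization method names may differ in your aiida-core version, and running this requires a configured AiiDA profile:

```python
# Hedged sketch -- requires aiida-core >= 2.7 and a configured profile.
from aiida import load_profile, orm

load_profile()

# Programmatically introspect the schema of the Int entity via its pydantic model.
schema = orm.Int.Model.model_json_schema()
print(sorted(schema["properties"]))

node = orm.Int(5)  # a node instance whose data the model describes
```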
For an extensive overview of the implications see AEP 010.
Miscellaneous
- `aiida-core` is now compatible with Python 3.13 (#6600)
- `core.ssh_async` for multiplexing (#6795)
- `RemoteData` extended by member function `get_size_on_disk` (#6584)
- `SinglefileData` extended by constructor `from_bytes` (#6653)
- `verdi group delete` (#6556)
- `verdi storage maintain` shows a progress bar (#6562)
- `compress` & `extract` (#6743)
- `get_creation_statistics` (#6763)
- `contains` (#6619)

Full list of changes for users
Features
- `verdi node graph generate` (#6443) [6d2edc919]
- `output_file` in computer and code export commands (#6486) [9355a9878]
- `verdi storage version` (#6551) [ad1a431f3]
- `from_bytes` classmethod to `orm.SinglefileData` (#6653) [0f0b88a39]
- `get_size_on_disk` method to `RemoteData` (#6584) [02cbe0ceb]
- `contains` filter operator for SQLite (#6619) [aa0aa262a]
- `AsyncTransport` plugin (#6626) [eba6954bf]
- Transport: `compress` & `extract` methods (#6743) [f4c55f5f7]
- CLI: add option `--clean-workdir` to `verdi node delete` (#6756) [c53592850]
- `RemoteStashCompressedData`: new data class, and its deployment in `execmanager.py` to support compressed file formats while stashing [ae49af6c3]
- `get_creation_statistics` for SQLite backend (#6763) [83454c713]
- `StashCalculation`: a new `CalcJob` plugin (#6772) [bc253236d]
- `verdi computer delete` with other prompts [71fc14f3c2a501ff8d704d20df76a297edc8e8bc]
- Transport: add OpenSSH as backend option to `AsyncSshTransport` (#6795) [c7fdf1cfaf50b555698ef039d55b7f295a71644a]

Fixes
- `NotImplementedError` in `verdi calcjob gotocomputer` (#6525) [120c8ac6d]
- QueryBuilder: fix type bugs for PostgreSQL backend (#6658) [53be73730]
- Transport: bug fix in `rename` method (#6735) [f56fcc31c]
- `sqlite_zip`: add a filter to the `tar.extractall` method to be compatible with Python 3.12 (#6770) [b95fd2189]
- `puttree` method for nested folders [834b2942e]
- `local_copy_list` behavior for nested target folder [9fe8d5090]
- `RemoteData` (#6845) [936185b7f]
- `RemoteData` and `RemoteStashCompressedData` (#6844) [2fd4b8931d4abd7799f6c17aec21612a7f85d1b3]
- `process_kill` actions reschedule the cancelation of the scheduler job (#6870) [e768b70383ee605feeafd05862a17b8481447880]
- `filter_size` for importing logs, user, computer (#6889) [ee87c790c998e06a9ce3b7f2f7a49be433bc49c0]
- `--force-kill` option to `--force` in kill action (#6908) [5167e2cbc93dfe1d54f8fa6dd56a53248feb21ae]
- `wait` and `timeout` merged into one CLI option for `verdi process {kill|play|pause}` (#6902) [ec9d53e9cbb270ef45df1d1ef74bddaa4f94c029]
- `aiida_instance` fixture [a4edcf2a2099186b3eec6833a48ba1ded7afced7]
- Transport: bug fix on glob (#6917) [2d8dd7a1c4821c3a01589195c5ed0abac79099b0]

Documentation
- `core` plugins (#6654) [baf8d7c3e]
- `StashCalculation` explanation for RTD (#6861) [85a84fc73c16151ba5320c7a9c7dfafc8c7572ab]
- `verdi process {kill|pause|play}` (#6909) [d79137d2e730e479a9bcf1453cd19d7bf31f479b]

Full list of changes for developers
Source code
- `SshAutoTransport` transport plugin (#6154) [71422eb87]
- `kiwipy` and `plumpy` [d86017f42]
- Manager: catch `TimeoutError` when closing communicator [e91371573]
- `SqliteDosStorage`: make the migrator compatible with SQLite (#6429) [6196dcd3b]
- `psycopg~=3.0` (#6362) [cba6e7c75]
- Scheduler: refactor interface to make it more generic (#6043) [954cbdd3e]
- `.post0` qualifier to version attribute [16b8fe4a0]
- `paramiko~=3.0` (#6559) [c52ec6758]
- `_prepare_yaml` method to `AbstractCode` (#6565) [98ffc331d]
- `sqlite` C-language (#6567) [d0c9572c8]
- `verdi devel launch-multiply-add` [b7c82867b]
- `aiida_profile_clean` to `test_devel.py` [2ed19dcd8]
- Transport & Engine: factor out `getcwd()` & `chdir()` for compatibility with upcoming async transport (#6594) [6f5c35ed1]
- `orm.List`: the `pop` and `index` methods (#6635) [36eab7797]
- `load_profile` and methods in aiida.init importable from the aiida module (#6609) [ec52f4ef3]
- `apparent-size` in the `du` command of `get_size_on_disk` (#6702) [2da3f9600]
- `skip_orm` as the default implementation for `SqlaGroup.add_nodes` and `SqlaGroup.remove_nodes` (#6720) [d2fbf214a]
- `assert_never` to assert that certain parts of the code are never reached [d7c382a24]
- Revert "`SshAutoTransport` transport plugin (#6154)" (#6852) [cf2614fa2]
- `test_execmanager` and `AsyncSshTransport` (#6855) [474e0fabcb4e2331253e10c1fbcd8ba6a323f6d2]
- `defer_build` for `Entity.Model` and `Sealable.Model` (#6867) [cf07e9f9db63a3e484b9df99b70510bee7b17ff7]
- `verdi quicksetup`: the `--db-engine` option (#6906) [8dd094873cefcf0fc342edc676a7689a2300d080]
- `src/aiida/manage/tests` module (#6903) [5a9c9e8c0e899c7b2027cf15178eb0fdecd6ae2e]
- `machine_or_host` to `host` and simplify prompts (#6914) [bf34c953326123f011585ead16a635e761943436]

Tests
- `make_aware` using fixed date [9fe7672f3]
- `which` [d3e9333f5]
- `aiida_profile_factory` (#6893) [43176cba3e39f4d04ef3529cd50d3e368e1d78aa]

Devops
- `test_leak_ssh_calcjob` as nightly (#6521) [a5da4eda1]
- `aiida.orm.utils.remote` (#6503) [2bdcb7f00]
- `sealed` process nodes (#6591) [70572380b]
- `verdi code test` for `InstalledCode` (#6597) [8350df0cb]
- `peter-evans/create-pull-request` (#6576) [dd866ce81]
- `None` process states in `build_call_graph` (#6590) [f74adb94c]
- `sphinx.configuration` key to RTD conf (#6700) [8440416a5]

Configuration
📅 Schedule: Branch creation - Between 08:00 AM and 08:59 AM, Monday through Friday ( * 8 * * 1-5 ) (UTC), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR was generated by Mend Renovate. View the repository job log.