Upsert operations are intended to run concurrently, as the application requires. As such, two concurrently running operations could try to add the same node at the same time; for example, both might try to add a user with the same email address. If they do, one of the transactions will fail with an error indicating that the transaction was aborted.
However, multiple concurrent upsert transactions for the same key may all succeed, resulting in the creation of multiple records, rather than one, for that key. For instance, in 20180216T075315.000-0600.zip, four upserts complete on the same key, resulting in entities 0x01, 0x02, 0x04, and 0x05.
This is easy to reproduce on a fresh cluster with no failures. You can demonstrate this failure mode with Jepsen 07abf364c2364c710f08a6f49392851a94f83c76, on 1.0.3, with lein run test -w upsert.
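The race can be sketched abstractly: if each upsert takes a snapshot read at transaction start and then blindly inserts at commit, two interleaved transactions both observe the key as absent and both insert, yielding duplicate records. A minimal in-memory simulation of this (the names `store`, `begin_upsert`, and `commit` are hypothetical, not Dgraph's API):

```python
next_uid = 0
store = {}  # email -> list of entity uids


def new_uid():
    global next_uid
    next_uid += 1
    return next_uid


def begin_upsert(email):
    # Snapshot read: remember whether the key existed at transaction start.
    return {"email": email, "seen": email in store}


def commit(txn, detect_conflicts):
    email = txn["email"]
    if txn["seen"]:
        return "noop"      # record already existed when the txn read it
    if detect_conflicts and email in store:
        return "aborted"   # another txn inserted the key since our read
    store.setdefault(email, []).append(new_uid())
    return "inserted"


# Without conflict detection on the key, both interleaved upserts insert,
# producing two records for one email -- the failure mode above.
t1, t2 = begin_upsert("a@x.com"), begin_upsert("a@x.com")
r1, r2 = commit(t1, False), commit(t2, False)
print(r1, r2, len(store["a@x.com"]))  # inserted inserted 2

# With commit-time conflict detection, the second commit aborts and exactly
# one record survives -- the documented, intended behavior.
store.clear()
t1, t2 = begin_upsert("b@x.com"), begin_upsert("b@x.com")
r1, r2 = commit(t1, True), commit(t2, True)
print(r1, r2, len(store["b@x.com"]))  # inserted aborted 1
```

The point of the sketch is that snapshot isolation alone does not catch this: the two transactions have no overlapping writes, so the server must explicitly treat the indexed key as part of each transaction's conflict set.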
Yeah, looks like the dev build you gave me earlier passes, with the upsert directive. I've also expanded the upsert test to try hundreds of keys (instead of one), and with several hours of inserts with and without partitions, I can't reproduce the conflict any more. :)
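For context, the "upsert directive" mentioned here refers to Dgraph's `@upsert` schema directive, which tells the server to include the indexed key in a transaction's conflict set so that concurrent upserts on the same value abort rather than both committing. A schema line like the following (the `email` predicate is illustrative) enables it:

```
email: string @index(exact) @upsert .
```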