nix-build yields SQLiteError due to failed constraint when inserting into NARs #3898
Comments
I marked this as stale due to inactivity.

I still see this at times on Nix 2.7.

Just got this with Nix 2.10.3. What do I do?

I didn't see this recently after switching to multi-user. The time when I did see this, I also had a workaround, which was to set a different …

… in parallel (3 concurrent jobs on one builder and 10 on the other) and, I guess, multi-user (as in, the client and the builders all run NixOS). How do you change the execution environment?

I also experienced this error just now. Restarting my CI pipeline worked.

Just hit this too.
I have a suspicion that this is about the …. To hopefully confirm this theory, I'm improving (#7473) the sqlite error message to include the detailed message. This should tell us which constraint is violated. If my theory holds up, I'll open a PR to replace …
I've been hitting this as well and reproduced it with your much improved sqlite error message.
As another data point: we see this very frequently, with nix 2.10, a multi-user nix-daemon install, and up to 2 …
Caught it with the improved error message. No surprises here, so I'll make a PR to work around the apparently broken …. The new info:
Full message:
@roberth nice work! 👏🏾 Any idea or estimate for when a fix will be released? I just encountered this right now.
And is there a workaround that can be easily applied temporarily? |
Strangely enough, I ran the command that uses nix (…
Fixes NixOS#3898 The entire `BinaryCaches` row used to get replaced after it became stale according to the `timestamp` column. In a concurrent scenario, this leads to foreign key conflicts as different instances of the in-process `state.caches` cache now differ, with the consequence that the older process still tries to use the `id` number of the old record. Furthermore, this phenomenon appears to have caused the cache for actual narinfos to be erased about every week, while the default ttl for narinfos was supposed to be 30 days.
This should be fixed by …. Based on the code I've changed, the problem should have occurred at most once per (week, machine or user, cache) combination.
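The race described in the commit message can be sketched in a few lines of SQLite. This is a minimal illustration, not Nix's actual code: the table names mirror Nix's cache schema, but the columns are simplified, and both "processes" are simulated on one connection. "Process A" caches the `id` of a `BinaryCaches` row in memory; "process B" replaces the stale row, which assigns a new id; process A then inserts into `NARs` using the old id and trips the foreign key constraint.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")
# Simplified stand-ins for Nix's binary cache schema (hypothetical columns).
db.execute("""CREATE TABLE BinaryCaches (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    url TEXT UNIQUE NOT NULL,
    timestamp INTEGER NOT NULL)""")
db.execute("""CREATE TABLE NARs (
    cache INTEGER NOT NULL REFERENCES BinaryCaches(id) ON DELETE CASCADE,
    hashPart TEXT NOT NULL)""")

# "Process A": look up the cache row and remember its id in memory.
db.execute("INSERT INTO BinaryCaches (url, timestamp) VALUES ('https://cache.nixos.org', 0)")
stale_id = db.execute(
    "SELECT id FROM BinaryCaches WHERE url = 'https://cache.nixos.org'").fetchone()[0]

# "Process B": the row looks stale, so replace it wholesale.
# AUTOINCREMENT guarantees the replacement gets a fresh, different id.
db.execute("DELETE FROM BinaryCaches WHERE url = 'https://cache.nixos.org'")
db.execute("INSERT INTO BinaryCaches (url, timestamp) VALUES ('https://cache.nixos.org', 1)")

# "Process A" still holds the old id, so its NARs insert violates the FK.
err = None
try:
    db.execute("INSERT INTO NARs (cache, hashPart) VALUES (?, ?)", (stale_id, "abc123"))
except sqlite3.IntegrityError as e:
    err = e
print(err)  # FOREIGN KEY constraint failed
```

The fix merged for this issue avoids replacing the row (and thus invalidating ids held by concurrent processes) when the record becomes stale.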
Fix foreign key error inserting into NARs #3898
(cherry picked from commit 2ceece3)
(cherry picked from commit fb94d5c)
(cherry picked from commit 8449b3c)
Describe the bug
Running two parallel nix-build jobs on a CI pipeline sporadically breaks with an SQLiteError when using nix-2.4pre7805_984e5213.
(/nix/store/i188a271sni33gqxs0nwrkgnpljfhwwf-nix-2.4pre7805_984e5213.drv)
I see errors such as this:
I had not seen this before switching to nixUnstable.
Steps To Reproduce
Unfortunately I cannot reliably reproduce it; it happens a few times every day.
Expected behavior
No SQLite insert statement errors.
Output of `nix-env --version`:

```
nix-env (Nix) 2.4pre7805_984e5213
```