
atuin sync: thread 'main' panicked at 'failed to decrypt history! check your key: could not encrypt #1199

Closed
sheeeng opened this issue Aug 29, 2023 · 28 comments


@sheeeng

sheeeng commented Aug 29, 2023

I experienced the following error when running atuin sync.

~ atuin sync      
0/0 up/down to record store
thread 'main' panicked at 'failed to decrypt history! check your key: could not encrypt

I have double-checked the following values.

  • cat ~/.local/share/atuin/session has the same value on the two separate systems.
  • echo $ATUIN_SESSION has different values on the two separate systems.

I have tried atuin logout and atuin login -u "${ATUIN_USERNAME}" -p "${ATUIN_PASSWORD}" -k "${ATUIN_KEY}" again on the system that has this issue, but to no avail.

How can we further troubleshoot this issue?

@LiamAEdwards

I am having the same issue.

I have also double-checked the following values.

cat ~/.local/share/atuin/session has the same value on the two separate systems.
echo $ATUIN_SESSION has different values on the two separate systems.

I have also tried logging out and manually logging in using the same details on my local machine (which works and is running Arch) and the remote machine.

I'm trying to run atuin on:

Ubuntu 20.04.6 LTS (Focal Fossa)
in AWS EC2 with full ingress and egress.

Here is a backtrace.

 RUST_BACKTRACE=full atuin sync
0/0 up/down to record store
thread 'main' panicked at 'failed to decrypt history! check your key: could not encrypt

Location:
    /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/atuin-client-16.0.0/src/encryption.rs:132:22', /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/atuin-client-16.0.0/src/sync.rs:74:38
stack backtrace:
   0:     0x55709be4e051 - std::backtrace_rs::backtrace::libunwind::trace::he648b5c8dd376705
                               at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/../../backtrace/src/backtrace/libunwind.rs:93:5
   1:     0x55709be4e051 - std::backtrace_rs::backtrace::trace_unsynchronized::h5da3e203eef39e9f
                               at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5
   2:     0x55709be4e051 - std::sys_common::backtrace::_print_fmt::h8d28d3f20588ae4c
                               at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/sys_common/backtrace.rs:65:5
   3:     0x55709be4e051 - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::hd9a5b0c9c6b058c0
                               at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/sys_common/backtrace.rs:44:22
   4:     0x55709be7d8df - core::fmt::rt::Argument::fmt::h0afc04119f252b53
                               at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/fmt/rt.rs:138:9
   5:     0x55709be7d8df - core::fmt::write::h50b1b3e73851a6fe
                               at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/fmt/mod.rs:1094:21
   6:     0x55709be49cf7 - std::io::Write::write_fmt::h184eaf275e4484f0
                               at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/io/mod.rs:1714:15
   7:     0x55709be4de65 - std::sys_common::backtrace::_print::hf58c3a5a25090e71
                               at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/sys_common/backtrace.rs:47:5
   8:     0x55709be4de65 - std::sys_common::backtrace::print::hb9cf0a7c7f077819
                               at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/sys_common/backtrace.rs:34:9
   9:     0x55709be4f393 - std::panicking::default_hook::{{closure}}::h066adb2e3f3e2c07
                               at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panicking.rs:269:22
  10:     0x55709be4f124 - std::panicking::default_hook::h277fa2776900ff14
                               at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panicking.rs:288:9
  11:     0x55709be4f919 - std::panicking::rust_panic_with_hook::hceaf38da6d9db792
                               at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panicking.rs:705:13
  12:     0x55709be4f817 - std::panicking::begin_panic_handler::{{closure}}::h2bce3ed2516af7df
                               at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panicking.rs:597:13
  13:     0x55709be4e4b6 - std::sys_common::backtrace::__rust_end_short_backtrace::h090f3faf8f98a395
                               at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/sys_common/backtrace.rs:151:18
  14:     0x55709be4f562 - rust_begin_unwind
                               at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panicking.rs:593:5
  15:     0x55709b316ed3 - core::panicking::panic_fmt::h4ec8274704d163a3
                               at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/panicking.rs:67:14
  16:     0x55709b317373 - core::result::unwrap_failed::h170bc2721a6c6ff2
                               at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/result.rs:1651:5
  17:     0x55709b47662d - <core::iter::adapters::map::Map<I,F> as core::iter::traits::iterator::Iterator>::fold::h7e574e63c423e289
  18:     0x55709b403757 - <alloc::vec::Vec<T> as alloc::vec::spec_from_iter::SpecFromIter<T,I>>::from_iter::h440602780d28d88c
  19:     0x55709b56d718 - atuin_client::sync::sync::{{closure}}::h1def9375b7bbbea7
  20:     0x55709b582a21 - atuin::command::client::sync::run::{{closure}}::h3d9c1c5bdea6084a
  21:     0x55709b589a97 - atuin::command::client::Cmd::run::{{closure}}::hf5764c6026b8791a
  22:     0x55709b3fe7cb - tokio::runtime::scheduler::current_thread::Context::enter::h9af1b47baf3dc831
  23:     0x55709b5547d9 - tokio::runtime::context::scoped::Scoped<T>::set::h78c78447aff5a999
  24:     0x55709b37e6c0 - tokio::runtime::context::set_scheduler::h2b57e0537fd4417c
  25:     0x55709b3fefdc - tokio::runtime::scheduler::current_thread::CoreGuard::block_on::hfc74648f80cf9271
  26:     0x55709b4ea2fc - tokio::runtime::context::runtime::enter_runtime::ha08fea4c576562d4
  27:     0x55709b527230 - tokio::runtime::runtime::Runtime::block_on::h454302e1b5caa17b
  28:     0x55709b55b77a - atuin::command::AtuinCmd::run::h263e208ab776cca4
  29:     0x55709b434128 - atuin::main::hcac5527dd26b44fe
  30:     0x55709b3bdff3 - std::sys_common::backtrace::__rust_begin_short_backtrace::h7b559fe6fab15ed3
  31:     0x55709b411e0d - std::rt::lang_start::{{closure}}::h04beb3247dab1d94
  32:     0x55709be4290b - core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &F>::call_once::h75ba4244a1c7bb54
                               at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/ops/function.rs:284:13
  33:     0x55709be4290b - std::panicking::try::do_call::h0a2baa36dea975a1
                               at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panicking.rs:500:40
  34:     0x55709be4290b - std::panicking::try::h0e42aa233d4224d4
                               at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panicking.rs:464:19
  35:     0x55709be4290b - std::panic::catch_unwind::hefdfd8f482606434
                               at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panic.rs:142:14
  36:     0x55709be4290b - std::rt::lang_start_internal::{{closure}}::h457959f0f91da23b
                               at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/rt.rs:148:48
  37:     0x55709be4290b - std::panicking::try::do_call::h112cfd1acb38183b
                               at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panicking.rs:500:40
  38:     0x55709be4290b - std::panicking::try::ha64f15b20cec18ca
                               at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panicking.rs:464:19
  39:     0x55709be4290b - std::panic::catch_unwind::hbacc2b68ee2c119e
                               at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panic.rs:142:14
  40:     0x55709be4290b - std::rt::lang_start_internal::h5f408694586c2a05
                               at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/rt.rs:148:20
  41:     0x55709b436995 - main
  42:     0x7f59c8b90083 - __libc_start_main
  43:     0x55709b31767e - _start
  44:                0x0 - <unknown>

I am deploying atuin using Ansible so I can sync up multiple devices here - https://github.com/LiamAEdwards/Deploying-Atuin-Ansible

@StevenXL

I am having this issue as well.

@Kamek437

Me as well. It's been this way for a pretty long time, actually. My Rust is really rusty, so I don't know... Hopefully somebody can help here.

@alerque
Contributor

alerque commented Oct 4, 2023

I've been hitting this persistently for months as well, across several versions now. I've been hacking around in the code, dumping ciphertexts and keys to try to get more info, to no avail.

Almost any pairing of 2 machines I've tried synchronizes fine; any of the several third machines I've tried to mix in fails with this sync error, even after successfully authenticating and showing identical keys.

@LiamAEdwards

Would love to see a fix for this, as it's effectively stopped me from using Atuin on more than one device.

@ellie
Member

ellie commented Oct 8, 2023

So this isn't an error related to the session. That's just your auth with the sync server and has nothing to do with encryption.

Does the output of "atuin key" match across all machines? What's the output of "atuin status"?

If any machine has ever had a key with a typo or similar and uploaded invalid history, you'll need to recreate your account.

Otherwise, maybe try copying the ~/.local/share/atuin/key file to a new machine.

Our key management currently relies on the user and has no server-side checks. We're addressing it in a new version of sync, but that's a rather large change and taking a while.
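For anyone who wants to run those checks quickly, here is a minimal sketch; it only uses commands already mentioned in this thread, and "good-host" is a placeholder for a machine that syncs correctly:

# on each machine, compare the key without pasting it around
atuin key | sha256sum

# check local vs. remote state
atuin status

# if the keys differ, back up the key file, then replace it with the one from a working machine
cp ~/.local/share/atuin/key ~/.local/share/atuin/key.bak
scp good-host:.local/share/atuin/key ~/.local/share/atuin/key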

@LiamAEdwards

I'll get back to you on that. I'm not currently at the devices I was using at the time, but last time I checked, atuin key was exactly the same; I even had to copy and paste the keys across to compare and make sure.

If any machine has ever had a key with a typo or similar and uploaded invalid history, you'll need to recreate your account

By that, do you mean recreating the account for the device that uploaded invalid history? And what makes history invalid?

I'll try copying the key and taking another look; it's been a while.

Thank you again for looking into this :)

@domglusk

Chiming in as well. I've tested via the OpenSUSE TW repos, cargo install, and nixpkgs.atuin, and get the same error. At first the problem was that my key wasn't in base64; after creating a base64 one, I put the key in a txt file in the same location on both computers, and it still says it can't encrypt when running atuin sync.

@ellie
Member

ellie commented Oct 10, 2023

Chiming in as well. I've tested via the OpenSUSE TW repos, cargo install, and nixpkgs.atuin, and get the same error. At first the problem was that my key wasn't in base64; after creating a base64 one, I put the key in a txt file in the same location on both computers, and it still says it can't encrypt when running atuin sync.

What happens if you set your key by copying the output of "atuin key" from another machine and pasting it in when "atuin login" asks you to?

There are so many things that can go wrong if you're messing with the key file manually. Otherwise it would be ok to directly copy it across machines
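Concretely, that suggestion amounts to something like the following sketch; the key can either be pasted interactively when atuin login asks, or passed with -k as earlier in this thread (username, password, and key values are placeholders):

# on a machine that syncs correctly
atuin key

# on the failing machine
atuin logout
atuin login -u "<username>" -p "<password>" -k "<key copied from the working machine>"
atuin sync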

@alerque
Contributor

alerque commented Oct 10, 2023

What happens if you set your key by copying the output of "atuin key" from another machine and pasting it in when "atuin login" asks you to?

Nothing different; I've tried this several times, including again just now. Whether the key comes from a copied file or from pasting the output of atuin key, the result is the same.

If any machine has ever had a key with a typo or similar and uploaded invalid history, you'll need to recreate your account

I've tried this too. I ran into the 3-system issue right away and immediately thought I must have done something wrong, so I deleted my account from the server, nuked all local keys, sessions, and dbs, and started again. I ran into the same problem again, even though the second time through I was using two out of three different systems. In the end I just started using it with the 2 working systems, and now I'm fairly invested in having a large history that I can use in predictable ways.

What's the output of "atuin status"?

From one of the working 2 machines it looks like this:

$ atuin status
[Local]
Sync frequency: 10m
Last sync: 2023-10-10 06:10:09.918222001 UTC
History count: 57565

[Remote]
Address: https://api.atuin.sh
Username: alerque
History count: 57641

From a machine failing to sync, it looks more like this:

$ atuin status
[Local]
Sync frequency: 10m
Last sync: 1970-01-01 00:00:00 UTC
History count: 15618

[Remote]
Address: https://api.atuin.sh
Username: alerque
History count: 57641

Both systems have the same key/session, as can be verified with the checksums being identical:

$ sha256sum ~/.local/share/atuin/{key,session}
93e0bbd38ef66a9603c5da4b303b9b31c439c62f656c5141f1740004fec2cf98  /home/caleb/.local/share/atuin/key
b46e1c2c1b32506c06245346a0e84c6fffa30f393f0b9e948a4c0f3893c584b0  /home/caleb/.local/share/atuin/session

@conradludgate
Collaborator

Very strange. Sorry for the trouble. I'll make a branch with some testing ideas later today, if you're OK with installing it and trying it out? I've had to debug this once before for myself - in my case I had messed up the encoding while testing and had a few entries that were not decodable with later code

We have a new implementation of sync on the way which should fix all of these problems for good

@alerque
Contributor

alerque commented Oct 10, 2023

Sure. I can build and test from a branch with any debug info that interests you. I currently have access to both working and failing machines.

@domglusk

Chiming in as well. I've tested via the OpenSUSE TW repos, cargo install, and nixpkgs.atuin, and get the same error. At first the problem was that my key wasn't in base64; after creating a base64 one, I put the key in a txt file in the same location on both computers, and it still says it can't encrypt when running atuin sync.

What happens if you set your key by copying the output of "atuin key" from another machine and pasting it in when "atuin login" asks you to?

There are so many things that can go wrong if you're messing with the key file manually. Otherwise it would be ok to directly copy it across machines

That's what I've been doing. I've been sending it over with Syncthing and moving it to another location I've defined in the config, just in case there are any sync errors.

@sheeeng
Author

sheeeng commented Oct 11, 2023

Both systems have the same key/session, as can be verified with the checksums being identical:

$ sha256sum ~/.local/share/atuin/{key,session}
93e0bbd38ef66a9603c5da4b303b9b31c439c62f656c5141f1740004fec2cf98  /home/caleb/.local/share/atuin/key
b46e1c2c1b32506c06245346a0e84c6fffa30f393f0b9e948a4c0f3893c584b0  /home/caleb/.local/share/atuin/session

I experienced a similar situation too.

On the MacBook:

~ $ system_profiler SPHardwareDataType | grep "Model Identifier"
      Model Identifier: MacBookPro16,1
~ $ sha256sum ~/.local/share/atuin/{key,session}
ffd420ca24b08716089f4c971f543ca89066b34d03f4e390fd53813cb436e0ee  /Users/lssl/.local/share/atuin/key
caa8d00ba49fe131bf3cbf05b7f1b53da9b134e1528fc34ba00ab58e18d58e6c  /Users/lssl/.local/share/atuin/session
~ $ atuin status
[Local]
Sync frequency: 10m
Last sync: 2023-10-11 20:34:24.971731 UTC
History count: 23774

[Remote]
Address: https://api.atuin.sh
Username: sheeeng
History count: 23909
~ $ atuin sync
0/0 up/down to record store
Sync complete! 23775 items in history database, force: false

On the other machine:

~ hostname
eb840
~ sha256sum ~/.local/share/atuin/{key,session}
ffd420ca24b08716089f4c971f543ca89066b34d03f4e390fd53813cb436e0ee  /home/leonard/.local/share/atuin/key
caa8d00ba49fe131bf3cbf05b7f1b53da9b134e1528fc34ba00ab58e18d58e6c  /home/leonard/.local/share/atuin/session
~ atuin status
[Local]
Sync frequency: 10m
Last sync: 1970-01-01 00:00:00 UTC
History count: 4434

[Remote]
Address: https://api.atuin.sh/
Username: sheeeng
History count: 23909
~ atuin sync
0/0 up/down to record store
thread 'main' panicked at 'failed to decrypt history! check your key: could not encrypt

Location:
    /home/leonard/.cargo/registry/src/index.crates.io-6f17d22bba15001f/atuin-client-16.0.0/src/encryption.rs:132:22', /home/leonard/.cargo/registry/src/index.crates.io-6f17d22bba15001f/atuin-client-16.0.0/src/sync.rs:74:38
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

@saturated-white

saturated-white commented Oct 13, 2023

I'm having the same issue: same trace, and I've checked/tried/verified pretty much everything the others did, BUT then I discovered something that's maybe related (or maybe it's a completely separate issue):

I installed v16.0.0 on three computers and connected all three instances to a self-hosted Atuin server (ghcr.io/atuinsh/atuin:16.0.0), but when I checked the sessions table, I could only find one entry in that table. Therefore, could it be that atuin sync gets an HTTP 403 body instead, tries to decrypt that, and (obviously) fails?

PS: On an atuin login I get a successful "Logged in!" response.

@ellie
Member

ellie commented Oct 13, 2023

@saturated-white could you try running your server with RUST_LOG=debug and ATUIN_LOG=debug + share the logs with us here please?

It's not going to be an auth-related issue, but could potentially be related to a server side response
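For reference, a rough sketch of enabling those logs; the exact invocation depends on how the server is deployed, so treat the container name and the elided options as placeholders and reuse whatever your existing run command is:

# if running the server binary directly
RUST_LOG=debug ATUIN_LOG=debug atuin server start

# if running the container mentioned above, add the variables to the existing run command
docker run --name atuin-server \
  -e RUST_LOG=debug \
  -e ATUIN_LOG=debug \
  <existing port/volume/database options> \
  ghcr.io/atuinsh/atuin:16.0.0
docker logs -f atuin-server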

@saturated-white

saturated-white commented Oct 14, 2023

Sure @ellie! Due to its size, I added it as an attachment: atuin-server_PLAIN.log

It's likely self-explanatory, but just in case, here is what I did:

  1. Created a new container w/ your provided environment variables.
  2. Called atuin sync from a machine where it throws the error.
  3. Called atuin sync from the machine where it works.

UPDATE:

Since I did not spot anything suspicious in my server log file, I tried the following:

I checked the history table for synced computers and realized that the 3rd computer never created any history entry.
Afterward, I cleared all history entries from the 2nd computer, which "fixed" the sync for all computers.

But now I have a new problem: When a computer tries to upload a newly entered command to the server, the server receives:

2023-10-14T11:20:17.511125Z DEBUG hyper::proto::h1::io: parsed 6 headers
2023-10-14T11:20:17.511148Z DEBUG hyper::proto::h1::conn: incoming body is empty

Noteworthy side-effects:

  • Atuin correctly adds the entry to the local history.
  • After an atuin import auto + atuin sync, the new entries are listed on the other computers.

PS: I backed up the original history table, if anybody needs me to debug anything further on it.

@FlareFlo

FlareFlo commented Nov 8, 2023

The order in which one syncs their devices (for the first time) seems to be what causes this issue. A total of three devices of mine were involved: one synced fine, and the other two threw the same error.
Deleting my account and recreating it on one of the two devices (which panicked before) seemingly fixed the issue; all 3 devices sync perfectly now.

@kimonoki

The order in which one syncs their devices (for the first time) seems to be what causes this issue. A total of three devices of mine were involved: one synced fine, and the other two threw the same error. Deleting my account and recreating it on one of the two devices (which panicked before) seemingly fixed the issue; all 3 devices sync perfectly now.

I found it to be an ordering problem too.

@ellie
Member

ellie commented Dec 11, 2023

Have you got any examples of ordering that does/doesn't work? 🤔

Fwiw, I'm totally reworking sync in #1400, which will be opt-in for v18. It should be much, much less sensitive to this kind of thing 🙏

@ellie
Member

ellie commented Feb 13, 2024

If anyone experiencing this issue could try the new sync, released as opt-in for v18, that would be great!

Requirements

  1. All clients will need to be running the same version (>=v18)
  2. Any servers will need to be running at least v18

Setup

  1. Add this to the bottom of your ~/.config/atuin/config.toml

     [sync]
     records = true

  2. Run atuin sync

  3. If prompted by (2), run atuin history init-store to import old data to the new sync

At any time, run atuin store status to see what’s going on with your stores.

You will need to repeat these steps on every machine running Atuin (a consolidated sketch of the commands follows below).
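Put together, the setup boils down to something like this sketch on each machine (appending with printf is just one way to edit the config; adjust the path if your config lives elsewhere):

# 1. enable the new sync
printf '\n[sync]\nrecords = true\n' >> ~/.config/atuin/config.toml

# 2. sync, and import old data into the new store if prompted
atuin sync
atuin history init-store

# 3. inspect the store at any time
atuin store status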

Troubleshooting

If you run into any error about keys with the new sync, please run atuin store verify and report back.

More info: https://forum.atuin.sh/t/sync-v2-testing/124

@alerque
Contributor

alerque commented Feb 13, 2024

I've just migrated most of my hosts. Migration for me is a bit complicated because, for a long time now, I've been coping with the inability to connect more than 2 hosts by cheating hard: I have 2 host IDs that I clone between machines, and all my systems were self-identifying as one of those two hosts (the closest match of server vs. workstation, so the "HOST" filtering had some limited usefulness). This has been working to keep quite a few machines in sync, but obviously, to actually test whether this is working, I needed to untangle that. I've kept 1 system each from the two host IDs and blown the rest away, then restored the key file only, then logged in and synced again. So far I have 4 hosts hooked up and migrated to the v2 records. I actually have 4 hosts identified in the store status now as well as all the old history. This is better than I've ever been able to accomplish before v18 + v2 records.

@alerque
Contributor

alerque commented Feb 13, 2024

Ouch, I spoke too soon. I just ran into the error again connecting a 5th host, having wiped out its previous local data. The login went fine, but the first sync threw a decrypt error again. Worse, it has propagated to other hosts: the first sync from each of the other 4 after that 5th host connected threw the error too. Unlike my experience with v17, however, subsequent syncs are not throwing the error. Verifying the store, however, does still panic:

$ atuin store verify
Verifying local store can be decrypted with the current key
Failed to verify local store encryption: attempting to decrypt with incorrect key. currently using k4.lid.GjxE3O4OsA9asnqMkJmYl8vxDlqYl9AjxQ_gPZOwqvx8, expecting k4.lid.Cuot1SH3Y0Nk1VwBklmcvm8qJ9q8LSNpjcVrFzEUWC3g

Location:
    atuin-client/src/record/encryption.rs:132:9

The 5th host that I initially tried to sync currently succeeds in syncing, but atuin history init-store fails with the same error that the other hosts see above when verifying the store.

@ellie
Member

ellie commented Feb 13, 2024

So far I have 4 hosts hooked up and migrated to the v2 records. I actually have 4 hosts identified in the store status now as well as all the old history. This is better than I've ever been able to accomplish before v18 + v2 records.

Glad it got this far. Fwiw, the hostid used for the store does not have any relation to what's within the history itself. I've tried to decouple the two as much as possible to reduce brittleness.

Failed to verify local store encryption: attempting to decrypt with incorrect key.

Afaik this occurs for one of two reasons, as indicated by the key IDs shown:

  1. At some point in the past, this machine had a different key from the others, so it was writing history with the wrong key
  2. At present, this machine has the wrong key

Once you've verified that all keys on all machines are correct, you can resolve the error with the following operations. It'll make life easier if you disable auto_sync temporarily.

To be extra safe, make a copy of ~/.local/share/atuin.

  1. atuin store purge - this will delete all records in the store that cannot be decrypted with the current key
  2. atuin store verify - verify that the previous operation was successful
  3. atuin store push --force - this will delete all records stored remotely, and then push up local data. Run this on the machine that has been purged
  4. atuin store pull --force - this does the opposite of (3): delete all local data in the store, and pull from the remote
  5. atuin store rebuild history - ensure your history.db is up to date after all these operations

The idea behind the recovery is to correct the store on one machine, and then ensure that all others match it. While it does delete data, it only deletes data that cannot be decrypted. (A consolidated sketch of these commands follows below.)
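As a consolidated sketch, assuming auto_sync is temporarily disabled and ~/.local/share/atuin has been backed up on each machine first:

# on the machine whose store you are keeping
atuin store purge            # delete records that cannot be decrypted with the current key
atuin store verify           # confirm the purge worked
atuin store push --force     # wipe the remote store, then push this machine's data
atuin store rebuild history  # bring history.db up to date

# on every other machine
atuin store pull --force     # wipe the local store, then pull from the remote
atuin store rebuild history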

@alerque
Contributor

alerque commented Feb 13, 2024

Thanks for the walk-through on the purge process. That right there seems to be the missing magic for recovering from bad scenarios that v1 sync never had.

I now have 5 machines all connected with the purged record based history and their stores verify.

Somewhat to my confusion, atuin store status is now showing 6 hosts, 1 of them of course not being the "current host" anywhere. Guessing from timestamps, this was probably from a failed attempt with the 3rd machine during the nuke-and-pave I did to get rid of cloned hosts earlier. Is there a way to purge entries marked with a specific host, just to keep things tidy, since it only has a couple of useless entries and won't be coming back? No great harm in it staying, I guess; it's just a bit confusing to identify.

In any event at least for my problematic cases there is now a way out to a clean usage with the record based store. 💯

@ellie
Member

ellie commented Feb 13, 2024

No worries! Happy to hear that sorted it for you.

1 of them of course not being the "current host" anywhere.

There's a ~/.local/share/atuin/host_id file that tracks the ID, so your cleanup of some cloned hosts might have caused it. There's not currently a way to clean it up, but I am planning on making the output of the status command a bit neater so it is less likely to be confusing.

@saturated-white

Thank you @ellie! - The new opt-in syncing algorithm in v18 solved the issue for me.

@ellie
Member

ellie commented Feb 16, 2024

Great! Glad it worked for you. Seems like this fixed it for a bunch of people, so I'll close this issue

@ellie ellie closed this as completed Feb 16, 2024