
Removing multiple keys for MapM with >= 1024 entries panics #4

Closed
kskalski opened this issue Dec 23, 2022 · 5 comments
I found a scenario where initializing map content and then updating it (removals only) causes a panic. The repro is here:
https://github.com/kskalski/snippets/blob/master/immutable_chunklist_bug/src/main.rs

Roughly, the code looks like this:

fn main() {
    let initial: Vec<(u64, _)> = vec![(695931000000, 344), ...];
    let update = vec![(1933094000000, 0), ...];
    println!("init {:?}, update {:?}", initial.len(), update.len());
    let map = immutable_chunkmap::map::MapM::from_iter(initial.into_iter());
    map.update_many(update, |_, _, _| None);
}

This ends up with:

     Running `target/debug/immutable_chunklist_bug`
init 1024, update 16
thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', /home/nazgul/.cargo/registry/src/github.com-1ecc6299db9ec823/immutable-chunkmap-1.0.4/src/avl.rs:672:53
stack backtrace:
   0: rust_begin_unwind
             at /rustc/a55dd71d5fb0ec5a6a3a9e8c27b2127ba491ce52/library/std/src/panicking.rs:584:5
   1: core::panicking::panic_fmt
             at /rustc/a55dd71d5fb0ec5a6a3a9e8c27b2127ba491ce52/library/core/src/panicking.rs:142:14
   2: core::panicking::panic
             at /rustc/a55dd71d5fb0ec5a6a3a9e8c27b2127ba491ce52/library/core/src/panicking.rs:48:5
   3: core::option::Option<T>::unwrap
             at /rustc/a55dd71d5fb0ec5a6a3a9e8c27b2127ba491ce52/library/core/src/option.rs:775:21
   4: immutable_chunkmap::avl::Tree<K,V,_>::create
             at /home/nazgul/.cargo/registry/src/github.com-1ecc6299db9ec823/immutable-chunkmap-1.0.4/src/avl.rs:672:34
   5: immutable_chunkmap::avl::Tree<K,V,_>::update_chunk
             at /home/nazgul/.cargo/registry/src/github.com-1ecc6299db9ec823/immutable-chunkmap-1.0.4/src/avl.rs:792:17
   6: immutable_chunkmap::avl::Tree<K,V,_>::update_chunk
             at /home/nazgul/.cargo/registry/src/github.com-1ecc6299db9ec823/immutable-chunkmap-1.0.4/src/avl.rs:826:33
   7: immutable_chunkmap::avl::Tree<K,V,_>::update_chunk
             at /home/nazgul/.cargo/registry/src/github.com-1ecc6299db9ec823/immutable-chunkmap-1.0.4/src/avl.rs:826:33
   8: immutable_chunkmap::avl::Tree<K,V,_>::update_many
             at /home/nazgul/.cargo/registry/src/github.com-1ecc6299db9ec823/immutable-chunkmap-1.0.4/src/avl.rs:861:17
   9: immutable_chunkmap::map::Map<K,V,_>::update_many
             at /home/nazgul/.cargo/registry/src/github.com-1ecc6299db9ec823/immutable-chunkmap-1.0.4/src/map.rs:376:13
  10: immutable_chunklist_bug::main
             at ./src/main.rs:6:5
  11: core::ops::function::FnOnce::call_once
             at /rustc/a55dd71d5fb0ec5a6a3a9e8c27b2127ba491ce52/library/core/src/ops/function.rs:248:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.

Feeding it any shorter initial set or update list, or spreading the keys out more widely, seems to make the panic go away.

@estokes (Owner) commented Dec 23, 2022

Thank you very much for taking the time to report this and provide a test case. I think I have found the bug, and I am working on adding a test to the automated test suite to check for this case. As soon as I'm satisfied that I've actually found and fixed the bug I will release a new version.

@estokes (Owner) commented Dec 23, 2022

None of the keys in your update actually exist in the map, which is what triggered the bug; the model check never considered that case 🤦 .

  • the bug is fixed
  • the tests are updated to consider the case of update_many called with a disjoint key set
  • a new crate is pushed to crates.io

My sincere apologies for this bug.
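For reference, the contract of update_many can be sketched with std types. This is a hypothetical stand-in over BTreeMap, not the crate's chunk-replacing AVL implementation; it assumes the closure sees the current entry (if any) and returns Some((k, v)) to insert/replace or None to remove, which is a no-op for a key that is absent, as in the disjoint-key case above:

```rust
use std::collections::BTreeMap;

// Hypothetical std-only model of update_many's contract.
fn update_many<F>(map: &mut BTreeMap<u64, u64>, updates: Vec<(u64, u64)>, mut f: F)
where
    F: FnMut(u64, u64, Option<(&u64, &u64)>) -> Option<(u64, u64)>,
{
    for (k, d) in updates {
        // The closure sees the key, the update datum, and the current entry.
        let cur = map.get_key_value(&k);
        match f(k, d, cur) {
            Some((k, v)) => {
                map.insert(k, v); // insert or replace
            }
            None => {
                map.remove(&k); // remove; no-op if the key is absent
            }
        }
    }
}

fn main() {
    let mut m: BTreeMap<u64, u64> = (0..4).map(|i| (i, i)).collect();
    // Keys 100 and 101 are disjoint from the map; returning None must
    // leave the map unchanged rather than panic.
    update_many(&mut m, vec![(100, 0), (101, 0)], |_, _, _| None);
    assert_eq!(m.len(), 4);
    println!("len after disjoint removals: {}", m.len());
}
```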

@kskalski (Author) commented Dec 24, 2022

Thanks, it works now.

FYI, as a side note: it turned out that in my use case (~6000 elements in the map on average, with a typical update being a sorted sequence of ~22 removals and modifications), update_many is actually slower (a 25% slowdown of my total workload) than a series of remove_cow and insert_cow calls... Probably because those updates mostly run on a uniquely owned version of the map (I periodically spin off a clone, but it is discarded before such a larger update), so copy-on-write never actually needs to copy. Given that, I wonder what something like update_many_cow would achieve. ;-)
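The "no copies when uniquely owned" effect described above can be sketched with std types. Here remove_cow is a hypothetical stand-in built on Arc::make_mut over a BTreeMap, not the crate's implementation:

```rust
use std::collections::BTreeMap;
use std::sync::Arc;

// Hypothetical COW removal: Arc::make_mut clones the inner map only when
// the Arc is shared; on a uniquely owned Arc it mutates in place, no copy.
fn remove_cow(map: &mut Arc<BTreeMap<u64, u64>>, k: &u64) -> Option<u64> {
    Arc::make_mut(map).remove(k)
}

fn main() {
    let mut map: Arc<BTreeMap<u64, u64>> =
        Arc::new((0..8).map(|i| (i, i * 10)).collect());

    let snapshot = map.clone(); // shared: the next write copies once
    remove_cow(&mut map, &3);
    assert_eq!(map.len(), 7);
    assert_eq!(snapshot.len(), 8); // the snapshot is unaffected

    drop(snapshot); // unique again: further writes are in place
    remove_cow(&mut map, &4);
    assert_eq!(map.len(), 6);
    println!("final len: {}", map.len());
}
```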

@estokes (Owner) commented Dec 24, 2022

It's unlikely it would be much faster. update_cow uses mutation, and for your case nothing is going to beat that. update_many can really fly when it is able to replace entire chunks at a time, but if individual elements within chunks are being updated, mutation will always be faster.
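The chunk-replacement idea above can be illustrated with a toy model (hypothetical, not the crate's AVL tree): when a batch of updates covers a whole chunk, the new chunk is built once and a single Arc pointer is swapped, versus the COW path that may have to copy a shared chunk before touching one element:

```rust
use std::sync::Arc;

// Toy chunked store: data lives in Arc'd chunks.
struct Chunked {
    chunks: Vec<Arc<Vec<u64>>>,
}

impl Chunked {
    // Batch path: build the replacement chunk once and swap the pointer.
    fn replace_chunk(&mut self, i: usize, new: Vec<u64>) {
        self.chunks[i] = Arc::new(new);
    }

    // COW path: Arc::make_mut copies the chunk only if it is shared,
    // then mutates one element in place.
    fn set(&mut self, i: usize, j: usize, v: u64) {
        Arc::make_mut(&mut self.chunks[i])[j] = v;
    }
}

fn main() {
    let mut c = Chunked {
        chunks: vec![Arc::new(vec![1, 2]), Arc::new(vec![3, 4])],
    };
    let snapshot = c.chunks.clone(); // share every chunk

    c.replace_chunk(0, vec![9, 9]); // one pointer swap, snapshot untouched
    c.set(1, 0, 7);                 // chunk is shared, so it is copied first

    assert_eq!(*c.chunks[0], vec![9, 9]);
    assert_eq!(snapshot[0][0], 1); // old chunks survive in the snapshot
    assert_eq!(snapshot[1][0], 3);
    println!("ok");
}
```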

BTreeMap is pretty much the lower bound of performance for this kind of data structure (it's very very well optimized), and update_cow is within 2 or 3x of it, so there isn't much more performance to get. I wish I could grind it down such that it's as fast as BTreeMap for every operation, but there is some overhead involved in making it immutable, and so I'm pretty happy with 2-3x for COW operations.

@estokes (Owner) commented Dec 24, 2022

I should also mention that if update/insert performance matters to you more than lookup performance, then you should try a smaller chunk size. MapS is a good place to start, but you can try any chunk size you like.
