Add choice of Two-Level Segregated Fit and Linked List First Fit #78

Merged
merged 5 commits into rust-embedded:master on Dec 14, 2023

Conversation

@trueb2 (Contributor) commented Oct 29, 2023

This adds a choice of cargo feature, 'tlsf' or 'llff', to select the global heap allocator implementation: either the current linked_list_allocator or rlsf.

This lets a user of the crate directly compare the two implementations by flipping the feature flag, as long as the linked-list allocator's `free` and `used` interface is not used.
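As a minimal sketch of usage (assuming the crate keeps its existing `Heap::empty()` / `Heap::init(start, size)` global-allocator interface; the crate reference and Cargo.toml line below are illustrative), a user picks exactly one backend feature and leaves the rest of the setup unchanged:

    // Cargo.toml (illustrative):
    //   embedded-alloc = { version = "*", default-features = false, features = ["llff"] }
    //   # or: features = ["tlsf"]
    use embedded_alloc::Heap;

    #[global_allocator]
    static HEAP: Heap = Heap::empty();

    fn init_heap() {
        use core::mem::MaybeUninit;
        const HEAP_SIZE: usize = 175 * 1024; // e.g. the 175 KiB heap used in the benchmark below
        static mut HEAP_MEM: [MaybeUninit<u8>; HEAP_SIZE] = [MaybeUninit::uninit(); HEAP_SIZE];
        // Safety: called exactly once at startup, before the first allocation.
        unsafe { HEAP.init(HEAP_MEM.as_ptr() as usize, HEAP_SIZE) }
    }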

@trueb2 (Contributor, Author) commented Oct 29, 2023

I did some tests on an nRF52840 dev kit. The two allocators were very similar overall: linked list first fit was faster most of the time in this simple scenario, and TLSF only pulled ahead when allocating nearly all of the memory. Both OOM'd with repeat(7) on the 175 KiB heap.

TLSF
7.864685 INFO  Allocations: Avg 5429 Min 5401 Max 5798)
└─ allocator::____embassy_main_task::{async_fn#0} @ src/bin/allocator.rs:734 
7.864776 INFO  Frees: Avg 2415 Min 2380 Max 2593)
└─ allocator::____embassy_main_task::{async_fn#0} @ src/bin/allocator.rs:740 
8.650054 INFO  Allocations: Avg 5427 Min 5401 Max 5798)
└─ allocator::____embassy_main_task::{async_fn#0} @ src/bin/allocator.rs:734 
8.650146 INFO  Frees: Avg 2420 Min 2380 Max 2593)
└─ allocator::____embassy_main_task::{async_fn#0} @ src/bin/allocator.rs:740 


LLFF
56.462280 INFO  Allocations: Avg 5621 Min 5584 Max 6011)
└─ allocator::____embassy_main_task::{async_fn#0} @ src/bin/allocator.rs:734 
56.462371 INFO  Frees: Avg 2920 Min 2868 Max 3234)
└─ allocator::____embassy_main_task::{async_fn#0} @ src/bin/allocator.rs:740 
57.317413 INFO  Allocations: Avg 5624 Min 5584 Max 6011)
└─ allocator::____embassy_main_task::{async_fn#0} @ src/bin/allocator.rs:734 
57.317535 INFO  Frees: Avg 2920 Min 2868 Max 3143)
└─ allocator::____embassy_main_task::{async_fn#0} @ src/bin/allocator.rs:740 
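
    // Benchmark body (runs inside the embassy main task on the nRF52840 dev kit).
    // Assumes `alloc::collections::VecDeque`, `alloc::vec::Vec` / `vec!`,
    // `embassy_time::Instant`, and `defmt::info!` are in scope.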
    let fake_random_numbers: [u8; 100] = [
        42, 17, 88, 33, 125, 5, 60, 72, 200, 94, 11, 77, 8, 39, 2, 50, 21, 99, 3, 150, 64, 19, 55, 7, 31, 240, 59, 12,
        101, 78, 23, 44, 61, 9, 37, 255, 70, 28, 6, 121, 14, 85, 16, 68, 1, 91, 45, 35, 22, 109, 27, 47, 73, 4, 29,
        200, 51, 13, 102, 8, 54, 36, 66, 10, 20, 76, 200, 90, 25, 69, 56, 30, 87, 18, 80, 49, 132, 63, 74, 24, 37, 57,
        98, 15, 89, 53, 32, 123, 67, 38, 58, 82, 111, 58, 48, 122, 81, 26, 43, 34,
    ];
    let mut allocated = VecDeque::new();
    let mut alloc_durations = Vec::new();
    let mut free_durations = Vec::new();
    loop {
        // Allocate randomly roughly between 4 and 1024 bytes
        for _ in 0..100 {
            let instant = Instant::now();
            for x in fake_random_numbers.repeat(6) {
                allocated.push_back(VecDeque::from(vec![x; x as usize * 4]));
            }
            alloc_durations.push(instant.elapsed().as_micros() as u32);
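            // Time freeing everything except the 10 most recent allocations.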
            let instant = Instant::now();
            allocated.drain(..allocated.len() - 10);
            free_durations.push(instant.elapsed().as_micros() as u32);
        }

        // Report stats
        info!("Allocations: Avg {:?} Min {:?} Max {:?})",
            alloc_durations.iter().sum::<u32>() / alloc_durations.len() as u32,
            alloc_durations.iter().min().unwrap(),
            alloc_durations.iter().max().unwrap(),
        );
        alloc_durations.clear();
        info!("Frees: Avg {:?} Min {:?} Max {:?})",
            free_durations.iter().sum::<u32>() / free_durations.len() as u32,
            free_durations.iter().min().unwrap(),
            free_durations.iter().max().unwrap(),
        );
        free_durations.clear();
    }

@trueb2 (Contributor, Author) commented Oct 29, 2023

Fixes #36

@newAM (Member) left a comment

I'm generally not a fan of mutually exclusive features when they can be avoided.

I think this could be implemented without mutually exclusive features by making each type of allocator a separate struct, instead of one struct that conditionally wraps either implementation.
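A rough, hypothetical sketch of that alternative (all names here are placeholders, not the crate's actual API): each backend becomes its own type, both always compile, and the user selects the algorithm by choosing which type to mark as the global allocator rather than by flipping a feature.

    use core::alloc::{GlobalAlloc, Layout};
    use core::ptr;

    // One struct per backend; both can coexist in the same build.
    pub struct LlffHeap; // would wrap linked_list_allocator::Heap
    pub struct TlsfHeap; // would wrap rlsf::Tlsf

    unsafe impl GlobalAlloc for LlffHeap {
        unsafe fn alloc(&self, _layout: Layout) -> *mut u8 {
            // Placeholder: a real implementation would lock the inner
            // linked-list heap and delegate; null signals allocation failure.
            ptr::null_mut()
        }
        unsafe fn dealloc(&self, _ptr: *mut u8, _layout: Layout) {}
    }

    // (TlsfHeap would get an analogous impl delegating to rlsf.)

    // The user then chooses the algorithm by type instead of by feature:
    // #[global_allocator]
    // static HEAP: LlffHeap = LlffHeap; // or: TlsfHeap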

@trueb2 (Contributor, Author) commented Oct 30, 2023

I think the choice of algorithm behind the global heap is inherently a mutually exclusive decision. With all the changes since I opened this PR, the way the crate is used may be changing anyway.

The Heap struct could be split to allow multiple heap types within the same crate, but it would then be ambiguous which implementation serves as the global Heap.

@trueb2 (Contributor, Author) commented Oct 31, 2023

All the merge conflicts are resolved again.

@trueb2 (Contributor, Author) commented Dec 13, 2023

In deployment, I have found TLSF to be superior in latency and power consumption on the nRF52840 across all workloads (wearable sensing + BLE platforms). Broadly, I see ~2-3% improvements, which become especially evident after days of runtime.

@newAM (Member) left a comment

Thanks for all your work here!

Sorry for the slow review; work got very crazy recently.

@newAM newAM added this pull request to the merge queue Dec 14, 2023
Merged via the queue into rust-embedded:master with commit b52091a Dec 14, 2023
8 checks passed