
Fix VM Address mask #8440

Merged
merged 5 commits into from Jan 11, 2022

Conversation

mhightower83
Contributor

Adjust VM Address mask to allow up to 8M Byte parts.

Allow for free heap size values over 64K
ESP.getHeapStats(, uint32_t,)
uint32_t getMaxFreeBlockSize();

Collaborator

@mcspr mcspr left a comment


lgtm, since values are invalid otherwise.
But we still end up ambiguous with the typing of the variables: size_t umm_max_block_size_core() is declared as size_t, aka unsigned long, and not a fixed-size uint32_t. Not sure why umm decided on size_t here; nothing in its internals prevents 64-bit ints, for example?
(edit) ...do we want size_t as the output instead, since it is a platform-specific value?

@mhightower83
Contributor Author

I often find myself uncertain of these selections. After a quick search, "size_t must be big enough to store the size of any possible object." Also noted was that sizeof() returns a value of type size_t.

Collaborator

@earlephilhower earlephilhower left a comment


Great fix, @mhightower83! I'm surprised anyone but me has tried this out. I was so stoked at first, but with the new ESP chips with larger RAM I never really did anything with the VM.

One thing I might caution, though, is that you can expect more WDT errors if you do go to 8MB, simply because UMM now has over 100x the space to process; if you hit fragmentation or something, it can wind up following some loooong chains to find allocation space...
-edit- Just remembered that the UMM block size needs to change or it can't physically address 8MB anyway, due to overflowing the bits available in its header. So, ignore the prior comment.

@earlephilhower
Collaborator

The CI failure is a simple fix: update the host-test MockESP to match the new prototype you've exported.

@mhightower83
Contributor Author

One thing I might caution, though, is that you can expect more WDT errors if you do go to 8MB, simply because UMM now has over 100x the space to process; if you hit fragmentation or something, it can wind up following some loooong chains to find allocation space...

@earlephilhower - Yes, I ran into some issues, starting with the memset in umm_init_common (I think) when handling more than 2MB, and in a few other places. The upstream now has a global UMM_BLOCK_BODY_SIZE define. I was looking at adding a compile option for a per-heap body block size. I have an unfinished version that works; however, it is unclear to me that it would get any use. And, as with most conditional build features, it adds more build permutations to keep working, makes reviews more burdensome, and complicates future integrations with upstream.

All of this is to ask: if you see value in adding this, I will finish it off as a PR later. First, I need to finish a PR that completes folding in the new upstream version of umm_malloc.

@mcspr
Collaborator

mcspr commented Jan 7, 2022

I often find myself uncertain of these selections. After a quick search, "size_t must be big enough to store the size of any possible object." Also noted was that sizeof() returns a value of type size_t.

It would be consistent with umm itself, where size_t (or a generic uint, instead of something fixed-size) is preferred. POSIX seems to prefer the platform-specific size_t; things like FreeBSD or esp-idf / FreeRTOS do as well. I don't know how much this would change here, though; it might well touch many more things than just the stats.

Updated boards.txt to show correct External memory size and Heap size.
@mhightower83
Contributor Author

I am inclined to keep the uint32_t types. It has the same width as size_t on this platform, and it may be what callers expect.
After a quick peek at ESP32, it uses uint32_t for the return values of similar heap functions in its Esp.cpp. Even though some functions have different names, keeping the types the same may help make libraries shareable between platforms.

@mcspr
Collaborator

mcspr commented Jan 10, 2022

I am inclined to keep the uint32_t types. It has the same width as size_t on this platform, and it may be what callers expect. After a quick peek at ESP32, it uses uint32_t for the return values of similar heap functions in its Esp.cpp. Even though some functions have different names, keeping the types the same may help make libraries shareable between platforms.

True, but note that getFreeHeap() will still use the IDF function returning size_t.
It may have been that way for historical reasons, and ours was also u32 at the time because it used system_get_free_heap_size() and not the umm one directly.
The idea is to fix the intent of the type as well (even if we never reach the limits for either).

It is done feature-wise, though; the next one is fixing the umm block cfg?
(...for which, also see the idf implementation of 'capabilities', or perhaps we can tweak umm to use template parameters, since we do build it with C++)

@mhightower83
Contributor Author

... The idea is to fix the intent of the type as well (even if we never reach the limits for either)

okay

(...for which, also see the idf implementation for 'capabilities', ...

Hmm, that would help portability between platforms. I am concerned about the IRAM impact. Increased use of IRAM, and of code space in general, is a complaint I have seen with each new release. I try to keep umm_malloc IRAM creep down as much as possible. However, it only takes up space if it is called.

Side note: it is possible to enable/disable icache around umm_init(). It looks like, for the non-multi-heap case, that could free 172 bytes of IRAM while adding 208 bytes to IROM; however, that leaves us with a net increase of 36 bytes in code space used. I'll leave that for a future discussion/experimental PR.

... or perhaps we can tweak the umm to use template parameters since we do build it with c++)

I don't know how that would work. The upstream code is ".c". Before I started maintaining it, I believe the extension was changed to gain better error checking from the compiler. I am concerned about how it would affect updating with upstream changes.

I think one obstacle is that the heap needs to be up before the SDK is started. (Previously it started on the first malloc from the SDK, early in its init code; umm_malloc was peppered with checks in each API to do umm_init.) Meanwhile, the C and C++ CRT init takes place later, by way of system_init_done_cb:

void init_done() {
    system_set_os_print(1);                        // enable SDK os_printf output
    gdb_init();
    std::set_terminate(__unhandled_exception_cpp); // install C++ unhandled-exception handler
    do_global_ctors();                             // C++ global constructors run here
    esp_schedule();
    ESP.setDramHeap();
}

I guess we could handle internal RAM (DRAM) with low-level C code. Then we could have some kind of C++ wrapper for the others (IRAM and external RAM). At the moment, I don't see the added value.

@mcspr mcspr merged commit 9fcf14f into esp8266:master Jan 11, 2022
@mcspr
Collaborator

mcspr commented Jan 11, 2022

I don't know how that would work. The upstream code is ".c". Before I started maintaining it, I believe the extension was changed to gain better error checking from the compiler. I am concerned about how it would affect updating with upstream changes.
I think one obstacle is that the heap needs to be up before the SDK is started. (Previously it started on the first malloc from the SDK, early in its init code; umm_malloc was peppered with checks in each API to do umm_init.) Meanwhile, the C and C++ CRT init takes place later, by way of system_init_done_cb.

Yes, but we don't necessarily need anything initializing through C++ objects; we can just use the fact that we can generate types and do certain overloads at compile time. We are effectively creating heap contexts at runtime, but it could be done statically, up until we really need runtime data like section pointers in the case of IRAM. We also have a pre-defined set of heap types, so this could refer to them directly instead of another runtime search and runtime 'stacked' heaps (e.g. a tuple of the available contexts, or I wonder if this could be a vtable use case).

How that would really be done is another question, though; I am just speculating. As for dealing with the C++ vs. C runtime, I'd also look at the way allocators are used throughout the stdlib (like malloc_allocator.h as a minimal example).

@mhightower83 mhightower83 deleted the pr-vm-fix branch January 12, 2022 19:59