Description
| Bugzilla Link | 3724 |
| Resolution | FIXED |
| Resolved on | Mar 09, 2009 16:34 |
| Version | trunk |
| OS | All |
| Attachments | Proposed patch to return largest block on new function allocation |
| Reporter | LLVM Bugzilla Contributor |
Extended Description
The JIT memory manager always returns the head of the FreeMemoryList in response to a block allocation request from the JITEmitter (JITMemoryManager.cpp:303).
This can cause a "JIT: Ran out of space for generated machine code" error whenever there is any fragmentation in the MemoryManager's RWX region, even if a huge amount of free space remains.
Consider a multi-threaded application, where Thread A allocates a 100-byte block at the beginning of the RWX region, then Thread B allocates a 100-byte block directly after A's block. Thread A then frees its machine code block.
This would leave:

```
| <--100B Free--> | <--100B Alloc(Thread B)--> | <--15+MB Free Space--> |
  ^
  Head of free list
```
Now, whenever anything else tries to allocate a JIT region, the MemoryManager will always return the first (empty) 100B block. If this block isn't big enough, the current behavior of the JITEmitter is to abort with a "JIT: Ran out of space for generated machine code" error (JITEmitter.cpp:893). I know there's a //FIXME in there, but even if the JITEmitter were to request another allocation, the JIT MemoryManager would still return the same head block.
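To make the failure mode concrete, here is a minimal standalone sketch of the head-of-list policy described above. The `FreeBlock` struct and `pickHeadBlock` function are hypothetical stand-ins, not the real JITMemoryManager bookkeeping (which threads `MemoryRangeHeader` nodes through the RWX region itself):

```cpp
#include <cstddef>
#include <list>

// Hypothetical free-list entry; the real manager stores headers inside
// the RWX region rather than in a side list.
struct FreeBlock {
  std::size_t Offset;
  std::size_t Size;
};

// Sketch of the current policy: always hand back the head of the free
// list, regardless of how big the request turns out to be and even when
// a later entry in the list is far larger.
FreeBlock *pickHeadBlock(std::list<FreeBlock> &FreeList) {
  return FreeList.empty() ? nullptr : &FreeList.front();
}
```

With the fragmented layout shown above, this policy returns the 100B block at the head of the list even though 15+MB sits free further along, so any function larger than 100B aborts.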
Proposed Fix: Since the final allocation size is unknown at allocation time, the JIT MemoryManager should return the largest available block in the FreeMemoryList on every allocation. I have a patch locally that implements this behavior and corrects the problem. There is of course a performance penalty, but in my application the free list never grows beyond a handful of entries in practice, despite being accessed asynchronously by many threads.
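The proposed policy can be sketched in the same standalone style. Again, `FreeBlock` and `pickLargestBlock` are hypothetical illustrations of the idea, not the attached patch; the scan is O(n) in the free-list length, which is the performance penalty mentioned above:

```cpp
#include <algorithm>
#include <cstddef>
#include <list>

// Hypothetical free-list entry (same simplification as before).
struct FreeBlock {
  std::size_t Offset;
  std::size_t Size;
};

// Sketch of the proposed policy: since the function's final size is not
// known when allocation starts, return the largest free block so the
// emitter has the best chance of fitting.
FreeBlock *pickLargestBlock(std::list<FreeBlock> &FreeList) {
  if (FreeList.empty())
    return nullptr;
  auto It = std::max_element(
      FreeList.begin(), FreeList.end(),
      [](const FreeBlock &A, const FreeBlock &B) { return A.Size < B.Size; });
  return &*It;
}
```

On the fragmented layout above, this returns the 15+MB tail block instead of the 100B head, so emission proceeds as long as any single free block is large enough.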