introduce mcache, move lwip allocations to mcache #69

Merged: 8 commits into master on Sep 7, 2018

Conversation

@wjhun (Contributor) commented Sep 7, 2018

mcache - "multi" cache

This is essentially a wrapper heap for a set of caches of varying object sizes. Object sizes are specified on heap creation. Allocations are made from the cache of the smallest object size equal to or greater than the alloc size.

Specifying the object sizes in an array argument to allocate_mcache() is kind of clunky; I'm open to suggestions. Note that object sizes do not actually need to be powers of 2.
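
For illustration, here is a minimal standalone sketch of the selection rule described above: pick the smallest cache object size equal to or greater than the request from a sorted, sentinel-terminated table. Names, sizes, and the zero sentinel are hypothetical, not the actual mcache internals.

/* illustrative sketch only: smallest object size >= request, sorted table */
#include <stdio.h>

static const unsigned long cache_sizes[] = { 32, 64, 96, 128, 256, 512, 0 };

static unsigned long pick_cache_size(unsigned long request)
{
    for (int i = 0; cache_sizes[i] != 0; i++) {
        if (cache_sizes[i] >= request)
            return cache_sizes[i];   /* first fit in a sorted table is the smallest fit */
    }
    return 0;                        /* larger than the biggest cache: no cache can serve it */
}

int main(void)
{
    printf("%lu\n", pick_cache_size(40));   /* -> 64 */
    printf("%lu\n", pick_cache_size(96));   /* -> 96: sizes need not be powers of 2 */
    printf("%lu\n", pick_cache_size(1024)); /* -> 0: falls through, allocation fails */
    return 0;
}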

lwip_heap is now an mcache. It might make sense to survey the allocation sizes used (either empirically or by studying lwip code) and arrange the object sizes accordingly. Also note that sizes need to be sorted in the array, but that shouldn't be a problem if we're declaring them statically.

The objcache code no longer allocates an initial page on cache creation. So there's not much of a cost to adding object sizes that are never used.

network_test runs as it did before.

Unless there's something else I'm not aware of, this should complete the caching / reuse of lwip-related objects.

@@ -0,0 +1,135 @@
/* multi-cache heap
Contributor

is there something specific to the leaves that requires them to be caches? or is it any set of heaps where the objects fall into the same size bucket (2^n)?

Contributor Author

The only requirements are that:

  • the leaf heap can be found by object reference
  • an object can be freed without size information (to satisfy lwip and anything else that has a malloc/free type interface)

So the only thing that really makes this specific to the objcache is the call to objcache_from_object(). I suppose this could be made into a call in the abstract heap type (heap_from_pointer()?), but I don't know what other heaps would support this.

Also, at this point, objcache (and thus mcache) can accept any size object, not just 2^n.
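
As a standalone sketch of those two requirements, assuming per-slab metadata at an aligned base so the owning cache can be recovered from the object address alone (hypothetical names; the actual code uses objcache_from_object() and its own page metadata):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define SLAB_SIZE 4096   /* assume objects are carved from aligned slabs */

struct slab {            /* hypothetical metadata kept at the slab's base */
    unsigned long objsize;
    /* ... free list, owning cache, etc. ... */
};

/* requirement 1: find the leaf heap (here, the slab) by object reference */
static struct slab *slab_from_object(void *obj)
{
    return (struct slab *)((uintptr_t)obj & ~(uintptr_t)(SLAB_SIZE - 1));
}

/* requirement 2: free without size information, malloc/free style */
static void cache_free(void *obj)
{
    struct slab *s = slab_from_object(obj);
    printf("returning a %lu-byte object to its slab\n", s->objsize);
    /* ... push obj onto s's free list ... */
}

int main(void)
{
    struct slab *s = aligned_alloc(SLAB_SIZE, SLAB_SIZE);
    s->objsize = 64;
    void *obj = (char *)s + 64;  /* hand out the first object slot */
    cache_free(obj);             /* no size argument needed */
    free(s);
    return 0;
}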

deallocate(m->meta, m, sizeof(struct mcache));
}

heap allocate_mcache(heap meta, heap parent, bytes * sizes)
Contributor Author

I'm not sure how much it makes sense to pass in the array of sizes. I thought it would be useful if we had a sense of what object sizes are likely to be used, but that's a tunable that needs to be maintained.

I could change this to min_order / max_order...

Contributor

meh? i think there are maybe cleaner ways to set it up, but given that it happens once by definition... i guess a range of log2 sizes would probably be simpler

Contributor Author

I think I'll change to that because I don't like how it looks right now. If someone is dying to have tunable sizes, they can always add another allocate interface, kind of like the id heap.

Contributor Author

changes pushed
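
A quick standalone sketch of what the log2-range setup amounts to (hypothetical names; the actual allocate_mcache() signature may differ): one cache per power-of-two size from 1 << min_order through 1 << max_order, sorted by construction.

#include <stdio.h>

#define MAX_CACHES 16

static unsigned long cache_sizes[MAX_CACHES];
static int ncaches;

/* build the object-size table from a range of orders instead of a
   caller-supplied array; the sizes come out sorted automatically */
static void init_cache_sizes(int min_order, int max_order)
{
    ncaches = 0;
    for (int order = min_order; order <= max_order && ncaches < MAX_CACHES; order++)
        cache_sizes[ncaches++] = 1ul << order;
}

int main(void)
{
    init_cache_sizes(5, 11);   /* 32 bytes up through 2048 bytes */
    for (int i = 0; i < ncaches; i++)
        printf("cache %d: %lu bytes\n", i, cache_sizes[i]);
    return 0;
}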

m->h.alloc = mcache_alloc;
m->h.dealloc = mcache_dealloc;
m->h.destroy = destroy_mcache;
m->h.pagesize = 0; /* XXX not clear what the meaning is here */
Contributor

1 is probably a better punt than 0...but i guess the min of the leaves would be a reasonable answer?

Contributor Author

Ok, I'll just set it to min size. It seems we only use pagesize when the heap is used as a parent / backing heap.

m->caches = allocate_vector(meta, 1);

/* XXX sort sizes */
for(int i=0; sizes[i] != -1ull; i++) {
Contributor

not sure if it's at all helpful, but does this guy (size multiplexer) become more general if you pass a set of heaps and ranges from the outside?

never mind - everything here is contingent on the page metadata

Contributor Author

Yeah, I could see one day doing some refactoring and making a very generic container heap that can handle heaps of different allocation sizes as well as heaps of different address ranges (or parent heaps). So the mcache and id heaps would be refactored to use it, I guess. It would require some additions to the abstract type, e.g. heap_from_object(). But I'm not sure what the practical upside would be, just prettier maybe.

Contributor

yeah, please ignore. i can imagine using it to multiplex between page sizes and such, but at the end of the day we really just need to dole out chunks and put them back

return INVALID_PHYSICAL;
}

void mcache_dealloc(heap h, u64 a, bytes b)
Contributor

it's worth revisiting whether or not deallocate should take the size. at some point I got tired of having to keep track of it behind the interface when, in (almost) all cases, the caller has a really good idea.

i think that's probably still true - but given that a large consumer we don't want to mess with has a normal malloc/free interface, maybe it's not the most practical

Contributor Author

The only issue I can think of in this case is whether h->allocated should represent actual bytes allocated by the heap or bytes requested / freed by the caller. If the former, then I don't see what we'd do with a dealloc size other than a sanity check, which at this point is one-sided: we don't relate the found cache heap to the vector and thus don't know the next largest size, so we just make sure the size isn't larger than the given cache size. It's all kind of pointless anyway if we can verify that the object is indeed owned by the leaf cache.
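
A tiny standalone sketch of that one-sided check (hypothetical names, not the repo's code): with only the owning cache's object size at hand, a caller-supplied dealloc size can only be rejected when it exceeds that size.

#include <assert.h>
#include <stddef.h>

/* one-sided sanity check: we only know the owning cache's object size, not
   the adjacent cache sizes, so we can only reject requests that are too big */
static void dealloc_size_check(size_t cache_objsize, size_t requested)
{
    assert(requested <= cache_objsize);
    /* ... return the object to its cache ... */
}

int main(void)
{
    dealloc_size_check(128, 100);   /* ok: a 100-byte object lives in the 128-byte cache */
    return 0;
}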

@@ -217,17 +217,13 @@ static heap lwip_heap;

void *lwip_allocate(u64 size)
{
///xxxxxxx
return allocate_zero(lwip_heap, size+4);
return allocate_zero(lwip_heap, size);
Contributor

i'm sure that was just some leftover flailing
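
For context, a standalone sketch of the malloc/free-shaped glue that lwip expects, using libc as a stand-in backing allocator (illustrative only, not the repo's actual lwip wrappers): allocations are zeroed, and frees carry no size because the mcache recovers the owning cache from the pointer.

#include <stdlib.h>
#include <string.h>

static void *lwip_allocate_sketch(size_t size)
{
    void *p = malloc(size);     /* stand-in for allocating from lwip_heap */
    if (p)
        memset(p, 0, size);     /* allocate_zero(): hand back zeroed memory */
    return p;
}

static void lwip_free_sketch(void *p)
{
    free(p);                    /* stand-in for a size-less deallocate: the
                                   mcache finds the owning cache from p */
}

int main(void)
{
    void *p = lwip_allocate_sketch(100);   /* would be served by the smallest cache >= 100 bytes */
    lwip_free_sketch(p);
    return 0;
}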

m->h.alloc = mcache_alloc;
m->h.dealloc = mcache_dealloc;
m->h.destroy = destroy_mcache;
m->h.pagesize = 1 << min_order; /* default to smallest obj size */
Contributor

random note... i'm not sure having a non-integral order (i'm getting used to that term; for some reason I'd never used it before) ever makes sense. so maybe pagesize should always be a log2 value

Contributor Author

It would simplify things in some cases, though it would take away the possibility of caching the exact sizes of objects that are of non-integral order (wow, that is fun to write!).

@wjhun merged commit 30c41aa into master Sep 7, 2018
@wjhun mentioned this pull request Sep 7, 2018
@wjhun deleted the memory-lwip branch September 12, 2018 14:58