libpager: add 'terminating' flag and update shutdown code
BrentBaccala committed Dec 12, 2017
1 parent eed2313 commit 5a5ffcc
Showing 3 changed files with 67 additions and 30 deletions.
29 changes: 23 additions & 6 deletions libpager/NOTES
@@ -507,7 +507,7 @@ m_o_data_unlock: (kernel requesting write access when it's already got read acce
 lock_object: (several types: flush, sync, return, response to an unlock request)
    NEW MODE NEEDED: asynchronous, but reply requested
    wrapper around m_o_data_lock message
-   if sync requested, add to lock_requests list with locks_pending++ and threads_waiting++
+   if sync requested, add to lock_requests list and increment count
    after m_o_lock_request message sent, wait for locks_pending and pending_writes to drop to zero,
       then decrement threads_waiting and remove lock_requests if zero
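
[ Editor's note: for orientation, a minimal C++ sketch of the synchronous
  mode these notes describe. It reuses names that appear in this commit
  (outstanding_locks, waiting_threads, port.port_right, clients, and the
  lock_object signature from pagemap.cc); the lock_request struct, the
  {OFFSET, LENGTH} key, and the exact use of memory_object_lock_request
  are assumptions, not the commit's code. ]

    // Sketch only: bookkeeping for one synchronous lock_object() call.
    struct lock_request {
      int locks_pending = 0;    // m_o_lock_completed replies still expected
      int pending_writes = 0;   // writes triggered by this lock request
      int threads_waiting = 0;  // threads blocked in lock_object(sync=true)
      std::condition_variable waiting_threads;
    };

    void pager::lock_object(vm_offset_t OFFSET, vm_size_t LENGTH,
                            int RETURN, bool FLUSH, bool sync)
    {
      std::unique_lock<std::mutex> pager_lock(lock);

      lock_request * req = nullptr;
      if (sync) {
        // register before sending the messages, so a fast reply can't race us
        req = &outstanding_locks[{OFFSET, LENGTH}];
        req->locks_pending += clients.size();
        req->threads_waiting ++;
      }

      for (auto client: clients)
        memory_object_lock_request(client, OFFSET, LENGTH, RETURN, FLUSH,
                                   VM_PROT_NO_CHANGE,
                                   sync ? port.port_right : MACH_PORT_NULL);

      if (sync) {
        // wait for locks_pending and pending_writes to drop to zero
        while (req->locks_pending > 0 || req->pending_writes > 0)
          req->waiting_threads.wait(pager_lock);

        // last waiter out removes the entry
        if (-- req->threads_waiting == 0)
          outstanding_locks.erase({OFFSET, LENGTH});
      }
    }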

@@ -590,7 +590,7 @@ service_first_WRITEWAIT_entry:
    ( this could be triggered by a rapid succession of pager_sync() calls on a busy page and a slow disk )
    unlock the pager
    call pager_write_page() on all pages flagged PAGEOUT
-      set ERROR on any error return (discard actual error code)
+      set ERROR on any error return
    relock the pager
    for each page we just wrote:
       search WRITEWAIT list for matching pages
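
[ Editor's note: a sketch of the unlock / write / relock step these notes
  describe. pager_write_page() is libpager's user-supplied write callback
  and upi is the pager's user_pager_info pointer, both present in this
  commit; the WRITEWAIT entry layout (pages, offset, data, error_set) is
  invented for illustration. ]

    // Sketch only: write back PAGEOUT-flagged pages without holding the
    // pager lock across disk I/O, then reacquire it before touching
    // shared state again.
    void pager::service_first_WRITEWAIT_entry(std::unique_lock<std::mutex> & pager_lock)
    {
      auto & entry = WRITEWAIT.front();

      pager_lock.unlock();            // unlock the pager: the write may block

      for (auto & page : entry.pages) {
        error_t err = pager_write_page(upi, page.offset, page.data);
        if (err)
          page.error_set = true;      // set ERROR; the actual code is discarded
      }

      pager_lock.lock();              // relock the pager

      // for each page we just wrote, wake the threads waiting on it
      entry.waiting_threads.notify_all();
    }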
@@ -640,10 +640,27 @@ m_o_terminate:
 pager_shutdown:
+   set terminating flag
    send pager_return (flush, return all, sync) to all active clients
-   in the meanwhile, we reject any new data requests with ENODEV
-   send m_o_destroy (error = ENODEV) to all active clients
-   we expect m_o_terminate replies from the kernels
-   call ports_destroy_right, which will blow away our receive right
+   in the meantime:
+      ignore all m_o_data_request and m_o_unlock requests
+      m_o_lock_completed messages: don't do "internal" processing, but do notify threads
+      process m_o_data_return messages, but don't generate any lock requests or data supply messages as a result
+      [ ] handle new m_o_init_object messages by adding them to client list, but not sending m_o_ready
+          that way, they'll get m_o_destroy in response
+      [X] ignore m_o_init_object messages
+   [ "De-allocating the abstract memory object port also has this effect" - Mach Kernel Principles p. 43 ]
+   [ so... we don't have to do this section ]
+   [ send m_o_destroy (error = ENODEV) to all active clients
+   [ now handle new m_o_init_object messages by sending m_o_destroy in response and adding to client list
+   [ wait for m_o_terminate replies from all clients
+   call ports_destroy_right, which will blow away our receive right, and ultimately call the destructor,
+      which will then call drop_client on any remaining clients
 
+SHUTDOWN CODE
+
+Old libpager would flush everything (no special flags set), then destroy
+the receive right and ignore any final messages. This created a minor
+race condition: a client could request a page while the flush was in
+progress, and that request would simply be dropped.
 
 
 MISTAKES AND OMISSIONS IN CURRENT INFO DOCUMENTATION:
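[ Editor's note: that race is what the new terminating flag closes. A
  minimal sketch of the guard pattern, with a simplified signature; the
  commit's actual guards appear in the pagemap.cc diff below. ]

    // Sketch only: 'terminating' is set under the pager lock before the
    // final flush, so a pagein request that slips in mid-shutdown is
    // ignored rather than serviced against a dying object. Destroying the
    // receive right then implicitly destroys the memory object for every
    // client.
    void pager::data_request(memory_object_control_t MEMORY_CONTROL,
                             vm_offset_t OFFSET, vm_size_t LENGTH,
                             vm_prot_t DESIRED_ACCESS)
    {
      std::unique_lock<std::mutex> pager_lock(lock);

      if (terminating) return;   // shutdown in progress: drop the request

      // ... normal pagein path: read the page and supply it to the kernel ...
    }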
63 changes: 40 additions & 23 deletions libpager/pagemap.cc
@@ -83,6 +83,8 @@ void pager::internal_flush_request(memory_object_control_t client, vm_offset_t O
 
 void pager::service_WAITLIST(vm_offset_t offset, vm_offset_t data, bool allow_write_access, bool deallocate)
 {
+  if (terminating) return;
+
   auto first_client = tmp_pagemap_entry.first_WAITLIST_client();
 
   // std::cerr << "service_WAITLIST " << tmp_pagemap_entry;
@@ -167,6 +169,8 @@ void pager::data_request(memory_object_control_t MEMORY_CONTROL, vm_offset_t OFF
   assert(LENGTH == page_size);
   assert(OFFSET % page_size == 0);
 
+  if (terminating) return;
+
   // fprintf(stderr, "data_request(MEMORY_CONTROL=%d, OFFSET=%d) page_size=%d\n", MEMORY_CONTROL, OFFSET, page_size);
   // fprintf(stderr, "PAGINGOUT = %d\n", pagemap[OFFSET / page_size]->get_PAGINGOUT());
 
@@ -262,6 +266,8 @@ void pager::object_init (mach_port_t control, mach_port_t name, vm_size_t pagesi
 
   std::unique_lock<std::mutex> pager_lock(lock);
 
+  if (terminating) return;
+
   clients.insert(control);
 
   mach_port_t old;
@@ -274,10 +280,12 @@ void pager::object_init (mach_port_t control, mach_port_t name, vm_size_t pagesi
   // fprintf(stderr, "object_init exiting\n");
 }
 
-void pager::drop_client (mach_port_t control, const char * const reason)
-{
-  std::unique_lock<std::mutex> pager_lock(lock);
+// drop_client() - call with pager lock held
 
+void pager::drop_client (mach_port_t control, const char * const reason,
+                         std::unique_lock<std::mutex> & pager_lock)
+
+{
   // One of my test cases transmitted a data_request, then called
   // object_terminate before receiving the data_supply in reply, so we
   // had both outstanding pages and outstanding messages. Doubt this
@@ -384,7 +392,9 @@ void pager::drop_client (mach_port_t control, const char * const reason)
 
 void pager::object_terminate (mach_port_t control, mach_port_t name)
 {
-  drop_client(control, "object_terminate");
+  std::unique_lock<std::mutex> pager_lock(lock);
+
+  drop_client(control, "object_terminate", pager_lock);
 
 #if 0
   // check to see if any messages are outstanding on control port
@@ -421,23 +431,16 @@
 
 void pager::shutdown (void)
 {
-  // XXX set terminating flag
-
-  /* Fetch and flush all pages */
-  pager_return (this, 1);
-
-  // XXX in the meanwhile, we reject any new data requests with ENODEV
-
   std::unique_lock<std::mutex> pager_lock(lock);
+  terminating = true;
 
-  // XXX send m_o_destroy (error = ENODEV) to all open clients
-  // XXX we expect m_o_terminate replies from the kernels
+  /* Fetch and flush all pages
+   *
+   * pager_return() will also wait for all writes to finish.
+   */
 
-  // wait for replies to any outstanding locks
+  pager_return (this, 1);
 
-  while (! outstanding_locks.empty()) {
-    outstanding_locks.begin()->second.waiting_threads.wait(pager_lock);
-  }
   ports_destroy_right(this);
 }
 
 void pager::lock_object(vm_offset_t OFFSET, vm_size_t LENGTH, int RETURN, bool FLUSH, bool sync)
@@ -576,6 +579,8 @@ void pager::internal_lock_completed(memory_object_control_t MEMORY_CONTROL,
   assert(LENGTH % page_size == 0);
   assert(OFFSET % page_size == 0);
 
+  if (terminating) return;
+
   vm_size_t npages = LENGTH / page_size;
 
   uint8_t * operation = (uint8_t *) alloca(npages);
@@ -682,6 +687,8 @@ void pager::data_unlock(memory_object_control_t MEMORY_CONTROL, vm_offset_t OFFS
   assert(LENGTH % page_size == 0);
   assert(OFFSET % page_size == 0);
 
+  if (terminating) return;
+
   vm_size_t npages = LENGTH / page_size;
 
   bool * do_unlock = (bool *) alloca(npages * sizeof(bool));
@@ -882,7 +889,8 @@ void pager::data_return(memory_object_control_t MEMORY_CONTROL, vm_offset_t OFFS
       } else if ((tmp_pagemap_entry.ACCESSLIST_num_clients() == 1)
                  && ! tmp_pagemap_entry.get_WRITE_ACCESS_GRANTED()
                  && ! tmp_pagemap_entry.is_WAITLIST_empty()
-                 && tmp_pagemap_entry.is_client_on_ACCESSLIST(tmp_pagemap_entry.first_WAITLIST_client().client)) {
+                 && tmp_pagemap_entry.is_client_on_ACCESSLIST(tmp_pagemap_entry.first_WAITLIST_client().client)
+                 && ! terminating) {
         do_unlock[i] = true;
         any_unlocks_or_notifies_required = true;
       } else if (tmp_pagemap_entry.is_ACCESSLIST_empty() && tmp_pagemap_entry.is_WAITLIST_empty() && ! DIRTY) {
@@ -955,11 +963,15 @@ kern_return_t pager::get_error(vm_offset_t OFFSET)
 
 void pager::dead_name(mach_port_t DEADNAME)
 {
-  drop_client(DEADNAME, "dead name");
+  std::unique_lock<std::mutex> pager_lock(lock);
+
+  drop_client(DEADNAME, "dead name", pager_lock);
 }
 
 pager::~pager()
 {
+  std::unique_lock<std::mutex> pager_lock(lock);
+
   if (! clients.empty()) {
     // This can happen if a rogue client calls m_o_init, then
     // terminates without calling m_o_terminate. We'll get both a NO
@@ -968,13 +980,18 @@ pager::~pager()
     // SENDERS gets processed first, and there are no other clients,
     // we'll end up here.
 
+    // It can also happen after a pager_shutdown() with outstanding
+    // clients.
+
     for (auto cl: clients) {
-      drop_client(cl, "~pager");
+      drop_client(cl, "~pager", pager_lock);
     }
   }
 
-  assert(WRITEWAIT.empty());
-  assert(NEXTERROR.empty());
+  if (! WRITEWAIT.empty()) {
+    WRITEWAIT.back().waiting_threads.wait(pager_lock);
+  }
 
   assert(outstanding_locks.empty());
   assert(outstanding_change_requests.empty());
 
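[ Editor's note: for context, a sketch of how a translator might drive this
  shutdown path; all_pagers is a hypothetical registry of live pagers, and
  error handling is elided. ]

    // Sketch only: tear down every live pager before the translator exits.
    void my_translator_goaway()
    {
      for (pager * p : all_pagers)
        p->shutdown();   // flush, set 'terminating', destroy the receive right

      // any clients still on the list are dropped in ~pager() via drop_client()
    }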
5 changes: 4 additions & 1 deletion libpager/pagemap.h
@@ -446,6 +446,8 @@ struct pager {
 
   const bool & PRECIOUS = notify_on_evict;
 
+  bool terminating = false;
+
   std::mutex lock;
 
   struct user_pager_info * upi;
@@ -521,7 +523,8 @@ struct pager {
     return port.port_right;
   }
 
-  void drop_client(mach_port_t control, const char * const reason);
+  void drop_client(mach_port_t control, const char * const reason,
+                   std::unique_lock<std::mutex> & pager_lock);
 
   void internal_flush_request(memory_object_control_t client, vm_offset_t OFFSET);
 
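[ Editor's note on the new drop_client() signature: callers now take the
  pager lock themselves and pass their std::unique_lock down, so
  drop_client() can block on a condition variable (releasing the caller's
  lock) without recursive locking, and paths that already hold the lock
  (object_terminate, dead_name, ~pager) can call it directly. A
  self-contained illustration of the convention, using invented names. ]

    #include <mutex>
    #include <condition_variable>

    // Sketch only: the pass-the-caller's-lock convention used by drop_client().
    struct table {
      std::mutex lock;
      std::condition_variable drained;
      int outstanding = 0;

      // call with pager_lock holding 'lock'; may block, releasing the
      // caller's lock while it waits, exactly as drop_client() can
      void drop(std::unique_lock<std::mutex> & pager_lock)
      {
        while (outstanding > 0)
          drained.wait(pager_lock);
      }

      void finish_one()
      {
        std::unique_lock<std::mutex> guard(lock);
        if (-- outstanding == 0)
          drained.notify_all();
      }
    };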
