
libroach,cli,server: add thread stack dump facility #45321

Merged
petermattis merged 1 commit into cockroachdb:master from pmattis/thread-stacks on Mar 9, 2020

Conversation

@petermattis (Collaborator) commented Feb 24, 2020:

Add a facility for dumping the stack traces for all threads in the
process under Linux. The technique used was adapted from
github.com/thoughtspot/threadstacks. The list of threads is retrieved by
scanning `/proc/self/task`. A realtime signal is sent to each thread
using `rt_tgsigqueueinfo`. A custom signal handler for that signal uses
the glibc `backtrace` facility to retrieve the thread's
stack. Communication between the coordinating thread and the signalled
thread is performed using a pipe (most other synchronization primitives
are not safe to use from a signal handler).
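
For orientation, a minimal sketch of the mechanism just described. The names (`kStackTraceSignal`, `g_pipe_fds`, `SignalThread`) are illustrative, not the PR's exact identifiers, and error handling and thread enumeration are elided:

```cpp
// Sketch only: names and structure are assumptions, not stack_trace.cc.
#include <execinfo.h>
#include <signal.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>
#include <cstring>

static const int kStackTraceSignal = SIGRTMIN + 1;  // a realtime signal
static int g_pipe_fds[2];  // created with pipe(2) before signalling

// Runs on the signalled thread. Only async-signal-safe calls are legal
// here; write(2) is one of the few, which is why a pipe is used. (glibc
// backtrace can allocate on first use, so real code warms it up first.)
static void StackTraceHandler(int /*sig*/, siginfo_t* /*info*/, void* /*ctx*/) {
  void* addrs[64];
  const int depth = backtrace(addrs, 64);
  write(g_pipe_fds[1], &depth, sizeof(depth));
  write(g_pipe_fds[1], addrs, depth * sizeof(void*));
}

// Runs on the coordinating thread: queue the signal to one specific
// thread. rt_tgsigqueueinfo has no glibc wrapper, hence syscall(2).
static bool SignalThread(pid_t tid) {
  siginfo_t info;
  memset(&info, 0, sizeof(info));
  info.si_signo = kStackTraceSignal;
  info.si_code = SI_QUEUE;
  info.si_pid = getpid();
  info.si_uid = getuid();
  return syscall(SYS_rt_tgsigqueueinfo, getpid(), tid, kStackTraceSignal,
                 &info) == 0;
}
```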

Hook up `/debug/threads` endpoint and add a link on the debug page of
the admin UI. Extend `/_status/stacks` to allow the optional retrieval
of thread stacks (vs the default of goroutine stacks). Enhance `debug zip` to retrieve the thread stacks for each node.

Release note (admin ui change): Improve debuggability of C++-level
issues by providing access to thread stack traces via a new
`/debug/threads` endpoint which is exposed on the Admin UI advanced
debug page. Include thread stack traces in the info collected by `debug zip`. Thread stack traces are currently only available on Linux.

@cockroach-teamcity (Member) commented:

This change is Reviewable

@petermattis petermattis requested a review from a team February 25, 2020 15:39
@petermattis petermattis requested a review from a team as a code owner February 25, 2020 15:39
@petermattis petermattis changed the title [WIP] libroach: add thread stack dump facility libroach,cli,server: add thread stack dump facility Feb 25, 2020
@petermattis (Collaborator, Author) commented:

This is now ready for a real review.

@knz I'm hoping you can review the debug zip changes. If you have knowledge of signals and threads in C++ you can also take a look at libroach/stack_trace.cc.

@tbg I'm nominating you to look at libroach/stack_trace.cc. If you know someone who would be better, let me know.

@dhartunian Please take a look at the change to pkg/ui/src/views/reports/containers/debug/index.tsx. Should be simple enough. I just cargo-culted.

@petermattis (Collaborator, Author) commented:

It would be nice to backport this to 19.2, though we have to balance that against the risk that this new endpoint could crash a node, making debug zip more dangerous and less useful. Probably should let this bake on master for a little bit.

@dhartunian (Collaborator) left a comment:

pkg/ui/src/views/reports/containers/debug/index.tsx LGTM 👍

@petermattis petermattis force-pushed the pmattis/thread-stacks branch 3 times, most recently from 0e4a1de to 31c13a7 on February 26, 2020 20:31
@tbg (Member) left a comment:

Definitely 🐶 on the details, but the overall approach in stack_trace.cc looks good. I can believe how this all works in the happy case, but I was curious how we handle threads that don't respond in time or even block the signal. With our luck, these cases will end up mattering...

for (auto tid : tids) {
const uint64_t blocked = BlockedSignals(tid);
if ((blocked & (1ULL << kStackTraceSignal)) != 0) {
// The thread is blocking receipt of our signal, so don't bother
@tbg (Member):

What does this mean (i.e. when would a thread not accept our signal) and does that mean we're just omitting it? Is that ok?

@petermattis (Collaborator, Author):

Each thread has a mask which specifies which signals they can receive. See pthread_sigmask. The Go runtime happens to create a thread which blocks all signals except for ones that the application is listening on (via signal.Notify). See https://github.com/golang/go/blob/master/src/runtime/signal_unix.go#L818. If you're wondering how I know this, then you have some sense of what it took to whip this PR into shape.

Yes, I think it is ok to omit threads which are blocking our signal. These won't be the RocksDB threads or normal Go threads running goroutines. Regardless, there isn't much we can do with the technique being used here.
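
For context, a sketch of how a thread's blocked-signal mask can be read out of procfs. The actual `BlockedSignals` in stack_trace.cc differs in its I/O details; everything here besides the `SigBlk:` format is illustrative:

```cpp
#include <sys/types.h>
#include <cstdint>
#include <cstdlib>
#include <fstream>
#include <string>

// Returns the thread's blocked-signal bitmask, or 0 on failure.
// /proc/<tid>/status contains a line like "SigBlk: fffffffe3bfa3a00".
uint64_t BlockedSignalsSketch(pid_t tid) {
  std::ifstream ifs("/proc/" + std::to_string(tid) + "/status");
  std::string line;
  while (std::getline(ifs, line)) {
    const std::string needle = "SigBlk:";
    if (line.compare(0, needle.size(), needle) == 0) {
      // The mask is hex; strtoull skips the leading whitespace itself.
      return strtoull(line.c_str() + needle.size(), nullptr, 16);
    }
  }
  return 0;
}

// Usage, mirroring the excerpt above: skip threads blocking our signal.
// if ((BlockedSignalsSketch(tid) & (1ULL << kStackTraceSignal)) != 0) continue;
```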

continue;
}
if (ret == 0) {
// We timed out before reading all of the stacks.
@tbg (Member):

Are we returning partial info in this case? Are we marking it as partial?

@petermattis (Collaborator, Author):

Marking it as partial is a good idea. Let me do that.

@petermattis (Collaborator, Author):

Done. See the code below where we indicate (no response) for a thread which didn't return a stack.

@petermattis (Collaborator, Author) left a comment:

TFTR!

@knz I suspect you might have some knowledge of these areas. If you don't, feel free to ignore and I'll rely on @tbg's science dog stamp.

@knz (Contributor) left a comment:

A couple of high-level comments:

  • I wonder what happens when the internal kernel pipe buffer is full and the program is running with only 1 core, if my reading is right you'd get a deadlock in that case: the sig handler tries to write, but the reader is not scheduled at that time.

  • signals are delivered synchronously. My design would be to use a global volatile variable to hold the stack. Then iterate the following steps:

    1. send the sig to 1 thread
    2. that thread gets the sig delivered, dumps its backtrace to the global var, sig handler completes
    3. main thread gets control back, picks up the state from the global
    4. start back to 1 for next thread

It's slower, but it would avoid the uncertainty of polling on a pipe. The timeout makes me uncomfortable (on a loaded system, where we're more likely to want stacks, it's also more likely to hit this timeout).

  • I wonder why you use backtrace but refrain from using backtrace_symbols to get the translation immediately and locally. This would increase the ability of users to self-serve.

  • I recall we still support a non-glibc build on linux (musl?). This may need to be tested and the build flags adjusted accordingly.

  • I would have the thing return an error if nothing can be read from /proc/self/task, or if one of the syscalls fails, or when the backtrace call fails, instead of silently reporting "no threads". Also different errors for each case so we can troubleshoot if it ever happens. This would also clarify for tech support when a particular OS/lib combination should be marked as incompletely supported.

Reviewed 9 of 14 files at r1, 5 of 5 files at r2.
Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @petermattis and @tbg)


c-deps/libroach/stack_trace.cc, line 67 at r2 (raw file):

std::vector<pid_t> ListThreads() {
  std::vector<pid_t> pids;
  DIR* dir = opendir("/proc/self/task");

I notice you have an EINTR loop below. Is it worth wrapping the various syscalls, including opendir/readdir/closedir here, with an EINTR retry loop?


c-deps/libroach/stack_trace.cc, line 83 at r2 (raw file):

      continue;
    }
    pids.push_back(stoll(child));

stoll would throw an exception if the conversion fails.
Do we have exception handling for C++ code somewhere? Or is it going to crash the process?
Also, if an exception is thrown, closedir is not called and the process leaks a file descriptor. I would enclose the entire opendir..closedir block in try..catch for this purpose.


c-deps/libroach/stack_trace.cc, line 91 at r2 (raw file):

uint64_t BlockedSignals(pid_t tid) {
  const std::string path = "/proc/" + std::to_string(tid) + "/status";
  int fd;

This way of reading the file contents does not make me super comfortable: I don't see a strong reason why 1024 bytes are sufficient, and more might blow the stack, perhaps?

What about the classic

std::string contents;
{
  std::ifstream ifs("/proc/.../status");
  contents = std::string( (std::istreambuf_iterator<char>(ifs) ),
                       (std::istreambuf_iterator<char>()    ) );
}

c-deps/libroach/stack_trace.cc, line 122 at r2 (raw file):

  }
  data = data.substr(pos + needle.size());
  return stoull(data, nullptr, 16);

ditto my comment about this throwing exceptions


c-deps/libroach/stack_trace.cc, line 160 at r2 (raw file):

  // interrupted by the stacktrace collection signal.
  action.sa_flags = SA_ONSTACK | SA_RESTART | SA_SIGINFO;
  return sigaction(kStackTraceSignal, &action, nullptr) == 0;

if memory serves the idiom is a first sigaction to retrieve the current flags, then a second sigaction to override the particular flags of interest.
Then at the end, to clean up, a last sigaction to restore what was there the first time around.

The reason why this matters is that the Go runtime used to be (and maybe still is) particular about sa_mask and, on some platforms (I don't know about Linux), sa_restorer.


c-deps/libroach/include/libroach.h, line 620 at r2 (raw file):

// atos on Darwin) to symbolize.
DBString DBDumpThreadStacks();
  

nit: two stray space characters

@knz (Contributor) commented Mar 3, 2020:

Sorry, no: the signal is not synchronous, because it's delivered on a different thread. Obviously I was mistaken.

Still, I don't think the pipe is mandatory; a global volatile var and a spinlock would probably also do.
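
A rough sketch of that alternative, using a C++ atomic rather than a bare volatile (volatile alone doesn't provide the cross-thread ordering needed). This illustrates the suggestion, not what the PR implements:

```cpp
#include <execinfo.h>
#include <signal.h>
#include <atomic>

// One-slot mailbox shared by the coordinator and the signalled thread.
static void* g_addrs[64];
static std::atomic<int> g_depth{-1};  // -1 = empty, >= 0 = stack ready

static void HandlerSketch(int, siginfo_t*, void*) {
  int depth = backtrace(g_addrs, 64);
  g_depth.store(depth, std::memory_order_release);  // publish the result
}

// Coordinator: after signalling one thread, spin (with a bound) until
// the handler publishes, then reset the slot for the next thread.
static bool WaitForStack(int* depth_out) {
  for (long i = 0; i < 100000000; i++) {  // crude deadline
    int d = g_depth.load(std::memory_order_acquire);
    if (d >= 0) {
      *depth_out = d;
      g_depth.store(-1, std::memory_order_relaxed);
      return true;
    }
  }
  return false;  // no response: the caller would mark the thread as such
}
```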

@petermattis petermattis force-pushed the pmattis/thread-stacks branch 2 times, most recently from 7269945 to 6a5d680 on March 6, 2020 19:03
@petermattis (Collaborator, Author) left a comment:

TFTR, @knz!

> I wonder what happens when the internal kernel pipe buffer is full and the program is running with only 1 core, if my reading is right you'd get a deadlock in that case: the sig handler tries to write, but the reader is not scheduled at that time.

Since Linux 2.6.11 the pipe buffer defaults to 64KB; prior to that it was 4KB. I doubt we'd ever see the pipe buffer become full, but even if that happened, I'm not convinced you'd get a deadlock. Your second comment about signals being delivered asynchronously might be recognition of this: even though there is only 1 core, if you have multiple threads, when one thread writes to the pipe another can be reading.

> signals are delivered synchronously. My design would be to use a global volatile variable to hold the stack. Then iterate the following steps:
>
>   1. send the sig to 1 thread
>   2. that thread gets the sig delivered, dumps its backtrace to the global var, sig handler completes
>   3. main thread gets control back, picks up the state from the global
>   4. start back to 1 for next thread

The global variable approach avoids heap allocation of the ThreadStack structure, but what do we do if one of the threads doesn't respond (e.g. because the signal is blocked). With the heap allocation approach you can just leak the structure. With a global variable you can't.

> It's slower, but it would avoid the uncertainty of polling on a pipe. The timeout makes me uncomfortable (on a loaded system, where we're more likely to want stacks, it's also more likely to hit this timeout).

FWIW, grabbing the stacks for 30 threads takes 2-10ms on my Mac laptop when running under Docker. We could increase the length of the timeout, but because we can't guarantee signal delivery we'll always need some timeout. Do you have a better suggestion than 5s?
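
For reference, the bounded wait under discussion looks roughly like this sketch, which polls the pipe's read end with a deadline (the helper name and deadline handling are assumptions):

```cpp
#include <poll.h>
#include <cerrno>

// Wait up to deadline_ms for the next handler response on the pipe's
// read end. Mirrors poll(2): > 0 readable, 0 timeout, < 0 error. A
// ret of 0 is the "timed out before reading all of the stacks" case
// quoted in the review threads above.
int WaitReadable(int read_fd, int deadline_ms) {
  struct pollfd pfd = {read_fd, POLLIN, 0};
  int ret;
  do {
    // NB: retrying with the full deadline resets the clock; real code
    // would recompute the remaining time after an EINTR.
    ret = poll(&pfd, 1, deadline_ms);
  } while (ret < 0 && errno == EINTR);
  return ret;
}
```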

> I wonder why you use backtrace but refrain from using backtrace_symbols to get the translation immediately and locally. This would increase the ability of users to self-serve.

No strong reason. I saw some scary stuff about backtrace_symbols somewhere on the nets. Do you have experience with it? Also, it doesn't provide line number information, which is irritating, but I suppose symbols are better than no symbols. Done.
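
A sketch of the `backtrace`/`backtrace_symbols` pairing just adopted; note that `backtrace_symbols` calls malloc, so it must run on the coordinating thread, never inside the signal handler (the helper name is an assumption):

```cpp
#include <execinfo.h>
#include <cstdio>
#include <cstdlib>

// Symbolize a captured stack on the coordinating thread. The output has
// no line numbers: "binary(function+offset) [address]" is all you get.
void PrintStack(void* const* addrs, int depth) {
  char** syms = backtrace_symbols(addrs, depth);
  if (syms == nullptr) return;  // allocation failure
  for (int i = 0; i < depth; i++) {
    fprintf(stderr, "  %s\n", syms[i]);
  }
  free(syms);  // one free() releases the strings and the array
}
```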

> I recall we still support a non-glibc build on linux (musl?). This may need to be tested and the build flags adjusted accordingly.

Good catch. Do you recall why we support musl? I can't recall anyone ever using it, and supporting it without cause is irritating. I've added some additional #ifdefs and verified that build/builder.sh mkrelease linux-musl works.

> I would have the thing return an error if nothing can be read from /proc/self/task, or if one of the syscalls fails, or when the backtrace call fails, instead of silently reporting "no threads". Also different errors for each case so we can troubleshoot if it ever happens. This would also clarify for tech support when a particular OS/lib combination should be marked as incompletely supported.

It looks like backtrace can't fail; the man page doesn't say anything about a failure return value. I've added some additional error handling, such as for when ListThreads returns nothing.

Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @knz and @tbg)


c-deps/libroach/stack_trace.cc, line 67 at r2 (raw file):

Previously, knz (kena) wrote…

I notice you have an EINTR loop below. Is it worth wrapping the various syscalls, including opendir/readdir/closedir here, with an EINTR retry loop?

opendir/readdir/closedir are C library functions, not system calls. My internet searches don't reveal them being wrapped in EINTR retry loops. Do you know differently?


c-deps/libroach/stack_trace.cc, line 83 at r2 (raw file):

Previously, knz (kena) wrote…

stoll would throw an exception if the conversion fails.
Do we have exception handling for c++ code somewhere? or is it going to crash the process?
Also if an exception is thrown, closedir is not called and the process leaks a file descriptor. I would enclose the entire opendir..closedir block in try..catch for this purpose.

We haven't used C++ exceptions anywhere and they aren't used in RocksDB; I'd prefer not to introduce them here. I've switched to using strtoll, and strtoull elsewhere.
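
A sketch of the exception-free parse; the helper name and error convention are assumptions:

```cpp
#include <sys/types.h>
#include <cerrno>
#include <cstdlib>

// Parse a decimal thread id, returning false instead of throwing on bad
// input (std::stoll would throw std::invalid_argument/out_of_range).
bool ParseTid(const char* s, pid_t* out) {
  char* end = nullptr;
  errno = 0;
  const long long v = strtoll(s, &end, 10);
  if (errno != 0 || end == s || *end != '\0') {
    return false;  // overflow, no digits, or trailing junk
  }
  *out = static_cast<pid_t>(v);
  return true;
}
```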


c-deps/libroach/stack_trace.cc, line 91 at r2 (raw file):

Previously, knz (kena) wrote…

this way to read the file contents does not make me super comfortable: I don't se a strong reason why 1024 bytes are sufficient, and more would blow the stack perhaps?

What about the classic

std::string contents;
{
  std::ifstream ifs("/proc/.../status");
  contents = std::string( (std::istreambuf_iterator<char>(ifs) ),
                       (std::istreambuf_iterator<char>()    ) );
}

The loop reads the entire file no matter how large it is; it just does so in 1024-byte increments. The file is generally smaller than that. I've never been comfortable with the C++ IO idioms.
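
The pattern being defended, roughly: a fixed 1024-byte buffer reused per read(2), accumulating into a growable string, so stack usage stays constant regardless of file size (a sketch, not the PR's exact code):

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <cerrno>
#include <string>

// Read a whole file in 1024-byte chunks: the stack buffer is fixed-size
// while the output string grows, so file size is unbounded.
bool ReadFileContents(const char* path, std::string* contents) {
  const int fd = open(path, O_RDONLY);
  if (fd < 0) return false;
  char buf[1024];
  for (;;) {
    const ssize_t n = read(fd, buf, sizeof(buf));
    if (n < 0) {
      if (errno == EINTR) continue;  // interrupted: retry the read
      close(fd);
      return false;
    }
    if (n == 0) break;  // EOF
    contents->append(buf, n);
  }
  close(fd);
  return true;
}
```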


c-deps/libroach/stack_trace.cc, line 122 at r2 (raw file):

Previously, knz (kena) wrote…

ditto my comment about this throwing execptions

Switched to strtoull.


c-deps/libroach/stack_trace.cc, line 160 at r2 (raw file):

if memory serves the idiom is a first sigaction to retrieve the current flags, then a second sigaction to override the particular flags of interest.
Then at the end, to clean up, a last sigaction to restore what was there the first time around.

I've restructured this code so we restore the old signal handler on return.

The reason why this matters is that the Go runtime used to (maybe still is) particular about sa_mask and, on some platforms (I don't know about linux) sa_restorer.

Can you point me towards where this is (or was) being done in the Go runtime? When I looked around I didn't see anything problematic, but the signal handling is complex and I might have missed something. Note that sa_mask is empty here.
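
For reference, the restore-on-return shape being described, as a sketch (the wrapper name and callback signature are assumptions; the PR's actual structure may differ):

```cpp
#include <signal.h>
#include <cstring>

// Install the stack-trace handler for the duration of collection, then
// put back whatever handler (e.g. the Go runtime's) was there before.
bool WithStackTraceHandler(int sig,
                           void (*handler)(int, siginfo_t*, void*),
                           bool (*collect)()) {
  struct sigaction action, old_action;
  memset(&action, 0, sizeof(action));
  action.sa_sigaction = handler;
  sigemptyset(&action.sa_mask);  // sa_mask left empty, as noted above
  action.sa_flags = SA_ONSTACK | SA_RESTART | SA_SIGINFO;
  if (sigaction(sig, &action, &old_action) != 0) return false;
  const bool ok = collect();
  sigaction(sig, &old_action, nullptr);  // restore on return
  return ok;
}
```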

@knz (Contributor) left a comment:

This looks about good to go.

> Do you have a better suggestion than 5s?

No better suggestion, no. We can also just try your current approach out and see where that brings us during the release cycle.

> backtrace_symbols ... Do you have experience with it?

Casual experience. It worked in a small experiment I once wrote. I don't know about real-world use. I can imagine it could break if we ran strip on the executables.

> Do you recall why we support musl?

Yes, this is the only way we currently have to produce statically linked binaries on linux. We had early customers that wanted/asked for this.

I also have a few leftover comments; see below.

Reviewed 10 of 11 files at r3.
Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @knz, @petermattis, and @tbg)


c-deps/libroach/db.cc, line 813 at r3 (raw file):

DBStatus DBGetStats(DBEngine* db, DBStatsResult* stats) {
  auto stacks = DumpThreadStacks();
  fprintf(stderr, "%s\n", stacks.c_str());

Did you intend to keep this?


c-deps/libroach/stack_trace.cc, line 67 at r2 (raw file):

Previously, petermattis (Peter Mattis) wrote…

opendir/readdir/closedir are C library functions, not system calls. My internet searches don't reveal them being wrapped in EINTR retry loops. Do you know differently?

I went and checked the source code of GNU libc and BSD libc; neither contains EINTR retry loops in the directory functions. They do use open/read/etc. under the hood, though. Obviously, given these are not real files, the window in which a signal could be received is very small, but the race condition is still theoretically there.


c-deps/libroach/stack_trace.cc, line 160 at r2 (raw file):

Previously, petermattis (Peter Mattis) wrote…

if memory serves the idiom is a first sigaction to retrieve the current flags, then a second sigaction to override the particular flags of interest.
Then at the end, to clean up, a last sigaction to restore what was there the first time around.

I've restructured this code so we restore the old signal handler on return.

The reason why this matters is that the Go runtime used to (maybe still is) particular about sa_mask and, on some platforms (I don't know about linux) sa_restorer.

Can you point me towards where this is (or was) being done in the Go runtime? When I looked around I didn't see anything that was problematic, but the signal handling is complex and I might have missed something. Note that sa_mask is empty here.

Here's the code in os_linux.go:

//go:nosplit
//go:nowritebarrierrec
func setsig(i uint32, fn uintptr) {
        var sa sigactiont
        sa.sa_flags = _SA_SIGINFO | _SA_ONSTACK | _SA_RESTORER | _SA_RESTART
        sigfillset(&sa.sa_mask)
        // Although Linux manpage says "sa_restorer element is obsolete and
        // should not be used". x86_64 kernel requires it. Only use it on
        // x86.
        if GOARCH == "386" || GOARCH == "amd64" {
                sa.sa_restorer = funcPC(sigreturn)
        }
...

So as you see 1) there's sa_restorer funkiness 2) sa_flags is different from yours.

Here's another example thing that was happening in sys_darwin.go:

//go:nosplit
//go:nowritebarrierrec
func setsig(i uint32, fn uintptr) {
        var sa sigactiont
        sa.sa_flags = _SA_SIGINFO | _SA_ONSTACK | _SA_RESTART
        sa.sa_mask = ^uint32(0)
        sa.sa_tramp = unsafe.Pointer(funcPC(sigtramp)) // runtime·sigtramp's job is to call into real handler
        *(*uintptr)(unsafe.Pointer(&sa.__sigaction_u)) = fn
        sigaction(i, &sa, nil)
}

This approach was actually catastrophic, because on Darwin it's not actually possible to retrieve the current value of sa_tramp with sigaction(), so the general idea of "get current settings, customize, then restore" does not work. This was biting us very hard in go-libedit.

I think this was changed in go 1.12/1.13 though for Darwin.


c-deps/libroach/stack_trace.cc, line 127 at r3 (raw file):

  const std::string needle("SigBlk:");
  size_t pos = data.find(needle);
  if (pos == data.npos) {

I would certainly recommend reporting an error in this case. This sounds like a serious condition which we'd need to hear about (even perhaps record it in telemetry in Go).

@petermattis (Collaborator, Author) left a comment:

Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @knz and @tbg)


c-deps/libroach/db.cc, line 813 at r3 (raw file):

Previously, knz (kena) wrote…

Did you intend to keep this?

Oops. No. Removed.


c-deps/libroach/stack_trace.cc, line 67 at r2 (raw file):

Previously, knz (kena) wrote…

I went and checked the source code of GNU libc and BSD libc; neither contains EINTR retry loops in the directory functions. They do use open/read/etc. under the hood, though. Obviously, given these are not real files, the window in which a signal could be received is very small, but the race condition is still theoretically there.

Agreed, the race condition is there. I've wrapped these calls in EINTR retry loops, along with a few other places where system calls were being made without retry loops.
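
The retry idiom being added looks roughly like glibc's TEMP_FAILURE_RETRY; a sketch (this uses a GNU statement-expression, which the gcc/clang builds libroach uses both accept):

```cpp
#include <cerrno>

// Retry a syscall expression while it fails with EINTR; only suitable
// for calls with the -1/errno return convention.
#define RETRY_ON_EINTR(expr)                  \
  ({                                          \
    long _ret;                                \
    do {                                      \
      _ret = (long)(expr);                    \
    } while (_ret == -1 && errno == EINTR);   \
    _ret;                                     \
  })

// Usage: ssize_t n = RETRY_ON_EINTR(read(fd, buf, sizeof(buf)));
```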


c-deps/libroach/stack_trace.cc, line 160 at r2 (raw file):

Previously, knz (kena) wrote…

[quoting the os_linux.go and sys_darwin.go setsig discussion above]

Thanks for the pointers to the code. The only difference in the flags is the presence of SA_RESTORER. As far as I can tell, SA_RESTORER is something internal to glibc and will be populated by glibc; see http://man7.org/linux/man-pages/man2/sigreturn.2.html. The Go runtime is doing this itself because it is making direct system calls.


c-deps/libroach/stack_trace.cc, line 127 at r3 (raw file):

Previously, knz (kena) wrote…

I would certainly recommend reporting an error in this case. This sounds like a serious condition which we'd need to hear about (even perhaps record it in telemetry in Go).

Agreed, this would be very surprising. I've added error messaging to this function. Actually recording this in telemetry would be extremely onerous, but at least we'll see the error in the stack dump.

@knz (Contributor) left a comment:

:lgtm:

Reviewed 1 of 11 files at r3, 2 of 2 files at r4.
Reviewable status: :shipit: complete! 1 of 0 LGTMs obtained (waiting on @tbg)

@petermattis (Collaborator, Author) commented:

TFTR, @knz and @tbg! If we see any problems with this during the stability period, I'm more than willing to revert.

@petermattis petermattis merged commit c9aeb37 into cockroachdb:master Mar 9, 2020
@petermattis petermattis deleted the pmattis/thread-stacks branch March 9, 2020 14:48