
fasterthanli.me - Live reloading article #281

Closed
InBetweenNames opened this issue Dec 9, 2023 · 2 comments

InBetweenNames commented Dec 9, 2023

Hi fasterthanli.me!

Truly excellent article you've written about hot reloading in Rust! I feel funny filing an "issue" about it :) but I just wanted to point out a small nitpick:

https://fasterthanli.me/articles/so-you-want-to-live-reload-rust

In your article, you write the following code:

  printf("[%x] registering destructor\n", thread_id());
  __cxa_thread_atexit_impl(dtor, NULL, __dso_handle);  

Actually, this code technically violates the contract of __cxa_thread_atexit_impl for a perhaps surprising reason:

1 - The function is intended to be used only with C++ code, which means it really is a C++ destructor being registered, and so...
2 - The obj parameter for this function is actually the this pointer for the destructor in question, which may never be NULL!
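
For reference, glibc declares the function roughly like this (parameter names may differ); the key point is that whatever gets passed as obj is handed back to the destructor as its argument when it eventually runs:

// roughly glibc's declaration: obj becomes dtor's argument later; for a
// C++ thread_local object the compiler passes the object's address (its this)
int __cxa_thread_atexit_impl(void (*dtor)(void *), void *obj, void *dso_symbol);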

So, by passing NULL in as obj, the function isn't being used the way a real C++ or Rust program would invoke it. In your example, you use this to show that the destructor should run on the thread it was registered on, otherwise you'll see the wrong data:

__thread data_t *data = NULL;
//...
void dtor(void *p) {
  printf("[%x] dtor called! data = %p\n", thread_id(), data);
  free(data->array);
  free(data);
  data = NULL;
}

// if dtor runs on a different thread, data binds differently

While true, a more realistic example would look something like the following:

void dtor(void *p) {
  data_t *q = (data_t*) p;
  printf("[%x] dtor called! data = %p\n", thread_id(), q);
  free(q->array);
  free(q);
}
//... here, data is bound in the current thread and passed to __cxa_thread_atexit_impl,
//  where it will become the 'p' parameter in dtor later
  printf("[%x] registering destructor\n", thread_id());
  __cxa_thread_atexit_impl(dtor, data, __dso_handle);  

This is a lot closer to what you'd see a C++ (or Rust) compiler emit (with a bunch of other TLS-related stuff omitted).
And actually, for this example to fail when the destructor runs on another thread, the destructor itself would have to refer to some other TLS memory. This is allowed, but I expect it would be really weird to see! Globals I'd expect, but TLS? Still, the spec is the spec, and you should definitely run your destructors on the same thread :)
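
To make "refer to some other TLS memory" concrete, here's a purely illustrative sketch (the cleaned_up flag is a made-up name, data_t is the struct from the example above): a destructor shaped like this really does misbehave if it runs on the wrong thread, because it touches a second thread-local on top of the obj it receives:

#include <stdlib.h>

// purely illustrative: a destructor that also touches a second thread-local
__thread int cleaned_up = 0;

void dtor(void *p) {
  data_t *q = (data_t *)p;
  free(q->array);
  free(q);
  // cleaned_up is resolved through TLS at the point of this store: if dtor
  // runs on another thread, it sets *that* thread's flag, and the exiting
  // thread's own flag never gets set
  cleaned_up = 1;
}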

Anyway I really enjoyed the article, thank you very much!

@fasterthanlime (Owner) commented

> Hi fasterthanli.me!

No dot for me, I'm not a domain name 😌

I added a note linking to this comment + thanking you for the insight!

@InBetweenNames (Author) commented

Hah, apologies for that!

One last thing to make you aware of: the above situation, with TLS destructors referring to other TLS variables, does indeed happen in Rust's very own standard library, as it turns out. Observe:

https://github.com/rust-lang/rust/blob/3cbb93223f33024db464a4df27a13c7cce870173/library/std/src/sys/thread_local/fast_local.rs

So in here, each thread local actually gets its own STATE thread local assigned to it, which is used to determine whether the thread local has been initialized or not. If the destructor gets run on a different thread, the STATE variable will refer to the wrong place (as it is accessed directly as a thread local and not through a reference) and funny things will happen.
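
Sketched in C terms rather than the actual Rust code (STATE, VALUE, and value_dtor are made-up names that just approximate the shape of the problem), the pattern looks something like this:

#include <stdlib.h>

// a C approximation of the pattern, not the real Rust implementation
enum state { UNINITIALIZED, INITIALIZED, DESTROYED };

__thread enum state STATE = UNINITIALIZED;  // per-thread lifecycle flag
__thread void *VALUE = NULL;                // the thread local it guards

void value_dtor(void *p) {
  // STATE is accessed directly as a thread local: if this destructor runs
  // on a different thread, it flips *that* thread's STATE while the
  // registering thread still believes its value is alive
  STATE = DESTROYED;
  free(p);  // p is the per-thread value that was passed as obj at registration
}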

Anyway, I think that just about bottoms out the rabbit hole. Thanks again!
