
VM event loop for dispatching events such as cache resets, object allocations and more #3490

YorickPeterse opened this issue Aug 25, 2015 · 3 comments


@YorickPeterse commented Aug 25, 2015

Currently the VM has a number of places where, based on some condition, an extra "event" is triggered. For example, System::vm_inc_global_serial() logs a message whenever -Xserial.debug is set. Another example is the object allocation code emitting dtrace/systemtap probes. A third example would be signal handlers. In almost all (if not all) cases this logic is hardcoded on the C++ side, which means there is no way to subscribe to these events from Ruby.

Ideally there would be some sort of event loop to which the C++ VM publishes events. Ruby code in turn could subscribe to and act upon these events. Such a system could then serve as the building blocks for #3488. @brixen and I have discussed this on a few occasions in the past, but to date no concrete proposal has been drafted.

As a mockup, I'd envision usage of this system to be something along the lines of the following (the VM.subscribe/VM.publish names are purely illustrative):

    VM.subscribe(:signal) do |signal_name|
      case signal_name
      when 'USR1'
        # ...
      when 'TERM'
        # ...
      end
    end

    VM.publish(:signal, 'TERM')

There are a few tricky things here to consider:

  1. VM code would have to call back into Ruby. Since VM code can run before Ruby code is fully set up, events could be fired before any Ruby-side subscriber even exists.
  2. To avoid degrading performance, the subscribers should be executed in a single background thread. Using one thread per handler is also an option, but this could put too much strain on the system.
  3. When passing objects from a producer to the subscribers, said objects should be visible to the GC. We don't want junk to be passed around because the GC accidentally collected an object.
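The dispatch model in point 2 could be sketched as follows. This is a minimal illustration, not the proposed implementation: the EventLoop class, event names, and payloads are all hypothetical; producers push onto a thread-safe queue and all subscribers run on a single background thread.

```ruby
# Minimal sketch of a single-background-thread event loop (hypothetical
# EventLoop class; in the proposal the producer would be the C++ VM).
class EventLoop
  def initialize
    @queue       = Queue.new  # thread-safe: producers push, dispatcher pops
    @subscribers = Hash.new { |hash, key| hash[key] = [] }
    @dispatcher  = Thread.new { dispatch }
  end

  # Ruby code registers interest in a named event.
  def subscribe(event, &handler)
    @subscribers[event] << handler
  end

  # A producer publishes an event; this never blocks on slow handlers.
  def publish(event, payload = nil)
    @queue << [event, payload]
  end

  private

  # Runs forever on the single background thread; Queue#pop blocks
  # until an event is available.
  def dispatch
    loop do
      event, payload = @queue.pop
      @subscribers.fetch(event, []).each { |handler| handler.call(payload) }
    end
  end
end

# Example: one subscriber receiving a published signal event.
events = EventLoop.new
events.subscribe(:signal) { |name| puts "received #{name}" }
events.publish(:signal, 'TERM')
sleep 0.1 # give the dispatcher thread a moment to run the handler
```

Because publishing only enqueues, a slow handler delays other handlers (they share one thread) but never the producer, which matches the performance concern above.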

A bunch of use cases for this that I can think of off the top of my head:

  • Tracking object allocations
  • Tracking method cache resets
  • Tracking constant cache resets
  • Emitting events to statsd and friends (this might even allow us to move this logic from C++ to Ruby)
  • Signal handling

@YorickPeterse commented Sep 23, 2015

Something I'd also really like to see is a system to track object allocations (per class) on a per method basis. This would allow one to profile a block of code and see which methods trigger the most object allocations. This information could then in turn be used to see what the impact of this would be on the garbage collector (and any stop-the-world events that might occur).
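As a rough illustration of the kind of per-class, per-method allocation data described here, MRI's stdlib objspace extension already records something similar (this is an MRI example, not a Rubinius API; Rubinius would need its own hook):

```ruby
require 'objspace' # MRI's stdlib allocation-tracing extension

# Count objects allocated while the given block runs, grouped by
# [class, allocating method]. Sketch only; relies on MRI's
# ObjectSpace.trace_object_allocations.
def allocations_per_method
  counts = Hash.new(0)
  ObjectSpace.trace_object_allocations { yield }
  # Allocation metadata recorded during the trace survives until cleared.
  ObjectSpace.each_object(Object) do |obj|
    # Objects allocated outside the traced block have no recorded source.
    next unless ObjectSpace.allocation_sourcefile(obj)
    method = ObjectSpace.allocation_method_id(obj) || :unknown
    counts[[obj.class, method]] += 1
  end
  ObjectSpace.trace_object_allocations_clear
  counts
end

# Example: count objects allocated inside a block (kept alive in an
# array so the GC can't collect them before the heap scan).
kept = []
counts = allocations_per_method { 3.times { kept << Object.new } }
# counts now maps [class, method] pairs to allocation counts.
```

A Rubinius version of this could feed the same counts through the event loop proposed above, letting a profiler correlate allocation-heavy methods with GC pauses.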

@YorickPeterse commented Sep 23, 2015

The rationale for the above is a case such as the following: you have code where you know lots of object allocations happen (because you wrote the code), but you want definitive proof that they actually have a performance impact. Code creating millions of Strings is one thing, but if that results in a significant, measurable slowdown, it suddenly becomes a lot more interesting.

@brixen commented Jan 4, 2020

In general, Rubinius enthusiastically supports experimentation and welcomes ideas for features that enhance developer ability to create amazing software.

The focus for Rubinius in the near term is on the following capabilities:

  1. Instruction set
  2. Debugger
  3. Profiler
  4. Just-in-time compiler
  5. Concurrency
  6. Garbage collector

Contributions in the form of PRs for any of the areas of focus above are appreciated. Once those core capabilities are more robust, it will be possible to support more efforts around experimentation.

@brixen closed this Jan 4, 2020