Describe the issue
When running with JFR enabled, the CPU sampler produces more samples than specified in the configuration (.jfc) file.
The effective sampling rate is N times the rate specified in the configuration file (.jfc), where N is the number of active threads. This creates redundancy and overhead that scale with the number of threads at any given time.
This is a recent change that appears to have been introduced by #8517. The intent of that PR seems to have been to give each thread its own timer (on Linux). In the implementation, every timer raises SIGPROF each time it expires, and any thread can be stopped to handle any other thread's expiry signal. I am unsure why each thread must have its own dedicated timer.
Why does #8517 add extra sampling? Was the intention to direct each timer's expiry signal to its respective thread? If not, why not simply increase the frequency of the old itimer implementation? That would allow for a more even sample distribution. Currently, all the timers expire at roughly the same time every period, so sampling is bursty.
Steps to reproduce the issue
Please include both build steps as well as run steps
Download the latest EA build from https://github.com/graalvm/oracle-graalvm-ea-builds/releases
Run with JFR enabled: we get more samples per period than the 1/s specified in the configuration.
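For reference, a 1/s sampling rate is typically expressed in a .jfc file like the fragment below (the `jdk.ExecutionSample` event and `period` setting follow the stock JDK `default.jfc` conventions; the exact file used for this reproduction may differ):

```xml
<event name="jdk.ExecutionSample">
  <setting name="enabled">true</setting>
  <setting name="period">1 s</setting>
</event>
```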
Describe GraalVM and your environment: