high cpu usage with abaco source #270

Open
ggggggggg opened this issue Apr 25, 2022 · 2 comments
@ggggggggg (Collaborator)

I'm running in the bluefors system with an Abaco source reported as 2x64 (though I think it should actually be 2x32), a reported data rate of 32.3 MB/s, and 4.096 µs/sample. With edge-multi triggers and OFF files enabled, top reports 250-300% CPU usage for dastard.

That compares to about 200% CPU usage for a TDM system of 8x32 at 200 MB/s, which should be doing more work than the configuration above. Likely culprits include parsing all the packet headers and the biased unwrapping.

@joefowler (Member)

I am seeing the same problem; thanks for the specifics. This will be an important priority in the very near future.

@joefowler (Member)

We plan to do a head-to-head comparison between µMUX with Abaco+UDP and a TDM system at comparable data rates. (If a TDM system runs at 200 MB/s but only half of that comes from feedback channels, is a "comparable" µMUX system one running at 100 MB/s or at 200 MB/s?)

Things to investigate:

  1. How does the load reported by top change as you change the number of channels? How does the profile from the Go profiling tool change? (See the pprof sketch after this list.)
  2. I think AbacoSource.readerMainLoop() is doing quite a few activities in a single goroutine. The most expensive might be the group.demuxData(...) call, which the profiler attributes roughly 10-20% of all CPU time to. If necessary, we could parallelize the demux step, perhaps with one goroutine per channel group (see the fan-out sketch after this list).
  3. EMT triggering algorithms can probably be made more compute-efficient. (But we're not there yet!)
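
On item 1: I don't remember whether dastard already exposes a pprof endpoint or a CPU-profile flag, so here is a minimal sketch of wiring one in with the standard net/http/pprof package (the port and the placement in main are arbitrary, not dastard's actual startup code):

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux
)

func main() {
	// Serve the profiling endpoints in the background; the data-acquisition
	// and triggering code would run unchanged alongside this.
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... normal dastard startup would go here; block so the sketch keeps running.
	select {}
}
```

With that in place, `go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30` collects a 30-second CPU profile while the source is running, and the `top` and `web` commands inside pprof show where readerMainLoop, demuxData, and the EMT code actually spend their time.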

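On item 2: a rough sketch of what a per-group fan-out could look like, one goroutine per channel group joined with a sync.WaitGroup before the data move on to triggering. The channelGroup type and demuxData signature here are placeholders, not dastard's real API:

```go
package main

import (
	"fmt"
	"sync"
)

// channelGroup stands in for whatever type owns demuxData in dastard;
// the real type and method signature may differ.
type channelGroup struct {
	firstChan int
	nchan     int
}

// demuxData is a placeholder for the existing single-threaded demux
// of this group's channels out of the raw packet payloads.
func (g *channelGroup) demuxData(raw []byte) {
	_ = raw // ... existing demux logic ...
}

// demuxAllParallel fans the demux step out across channel groups,
// one goroutine per group, and waits for all of them to finish before
// the caller hands the results to the triggering stage.
func demuxAllParallel(groups []*channelGroup, raw []byte) {
	var wg sync.WaitGroup
	for _, g := range groups {
		wg.Add(1)
		go func(g *channelGroup) {
			defer wg.Done()
			g.demuxData(raw) // each group reads only its own channels from raw
		}(g)
	}
	wg.Wait()
}

func main() {
	groups := []*channelGroup{{firstChan: 0, nchan: 32}, {firstChan: 32, nchan: 32}}
	demuxAllParallel(groups, make([]byte, 1<<16))
	fmt.Println("demuxed", len(groups), "groups in parallel")
}
```

Whether this actually helps depends on how much of the per-packet work (header parsing, unwrapping) is shared rather than per-group, and on how many groups there are relative to cores; if every group scans the same raw buffer, memory bandwidth may limit the gain.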