Improvements in Mapper Performance #153
Comments
@tomjoseph83 can you provide some information here about what the problem is and how you plan to address it? How do we know that we have a performance issue? Has anyone quantified it? What strategies do we have for improving it? When can we close this issue (i.e. what's our threshold for having "fixed" the performance issue)?
I'm going to answer my own questions here, because I've already invested a bunch of time in them.

How do we know we have a performance issue? Has anyone quantified it?

There were reports of OpenPOWER host systems issuing hard lockup warnings and sometimes panicking in their normal reboot path. OPAL/skiboot had already requested the system be reset, so the problem was on the BMC side. I captured a chart of per-process CPU activity on the BMC across a host reboot:

[chart: per-process BMC CPU activity across a host reboot]

The events of the host reboot begin just to the left of the middle of the picture. The Python process PID 1021 is the mapper process (6th process bar from the top - the thick blue section at the very top represents the processor itself), and it hogs the processor for most of the power-off phase of the reboot. Counting the chart's major ticks covered by the power-off phase, we can see it consumes roughly 9 seconds of the reboot process. This is perilously close to the 10 second period used by the soft watchdog on the host to detect lockups, so it's not surprising that, with some variance in BMC behaviour, we trigger the lockup warnings on the host.

When can we close this issue?

I think a fairly good indicator of the mapper being improved is being well away from triggering the hard-lockup warnings on the host. Taking at most half the hard-lockup detection period (i.e. no more than 5 of the 10 seconds) feels like a decent goal.

What strategies do we have for improving it?

As outlined in #1661 we can rewrite the mapper as a native (C++) application and shed all the overhead associated with Python and its interpreter. @edtanous has made some progress, as outlined in his openbmc/openbmc#2813 (comment) post. However, the mapper has a very thorough and determined lack of unit or integration tests, so faithfully re-implementing its behaviour is always going to be a challenge.

Alternatively, we can apply surgical tweaks, but to do so we need to understand where we are losing all our performance. To that end I've ported pyflame to 32-bit ARM, added support for prelinked shared libraries, and written a bitbake recipe that has been sent upstream. The flamegraph for the mapper across a host reboot looks like this:

(source: pyflame_host-reboot.log.svg.zip)

Exploring the flame graph (grab the zip, extract the SVG and open it in a browser for interactive exploration) shows where the mapper spends its cycles. Based on that, I've developed two patch series: one for openbmc/pyphosphor to improve the performance of PathTree, and another for openbmc/phosphor-objmgr to improve the performance of the mapper itself.

Outcomes

The two series above reduce the time between receiving the reboot request and removing power from the chip from ~9 seconds to ~4 seconds, inside the 5 second goal above. It's hard to provide a comparative screenshot of the flamegraph, so I've just attached the source: pyflame_host-reboot_fixed.log.svg.zip

The mapper now takes significantly fewer CPU cycles to do its work.
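For a sense of what the PathTree side of those tweaks looks like in spirit, here's a minimal, hypothetical sketch (not the actual pyphosphor patches; the class and method names are illustrative) of a path tree that pairs a flat dict for O(1) exact-path lookups with nested dicts keyed by path element, so subtree queries only walk the relevant branch rather than scanning every stored path:

```python
# Hypothetical sketch only - illustrates the general optimization pattern,
# not the real pyphosphor PathTree implementation.

class PathTree(object):
    def __init__(self):
        self._root = {}     # nested dicts: path element -> child dicts
        self._values = {}   # flat cache: full path -> value

    @staticmethod
    def _elements(path):
        # '/a/b' -> ['a', 'b']; tolerates leading/trailing slashes
        return [e for e in path.split('/') if e]

    def __setitem__(self, path, value):
        node = self._root
        for element in self._elements(path):
            node = node.setdefault(element, {})
        self._values[path] = value

    def __getitem__(self, path):
        # Exact lookups never touch the tree: one dict hit.
        return self._values[path]

    def __contains__(self, path):
        return path in self._values

    def subtree(self, path):
        """Yield (path, value) for every entry strictly below 'path'."""
        elements = self._elements(path)
        node = self._root
        for element in elements:
            node = node[element]           # walk only the relevant branch
        prefix = '/' + '/'.join(elements) if elements else ''
        stack = [(prefix, node)]
        while stack:
            prefix, children = stack.pop()
            for element, child in children.items():
                full = prefix + '/' + element
                if full in self._values:
                    yield full, self._values[full]
                stack.append((full, child))
```

The flat cache is the kind of trade that cuts interpreter work on hot lookup paths at the cost of a little extra memory, which is exactly where a Python process burning 9 seconds of CPU tends to benefit.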
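And as a sanity check on the closing criterion above, a small hypothetical helper (again, not from the patch series; `power_off_phase` is a stand-in callable) that reports whether the power-off phase stays within half the 10 second lockup-detection window:

```python
# Hypothetical measurement helper for the closing criterion suggested above.
import time

LOCKUP_PERIOD_S = 10.0
TARGET_S = LOCKUP_PERIOD_S / 2  # stay within half the detection period

def within_target(power_off_phase):
    """Run the power-off callable and report its elapsed wall time."""
    start = time.monotonic()
    power_off_phase()
    elapsed = time.monotonic() - start
    print('power-off phase: %.1fs (target <= %.1fs)' % (elapsed, TARGET_S))
    return elapsed <= TARGET_S
```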
Recently, we've found some pretty easy answers to the above "do we have a performance problem?" question.
Closing: using the C++ mapper.
Thursday Feb 01, 2018 at 07:07 GMT
Originally opened as openbmc/openbmc#2860