Performance issues when looking up members of Lgi objects #48

Closed
zevv opened this Issue Apr 3, 2013 · 11 comments

zevv commented Apr 3, 2013

I'm using Lgi to create animated graphs with cairo, and I've run into some performance issues. My app calls cairo's move_to() and line_to() functions a lot, and this takes much longer than I expected.

Some investigation showed that a lot of time is spent by Lgi looking up methods on objects. For example, one thousand calls of

cr:line_to(x, y)

take about 20 msec on my PC. The time is not consumed by the actual call into cairo, because one thousand lookups of the function without calling it:

local l = cr.line_to

take nearly as long.

My current workaround is to cache the functions in local variables and call those:

local l = cr.line_to
for i = 1, 10000 do  -- thousands of iterations
    l(cr, x, y)
end

This results in a speedup of over 20 times.
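
Something like this sketch shows the comparison I made; it is illustrative only, it assumes an already-created lgi cairo context cr, and it uses os.clock() for timing, so absolute numbers will differ per machine:

local function bench(cr, n)
  local t0 = os.clock()
  for i = 1, n do
    cr:line_to(i, i)               -- method looked up through Lgi on every call
  end
  local t1 = os.clock()

  local line_to = cr.line_to       -- look the method up once
  for i = 1, n do
    line_to(cr, i, i)              -- plain Lua function call, no lookup
  end
  local t2 = os.clock()

  print(("per-call lookup: %.3f s, cached local: %.3f s"):format(t1 - t0, t2 - t1))
end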

Would it be feasible to add some form of caching or memoization to the Lgi objects themselves, so that the user does not need to do this caching at the application level? I've looked into the Lgi code, but since I am not at all familiar with the GObject Introspection machinery, grokking the code will probably take me some time.

If adding memoization is not an option, it might be a good idea to note this performance penalty in the documentation so future Lgi users are aware of it.

Thanks,

Ico

zevv commented Apr 4, 2013

I've made an absolutely naive attempt to add some form of caching to component.mt:__index() and record.struct_mt:_element() by simply caching the result of these functions in a weak table. I have no idea whether this is a sane thing to do, but it does not seem to break anything in my app, and it speeds up lookups by over 250% in my particular case.

I feel this might be worth looking into by someone with a bit more experience in this code base.
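
Roughly, the idea looks like this (a simplified sketch, not the actual patch; resolve stands in for the real Lgi lookup):

local cache = setmetatable({}, { __mode = 'k' })  -- weak keys: entries die with their object

local function cached_index(obj, key, resolve)
  local per_obj = cache[obj]
  if per_obj == nil then
    per_obj = {}
    cache[obj] = per_obj
  end
  local value = per_obj[key]
  if value == nil then
    value = resolve(obj, key)   -- the expensive introspection-based lookup
    per_obj[key] = value
  end
  return value
end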

zevv added a commit to zevv/lgi that referenced this issue Apr 4, 2013

zevv added a commit to zevv/lgi that referenced this issue Apr 4, 2013

zevv added a commit to zevv/lgi that referenced this issue Apr 4, 2013

Elv13 commented Apr 4, 2013

Hi,

Just to add a little KCacheGrind profile trace that gives an idea of how many times LGI's _mt is called for enums during a single tag switch in awesome WM:

https://www.dropbox.com/s/y7xp9t2diycooov/test.tar.xz

(Spoiler: around 600 000 times.)

So, please, someone, make caching work, at least for static enums.

pavouk (Owner) commented Apr 8, 2013

Wow.

I agree that lookup of members in records/objects/enums is definitely not speed-optimized. I went by the golden rule of optimization - "don't do it" - but now it is apparently time for the second part of the rule, whose exact phrasing I forget :-)

No matter how good the optimizations we come up with are, it will always be faster to cache the methods in local variables, as Ico suggested at the beginning of this report.

However, seeing the numbers, I agree that some caching is definitely needed here. Ico's patch looks very sensible; I'll give it a try and submit it as soon as possible if no problems are found.

However, I'm a bit sceptical about Elv13's case - static fields of enums are already 'cached'; in fact, they are present directly in the enum's table. mt:__index() is used here for backward mapping, i.e. calculating the symbolic name (or even a set of symbolic names in the case of bitfields) from a numeric value. I don't think it is a good idea to cache these. I'll try to look more closely at this use case too and see whether I can come up with something.
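
To illustrate the two directions (a small sketch, assuming an introspectable Gtk typelib is installed):

local lgi = require 'lgi'
local Gtk = lgi.Gtk

-- Forward lookup (name -> value): a plain table field, effectively already cached.
local v = Gtk.Orientation.HORIZONTAL

-- Backward mapping (value -> name): this is where mt:__index() gets invoked.
print(Gtk.Orientation[v])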

Thank you guys for the great report, and I apologize for the communication latency :-(

zevv commented Apr 9, 2013

Please do review at least twice and test a lot, because I honestly have no clue what I'm doing and was just fixing my particular case.

pavouk (Owner) commented May 6, 2013

I'm working on a slightly generalized approach, caching lookups of component_mt:_element(). However, the problem is cache invalidation; overrides often install overridden versions of methods, and the override is ignored when the original is found in the cache. I'm not sure how to solve this: either I declare that it is the overrider's duty to invalidate the cache (by a simple Type._cache = nil), or I try to invent some sophisticated auto-invalidating mechanism (and further complicate the already silly maze of component.mt and its descendants).
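
A stripped-down sketch of the conflict (plain Lua, no real lgi internals; resolve_element stands in for component_mt:_element()):

local introspected = { line_to = function() return 'original line_to' end }

local function resolve_element(t, name)     -- pretend-expensive introspection lookup
  return introspected[name]
end

local Type = { _cache = {} }

local function lookup(t, name)
  local hit = t._cache and t._cache[name]
  if hit ~= nil then return hit end
  local value = resolve_element(t, name)
  t._cache = t._cache or {}
  t._cache[name] = value                    -- memoized; later overrides are not seen
  return value
end

print(lookup(Type, 'line_to')())            -- warms the cache: 'original line_to'
introspected.line_to = function() return 'overridden line_to' end
print(lookup(Type, 'line_to')())            -- still 'original line_to', override ignored
Type._cache = nil                           -- the manual invalidation mentioned above
print(lookup(Type, 'line_to')())            -- now 'overridden line_to'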

psychon (Collaborator) commented May 7, 2013

Sorry for having no clue (so much magic involved!), but the overrides are saved in a Lua table and so is the cache, right? Why do these have to be two different things?

I never managed to figure out the internals here, but I would expect something like an empty Lua table with an __index metamethod that lazily adds members to the table when needed (so that __index is only ever called on the first access). This __index metamethod would look up the correct member from GI, and overrides could just fill the table directly, thus bypassing __index.

(Lua only calls __index if no entry for the given key already exists, right?)
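
Something like this (a plain-Lua sketch of the idea, not lgi's actual code; resolve stands in for the introspection lookup):

local function make_members(resolve)
  return setmetatable({}, {
    __index = function(t, key)
      local value = resolve(key)   -- e.g. the GObject-Introspection lookup
      rawset(t, key, value)        -- cached: __index will not fire for this key again
      return value
    end,
  })
end

local members = make_members(function(key) return 'introspected ' .. key end)
members.line_to = 'override'       -- overrides assign directly, bypassing __index
print(members.line_to)             -- 'override'
print(members.move_to)             -- 'introspected move_to', resolved once and cached

A later direct assignment also overwrites an already-cached entry, so overrides win whether they are installed before or after the first lookup.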

Mic92 commented Jun 15, 2013

This patch seems to break my awesome WM.
I have compiled lgi against LuaJIT, but that should not make any difference.

awesome[26599]: W: awesome: luaA_dofunction:78: error while running function
awesome[26599]: stack traceback:
awesome[26599]: [C]: in function '__newindex'
awesome[26599]: /usr/share/awesome/lib/wibox/widget/textbox.lua:63: in function 'set_markup'
awesome[26599]: /usr/share/awesome/lib/naughty.lua:397: in function 'notify'
awesome[26599]: /home/joerg/.config/awesome/rc.lua:60: in function </home/joerg/.config/awesome/rc.lua:55>
awesome[26599]: [C]: at 0x00417790
awesome[26599]: [C]: at 0x7f9d6704dba0
awesome[26599]: [C]: in function '__newindex'
awesome[26599]: /usr/share/awesome/lib/wibox/widget/textbox.lua:63: in function 'set_markup'
awesome[26599]: /usr/share/awesome/lib/awful/widget/textclock.lua:26: in function </usr/share/awesome/lib/awful/widget/
awesome[26599]: [C]: in function 'emit_signal'
awesome[26599]: /usr/share/awesome/lib/awful/widget/textclock.lua:28: in function 'textclock'
awesome[26599]: /home/joerg/.config/awesome/rc.lua:375: in main chunk
awesome[26599]: error: bad argument #4 to '?' (number expected, got nil)
awesome[26599]: W: awesome: luaA_dofunction:78: error while running function
awesome[26599]: stack traceback:
awesome[26599]: [C]: in function '__newindex'
awesome[26599]: /usr/share/awesome/lib/wibox/widget/textbox.lua:63: in function 'set_markup'
Jun 15 17:47:05 turing-machine awesome[26599]: /usr/share/awesome/lib/awful/widget/textclock.lua:26: in function </usr/share/awesome/lib/awful/widget/
awesome[26599]: [C]: in function 'emit_signal'
awesome[26599]: /usr/share/awesome/lib/awful/widget/textclock.lua:28: in function 'textclock'
awesome[26599]: /home/joerg/.config/awesome/rc.lua:375: in main chunk
awesome[26599]: error: bad argument #4 to '?' (number expected, got nil)

EDIT: Here is the trace of awesome compiled against Lua 5.2:

awesome[4939]: W: awesome: luaA_dofunction:78: error while running function
awesome[4939]: stack traceback:
awesome[4939]: [C]: in function 'element'
awesome[4939]: /usr/share/lua/5.2/lgi/component.lua:287: in function </usr/share/lua/5.2/lgi/component.lua:272>
awesome[4939]: (...tail calls...)
awesome[4939]: [C]: in function '__newindex'
awesome[4939]: /usr/share/awesome/lib/wibox/widget/textbox.lua:63: in function 'set_markup'
awesome[4939]: /usr/share/awesome/lib/naughty.lua:397: in function 'notify'
awesome[4939]: /home/joerg/.config/awesome/rc.lua:60: in function </home/joerg/.config/awesome/rc.lua:55>
awesome[4939]: [C]: in function 'lognotify'
awesome[4939]: /home/joerg/.config/awesome/rc.lua:352: in main chunk
awesome[4939]: error: /usr/share/lua/5.2/lgi/component.lua:287: bad argument #4 to 'element' (number expected, got nil)
awesome[4939]: error while running function
awesome[4939]: stack traceback:
awesome[4939]: /home/joerg/.config/awesome/rc.lua:352: in main chunk
awesome[4939]: error: /home/joerg/.config/awesome/rc.lua:352: attempt to call global 'lognotify' (a nil value)
awesome[4939]: error while running function
awesome[4939]: stack traceback:
awesome[4939]: [C]: in function 'element'
awesome[4939]: /usr/share/lua/5.2/lgi/component.lua:287: in function </usr/share/lua/5.2/lgi/component.lua:272>

Mic92 commented Jun 15, 2013

How do other language bindings solve this problem?
EDIT:

pygobject caches the retrieved member too, so it should be possible:

https://github.com/GNOME/pygobject/blob/master/gi/module.py#L198

pavouk (Owner) commented Oct 10, 2014

I think some pretty good performance improvements in this area went into the 0.8.0 release. Has anyone tested whether the performance issue is still there, or can we consider this fixed?

Elv13 commented Oct 10, 2014

Well, it is much, much better indeed (I have been using the commit since the day you pushed it last year). There is probably still room for improvement; profiling still shows LGI as the main bottleneck, but I guess that has to be expected to a certain point.

@pavouk Sorry for going off-topic, but can you please take a look at #76? I am working on a D-Bus related project and would really, really like to have your opinion on how to fix it.

psychon (Collaborator) commented Mar 23, 2017

No activity for 2.5 years and the last comment sounds like "good enough". Closing.

psychon closed this Mar 23, 2017
