
ApplicationMessenger separate queue and target locking #15356

Closed
wants to merge 1 commit into from

Conversation

DaveTBlake
Member

@DaveTBlake DaveTBlake commented Jan 28, 2019

Optimize locking in ApplicationMessenger by using a separate lock for the queue from the one protecting the targets.

Edit: "optimise" was a bad choice of word; it implied far more than I intended.
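[Editor's note] A minimal sketch of the two-lock idea this PR describes, assuming the shape of Kodi's CApplicationMessenger but using illustrative names (`Messenger`, `PostMessage`, `ProcessMessages` are not the real API): one mutex guards the message queue, a second guards the target list, so posting a message never contends with registering or dispatching to targets.

```cpp
#include <cassert>
#include <functional>
#include <mutex>
#include <queue>
#include <string>
#include <vector>

// Illustrative two-lock messenger: m_queueLock protects only the pending
// message queue; m_targetLock protects only the list of receivers.
class Messenger {
public:
  using Target = std::function<void(const std::string&)>;

  void RegisterReceiver(Target t) {
    std::lock_guard<std::mutex> lock(m_targetLock);
    m_targets.push_back(std::move(t));
  }

  void PostMessage(std::string msg) {
    // Only the queue lock is taken, so posting never waits on dispatch.
    std::lock_guard<std::mutex> lock(m_queueLock);
    m_queue.push(std::move(msg));
  }

  // Called from the processing thread: drain the queue, then deliver.
  // Returns the number of deliveries made (messages x targets).
  int ProcessMessages() {
    std::queue<std::string> pending;
    {
      std::lock_guard<std::mutex> lock(m_queueLock);
      std::swap(pending, m_queue);  // take the queue; posters are free again
    }
    int delivered = 0;
    std::lock_guard<std::mutex> lock(m_targetLock);  // targets stay stable
    while (!pending.empty()) {
      for (const auto& t : m_targets) {
        t(pending.front());
        ++delivered;
      }
      pending.pop();
    }
    return delivered;
  }

private:
  std::mutex m_queueLock;
  std::mutex m_targetLock;
  std::queue<std::string> m_queue;
  std::vector<Target> m_targets;
};
```

With this split, a thread calling `PostMessage` while dispatch is in progress blocks only for the brief queue swap, not for the whole delivery loop.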

@DaveTBlake DaveTBlake added Type: Fix non-breaking change which fixes an issue v18 Leia labels Jan 28, 2019
@yol
Member

yol commented Jan 29, 2019 via email

@DaveTBlake DaveTBlake added this to the Leia 18.1-rc1 milestone Feb 4, 2019
@MartijnKaijser
Member

Should this be merged?

@DaveTBlake
Member Author

It has been fine in a test build, so I would say yes, merge.

@DaveTBlake
Member Author

This has also been in Millhouse and tested separately. The gains may be small, but good to go into v18.1

}
if (target != nullptr)
Contributor


You made the same mistake again. A copy is worth nothing. At the time you use the pointer, the target might have disappeared already.
(I know that there is no Unregister atm, but this needs fixing.)

I bet you can't show any measurable improvement of this change.
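[Editor's note] The hazard raised above is that a copied raw target pointer can dangle if the target is destroyed before dispatch reaches it. One common remedy, sketched here as an illustration and not what this PR or Kodi's code actually does, is to hold targets as `std::weak_ptr` and promote each to a `shared_ptr` for the duration of the call (all names below are hypothetical):

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Illustrative target interface; Kodi's real interface differs.
struct IMessageTarget {
  virtual ~IMessageTarget() = default;
  virtual void OnMessage(const std::string& msg) = 0;
};

class TargetList {
public:
  void Register(const std::shared_ptr<IMessageTarget>& t) {
    m_targets.push_back(t);  // store only a weak reference
  }

  // Returns the number of targets that were still alive and received msg.
  int Dispatch(const std::string& msg) {
    int delivered = 0;
    for (const auto& weak : m_targets) {
      if (auto strong = weak.lock()) {  // target kept alive for this call
        strong->OnMessage(msg);
        ++delivered;
      }
      // Expired targets are simply skipped; dereferencing a copied raw
      // pointer here would have been a use-after-free.
    }
    return delivered;
  }

private:
  std::vector<std::weak_ptr<IMessageTarget>> m_targets;
};
```

The `weak_ptr::lock()` promotion is what makes the copied handle safe: either the target is pinned alive for the call, or the copy observably fails instead of dangling.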

Member Author


@FernetMenta the copy is not new, just the way the lock is applied. Other guys suggested a loop change and that seemed reasonable.

Member Author


If the lock (used when registering etc.) is not sufficient to protect the list of targets, then what is?
My change is simply to use a separate lock for adding events to the queue; there is no reason for sending messages to prevent other messages from being queued.

Contributor


Dispatching messages here is done by the main/render thread. Nothing must block or hijack this thread for processing-intensive tasks. Hence the benefit of having separate locks is nil and I bet that you can't prove the opposite. Even if you could measure any improvement, it would just show that some other code blocks the main thread.
In other words, you code for something that must not happen. In addition you don't fix the obvious error, that is, leaving the lock for dispatching.

Member Author

@DaveTBlake DaveTBlake Feb 6, 2019


Hence the benefit of having separate locks is nil and I bet that you can't prove the opposite.

On a very slow system I did see improvements. Does that mean there is some hidden issue elsewhere? I don't know.

In other words, you code for something that must not happen.

Does that mean it does not happen? Can you prove it?

In addition you don't fix the obvious error, that is, leaving the lock for dispatching.

You blame me for not fixing enough? Thanks, but when did I become solely responsible? Please either be constructive or butt out.

Member Author


@FernetMenta AFAIK I have not added errors (check the original code), but if more can be done then do share what.

Member


It must not happen <- if it happens, fix the root cause and don't introduce more complexity via an additional lock.
If something can be done without the lock, please keep using SingleExit.

While your communication is not optimal - the code change is, in my eyes, not an optimization, because "optimization of locking" does not mean adding more fine-grained locking, but REMOVING unneeded locking ...
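[Editor's note] The "SingleExit" mentioned above refers to Kodi's CSingleExit, a guard that temporarily releases a held lock for a scope (the opposite of a lock guard). A minimal analogue with `std::recursive_mutex`, purely as a sketch; the class and function names here are illustrative, not Kodi's actual code:

```cpp
#include <cassert>
#include <mutex>
#include <thread>

// RAII guard that releases an already-held recursive mutex on construction
// and re-acquires it on destruction, mirroring the CSingleExit idea.
class SingleExit {
public:
  explicit SingleExit(std::recursive_mutex& m) : m_mutex(m) { m_mutex.unlock(); }
  ~SingleExit() { m_mutex.lock(); }  // re-acquire when the scope ends
  SingleExit(const SingleExit&) = delete;
  SingleExit& operator=(const SingleExit&) = delete;

private:
  std::recursive_mutex& m_mutex;
};

// Demo: returns true if another thread could take the mutex while the
// SingleExit guard was active (i.e. the lock really was released).
bool DemoExit() {
  std::recursive_mutex m;
  m.lock();
  bool acquired = false;
  {
    SingleExit exit(m);  // m released for this scope
    std::thread t([&] {
      if (m.try_lock()) {  // succeeds only because SingleExit released m
        acquired = true;
        m.unlock();
      }
    });
    t.join();
  }  // SingleExit destructor: m re-acquired here
  m.unlock();
  return acquired;
}
```

The point of the idiom in this discussion: instead of introducing an extra mutex, a scope that does not need the existing lock can drop it for just that scope and take it back afterwards.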

Member


Clarify: "your" <- yours and fernet's communication.

@MartijnKaijser MartijnKaijser removed this from the Leia 18.1-rc1 milestone Feb 10, 2019
@Rechi Rechi removed the v18 Leia label Apr 16, 2019
@DaveTBlake DaveTBlake added Type: Improvement non-breaking change which improves existing functionality v19 Matrix WIP PR that is still being worked on and removed Type: Fix non-breaking change which fixes an issue labels Jun 11, 2019
@DaveTBlake
Member Author

This has been in the Millhouse builds since Jan without causing any problems. Holding it back from v18 RC1 may have been overly cautious after all, but I respect the philosophy.

My choice of title was bad - "optimise" was not intended literally, and I have no proof this performs better than before - so I have rephrased. Lack of performance evidence notwithstanding, I still think that it is better design to use a separate lock for the queue from the one protecting targets.

In addition you don't fix the obvious error, that is, leaving the lock for dispatching.

I don't understand what this is referring to, and I accept I don't know enough about these aspects of Kodi to dare to meddle, but it makes me think it is worth leaving this open as WIP to see if greater minds than mine have something to add.

@jimfcarroll perhaps you can take a look on the back of your threading changes?

@DaveTBlake DaveTBlake changed the title Optimize locking in ApplicationMessenger ApplicationMessenger separate queue and target locking Jun 11, 2019
@jenkins4kodi jenkins4kodi added the Rebase needed PR that does not apply/merge cleanly to current base branch label Jun 21, 2019
@jenkins4kodi
Contributor

@DaveTBlake this needs a rebase

@phunkyfish
Contributor

@DaveTBlake, do you mind if I apply the backburner label and close? Or would you rather keep it open?

@phunkyfish phunkyfish added the PR Cleanup: Ask Author PR appears to still be relevant and mergeable. Comment to author to check on status. label Mar 8, 2020
@DaveTBlake DaveTBlake closed this Mar 9, 2020
@DaveTBlake DaveTBlake deleted the AppMsgOptimise branch March 9, 2020 07:25
@DaveTBlake DaveTBlake removed PR Cleanup: Ask Author PR appears to still be relevant and mergeable. Comment to author to check on status. Rebase needed PR that does not apply/merge cleanly to current base branch WIP PR that is still being worked on labels Mar 9, 2020
@Rechi Rechi removed the v19 Matrix label Mar 9, 2020
@phunkyfish phunkyfish added the PR Cleanup: Author Closed PR closed on the authors request. label Mar 9, 2020