
Allow multiple instances of IPCMessageSubscriber in one process #51963

Merged
8 commits, merged Mar 27, 2019

Conversation

@DmitryKuzmenko (Contributor) commented Mar 4, 2019

What does this PR do?

Updates IPCMessageSubscriber so that multiple instances can be created and read simultaneously within one process. To do this I'm introducing an IPCMessageSubscriberService that takes over the role of the old IPCMessageSubscriber singleton. It is instantiated once, and it ensures that socket reading is started once any subscriber wants to read data and stopped when no subscribers remain.
IPCMessageSubscriber subscribes to the data read by the service and reads it from an async queue instead of directly from the socket.
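
The description above is essentially a fan-out pattern: one shared read loop, one private queue per subscriber. Below is a minimal sketch of that pattern, not Salt's actual implementation (Salt's `ipc.py` is built on Tornado); the names `SubscriberService` and `Subscriber` are hypothetical, and plain asyncio is used for brevity:

```python
# Hypothetical sketch of the pattern described in this PR, not Salt's code.
import asyncio


class SubscriberService:
    """Owns the single socket read loop and fans each message out to
    every registered subscriber's private queue."""

    def __init__(self, reader):
        self._reader = reader      # e.g. an asyncio.StreamReader
        self._queues = set()       # one queue per live subscriber
        self._read_task = None

    def subscribe(self):
        queue = asyncio.Queue()
        self._queues.add(queue)
        # Start the shared read loop when the first subscriber appears.
        if self._read_task is None:
            self._read_task = asyncio.ensure_future(self._read_loop())
        return queue

    def unsubscribe(self, queue):
        self._queues.discard(queue)
        # Stop reading the socket once no subscribers remain.
        if not self._queues and self._read_task is not None:
            self._read_task.cancel()
            self._read_task = None

    async def _read_loop(self):
        while True:
            msg = await self._reader.readline()  # one reader for all
            for queue in self._queues:
                queue.put_nowait(msg)


class Subscriber:
    """Each instance reads from its own queue, never from the socket
    directly, so any number of instances can read in parallel."""

    def __init__(self, service):
        self._service = service
        self._queue = service.subscribe()

    async def read(self):
        return await self._queue.get()

    def close(self):
        self._service.unsubscribe(self._queue)
```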

What issues does this PR fix or reference?

Fixes: #49147

Previous Behavior

A second parallel attempt to read data with IPCMessageSubscriber raises a socket error while another read is in progress.

New Behavior

IPCMessageSubscriber can be used anywhere without having to coordinate with other parts of the client code.
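
Continuing the hypothetical sketch above, two subscribers can now read in parallel without racing on the underlying socket:

```python
# Hypothetical usage of the sketch above: concurrent reads no longer
# raise a socket error, since each subscriber drains its own queue.
async def consume(name, service):
    sub = Subscriber(service)
    try:
        print(name, await sub.read())
    finally:
        sub.close()


async def main(reader):
    service = SubscriberService(reader)
    await asyncio.gather(consume("a", service), consume("b", service))
```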

Tests written?

Yes, regression.

Commits signed with GPG?

Yes

@twangboy (Contributor) left a comment:

Just a documentation change... if you wanna
@DmitryKuzmenko (Contributor, Author) commented:

@dwoz, @s0undt3ch are you ok with this now?
