
Allow servers to send and receive messages directly #819

Merged
derekcollison merged 5 commits into master from sys on Nov 29, 2018

Conversation

derekcollison (Member)

Added ability for servers to send and receive messages as their own entity.
Added account scoped events for connection created and connection closed.

Signed-off-by: Derek Collison derek@nats.io

/cc @nats-io/core
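
For context, a minimal sketch of how a client might consume the new account-scoped connection events. This is illustrative only: the subject pattern `$SYS.ACCOUNT.*.>` and the payload handling are assumptions, not taken from this PR; the actual subjects are defined in server/events.go.

```go
package main

import (
	"fmt"
	"log"
	"os"
	"os/signal"

	"github.com/nats-io/nats.go"
)

func main() {
	// Connect as a user that is authorized to see system events.
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Hypothetical subject pattern for the account-scoped connection
	// created/closed events added by this PR.
	sub, err := nc.Subscribe("$SYS.ACCOUNT.*.>", func(m *nats.Msg) {
		fmt.Printf("event on %s: %s\n", m.Subject, string(m.Data))
	})
	if err != nil {
		log.Fatal(err)
	}
	defer sub.Unsubscribe()

	// Block until interrupted so the handler keeps running (demo only).
	sigCh := make(chan os.Signal, 1)
	signal.Notify(sigCh, os.Interrupt)
	<-sigCh
}
```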

@coveralls commented Nov 29, 2018

Coverage Status

Coverage increased (+0.02%) to 92.206% when pulling 10eb491 on sys into 8a662a5 on master.

@kozlovic (Member) left a comment


Concern about the absence of locking in sendInternalMsg, and about the last return value of noOutsideInterest.

server/errors.go (outdated; resolved)
server/client.go (outdated; resolved)
server/events.go (outdated)
acc := s.sys.account

// Prep internal structures needed to send message.
c.pa.subject = []byte(subj)
kozlovic (Member)


It looks like sendInternalMsg() can be called concurrently, yet here we use a single client object stored in the server struct. So in theory several go routines could be changing c.pa.subject/size/etc.?
Did you mean for this function to be protected by some lock?

derekcollison (Member, Author)


Let me look into it. c.pa is not set up to be shared, as you know, but you raise a good point.

kozlovic (Member)


Yes, but in this case we use the client (which is s.sys.client) from various go routines, each time we want to produce a sys event, so it will be shared. Even the rest (calling processMsgResults, etc.) should, I think, be protected with a mutex.

derekcollison (Member, Author)


I understand. I had to do the pa.(..) stuff at the end to make it work; I think I have a comment somewhere in there about coordination around the outbound sequence, etc. I could create a new mutex, or spin up a Go routine that does the actual sending. I think I prefer the second option, since we don't know everything that gets locked below us in processMsgResults.

kozlovic (Member)


I personally would go with a dedicated lock, say in the internal structure. Spinning up a go routine to process the results, etc. will cause out-of-order deliveries.
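
A minimal sketch of the dedicated-lock option being suggested here. The stand-in types and field names are illustrative, not the server's actual code; the real pubArg/client structs live in server/client.go.

```go
package main

import (
	"fmt"
	"sync"
)

// Stand-ins for the server's client/pubArg types, for illustration.
type pubArg struct {
	subject []byte
	size    int
}

type client struct{ pa pubArg }

// internal holds the shared system client; a dedicated mutex
// serializes every writer of c.pa, which is otherwise racy when
// sendInternalMsg is called from multiple go routines.
type internal struct {
	mu     sync.Mutex
	client *client
}

type Server struct{ sys *internal }

func (s *Server) sendInternalMsg(subj string, msg []byte) {
	s.sys.mu.Lock()
	defer s.sys.mu.Unlock()

	c := s.sys.client
	c.pa.subject = []byte(subj)
	c.pa.size = len(msg)
	// The real code would now route via processMsgResults while
	// still holding the lock.
	fmt.Printf("deliver %q (%d bytes)\n", c.pa.subject, c.pa.size)
}

func main() {
	s := &Server{sys: &internal{client: &client{}}}
	s.sendInternalMsg("$SYS.test", []byte("hello"))
}
```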

kozlovic (Member)


(Maybe you meant a single go routine.) Also, were you referring to the situation where, while handling one event, we could end up generating another kind of event in reaction to the first?

derekcollison (Member, Author)


Would just simulate readLoop like we have for normal clients.

derekcollison (Member, Author)


Yes, just a single Go routine.
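
The approach settled on above, sketched: producers enqueue onto a channel and a single go routine drains it, so only that routine ever touches the shared client state. The names mirror the pubMsg/sendq snippets quoted in this review but are otherwise illustrative.

```go
package main

import "fmt"

// pubMsg mirrors the struct quoted later in this review; the fields
// here are simplified for illustration.
type pubMsg struct {
	sub string
	msg []byte
}

type internal struct {
	sendq chan *pubMsg
}

// internalSendLoop is the single consumer: because only this go
// routine mutates the shared client and calls processMsgResults,
// no extra locking is needed and delivery order is preserved.
func (sys *internal) internalSendLoop(done chan struct{}) {
	for pm := range sys.sendq {
		fmt.Printf("send %s: %s\n", pm.sub, pm.msg)
	}
	close(done)
}

func main() {
	sys := &internal{sendq: make(chan *pubMsg, 128)}
	done := make(chan struct{})
	go sys.internalSendLoop(done)

	// Any go routine can enqueue; the loop serializes the sends.
	sys.sendq <- &pubMsg{sub: "$SYS.event", msg: []byte("connz")}
	close(sys.sendq)
	<-done
}
```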

server/events.go (outdated; resolved)
@kozlovic (Member) left a comment


An unlock is missing, and I have a comment about possibly being blocked, but LGTM otherwise.

server/events.go (resolved)
server/events.go (outdated)
if s.sys == nil {
return
}
s.sys.sendq <- &pubMsg{r, sub, msg}
kozlovic (Member)


Should we use a select? Are we OK if we lose those messages?
As long as we know for a fact that there won't be any locking causing the "sendLoop" to be blocked by the code calling sendInternalMsg() when the channel buffer is full, I guess we are OK; otherwise, use a combination of a buffered channel and a select, or move to a list.
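
For reference, a sketch of the two enqueue styles being weighed here; this is illustrative, not the code the PR merged.

```go
package main

import "fmt"

// Simplified pubMsg, as above.
type pubMsg struct {
	sub string
	msg []byte
}

func main() {
	sendq := make(chan *pubMsg, 128) // buffered, as in the PR

	// Blocking enqueue (what the PR does): safe only if the send
	// loop can never stall on a lock held by a go routine that is
	// itself blocked on this send.
	sendq <- &pubMsg{"$SYS.a", []byte("event")}

	// Non-blocking enqueue with select: never blocks the caller,
	// at the cost of dropping events when the buffer is full.
	select {
	case sendq <- &pubMsg{"$SYS.b", []byte("event")}:
	default:
		fmt.Println("sendq full, event dropped")
	}
}
```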

derekcollison (Member, Author)


It's a buffered channel now.

kozlovic (Member)


Yes, I saw it is buffered at 128, but say there is a rapid generation of events that fills the buffered channel, and then the send loop tries to acquire a lock that another routine is holding while that routine is itself blocked trying to send to the channel; that's what I meant. If we know for sure that this situation can never occur, then we are fine.

derekcollison (Member, Author)


We don't hold any locks when placing on the sendq or pulling off.

derekcollison (Member, Author)


I did make a change, though, to be more consistent.

@kozlovic (Member) left a comment


LGTM!

@derekcollison derekcollison merged commit 2a19de7 into master Nov 29, 2018