[WIP] updated disconnection logic and clean up unused noise #381
Conversation
Force-pushed 344f4ea to 810415e
app.js
Outdated
@@ -458,41 +428,31 @@ io.use(function (socket, next) {
io.sockets.on('connection', function (socket) {
    logger.info('openHAB-cloud: Incoming openHAB connection for uuid ' + socket.handshake.uuid);
    socket.join(socket.handshake.uuid);
    // Remove openHAB from offline array if needed
    delete offlineOpenhabs[socket.handshake.uuid];
    Openhab.findOne({
This could be replaced with `pre = findOneAndUpdate(.., returnOriginal=true)` as an atomic operation?
Nice find! Yes, I think we could in fact use this method, and as a bonus, save ourselves an extra mongo call as well. I eliminated another expensive write to an unused collection in this function, so combined with this change, both could have a real positive impact on performance when we restart a container and thousands of openHABs try to reconnect.
Done!
👍
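The pattern agreed on above, replacing a separate find-then-save with a single atomic `findOneAndUpdate` that returns the original document, can be sketched as follows. This is a minimal in-memory illustration of the semantics, not the actual openHAB-cloud code; in the real app this would be Mongoose's `Openhab.findOneAndUpdate()` against MongoDB.

```javascript
// Tiny fake collection mimicking findOneAndUpdate(filter, update,
// { returnOriginal: true }): apply the update in one step and return
// the document as it was *before* the update, so the caller can act
// on the previous state without a separate find + save round trip.
class FakeCollection {
  constructor(docs) {
    this.docs = docs;
  }
  findOneAndUpdate(filter, update) {
    const doc = this.docs.find(d =>
      Object.keys(filter).every(k => d[k] === filter[k]));
    if (!doc) return null;
    const original = { ...doc };   // snapshot the pre-update state
    Object.assign(doc, update.$set);
    return original;
  }
}

const openhabs = new FakeCollection([
  { uuid: 'abc-123', status: 'offline', serverAddress: null }
]);

// On connection: one call both records the new state and tells us
// what the previous state was.
const previous = openhabs.findOneAndUpdate(
  { uuid: 'abc-123' },
  { $set: { status: 'online', serverAddress: 'node-2' } }
);

console.log(previous.status);                // 'offline'
console.log(openhabs.docs[0].serverAddress); // 'node-2'
```

Because the find and the update happen as one operation on the database side, two servers handling a reconnect burst cannot interleave between the read and the write.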
app.js
Outdated
}
});

//actually would redis be better to store? How would we coordinate who send notification?
Indeed, is this notification sending a completely independent task? It could be triggered by regularly checking redis for instances to notify; basically a completely different process. Would `offlineOpenhabs` then be a redis-backed object?
So, that was a note I left to myself when running through the code; I did not mean to check it in :-)
If I were starting from scratch or doing a major refactor, then yes, I would use redis (or some queue/messaging-like backend) to persist this kind of state in a more distribution-friendly fashion. I again refrained from rewriting too much to keep the changes small, so if something does go wrong it's easier to debug, and quicker to get this out.
Agree, makes sense not to bundle it in here.
Force-pushed dd1b618 to a074496
I ended up removing a bunch of dead code that has never been used. I also put a size cap on our notification log, as that collection in mongo is uncapped and has grown to over 40 GB in production for no good reason.
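For context, MongoDB supports capping a collection at creation time so it behaves like a fixed-size ring buffer, with the oldest documents aged out automatically. A mongo shell sketch of the idea follows; the collection name and size here are illustrative, not the values used in this PR.

```javascript
// Illustrative only: create a capped collection limited to ~1 GB.
// Once full, MongoDB overwrites the oldest documents automatically,
// so the log can no longer grow without bound.
db.createCollection('notificationlogs', {
  capped: true,
  size: 1024 * 1024 * 1024  // maximum size in bytes
});
```

An existing uncapped collection can also be converted in place with the `convertToCapped` admin command, though that takes a lock and is usually done during a maintenance window.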
Force-pushed a074496 to 310bb9f
Signed-off-by: Dan Cunningham <dan@digitaldan.com>
Force-pushed 310bb9f to 6ae1dca
For the record, this potentially resolves a race condition which led the cloud to think a client was offline even though it was connected. See #134 (comment) for more info. Quoting from our 1-on-1 discussion:
This PR utilizes unique IDs for sessions, combined with Mongo atomic query/update commands, to resolve the possible race conditions.
@digitaldan how do we get this merged?
Yeah, I have been procrastinating a little, as I need to block off time to do a proper deployment to the general service once we merge. I also then need to remove a few unused (but very large) collections from mongo. I'll probably do that Sunday, and will probably post something to the forums later today about the upcoming maintenance.