Full in memory channel layer #863
Conversation
I will happily take this once it has unit tests (I'll write the documentation part) - you may want to reuse the …

I ported over the channels_redis tests to the in-memory layer.
Having looked over the code more, I'm not sure you need all the complexity that the Redis client has with receiving loops - you should be able to use asyncio Queue objects for most of the operations here and await on them natively (and there's no need to pack and unpack around things with ! in them, as that's purely a network-efficiency thing).

Do you think you'd be able to reduce it down to have less of the receive_loop stuff? The version you're replacing was a lot simpler.
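The suggestion above can be sketched as follows (hypothetical class and method names, not the actual channels code): each channel name maps straight to an asyncio.Queue, and receive awaits on it natively, with no receive loop:

```python
import asyncio

class InMemoryLayerSketch:
    """Hypothetical sketch: one asyncio.Queue per raw channel name."""

    def __init__(self):
        self.channels = {}

    async def send(self, channel, message):
        # Create the queue on first use, then put the message on it.
        queue = self.channels.setdefault(channel, asyncio.Queue())
        await queue.put(message)

    async def receive(self, channel):
        # Await natively on the queue - no background receive loop needed.
        queue = self.channels.setdefault(channel, asyncio.Queue())
        return await queue.get()

async def demo():
    layer = InMemoryLayerSketch()
    await layer.send("chat", {"type": "hello"})
    return await layer.receive("chat")

print(asyncio.run(demo()))  # {'type': 'hello'}
```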
Ah yes, I wasn't aware of the asyncio queue. Do I understand you correctly that I should simply use the raw channel name as the queue key?

You can leave the …

Alright, can you look at it now?
channels/layers.py
Outdated
    In-memory channel layer implementation for testing purposes.
    """
    '''
    Our own in memory layer
Did you mean to change the docstring?
No, that was left in there from testing. I changed it to "In-memory channel layer implementation", since it can be used for more than testing now.
    queue = self.channels.setdefault(channel, asyncio.Queue())

    # Do a plain direct receive
    _, message = await queue.get()
I would add something here to clean up the queue entry in the self.channels dict if it's now empty.
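That cleanup might look like this (a hedged sketch with a free function and a plain dict standing in for self.channels, not the merged implementation):

```python
import asyncio

async def receive(channels, channel):
    """Sketch: receive one message, then drop the dict entry if drained.

    `channels` maps channel names to asyncio.Queue objects (an assumption
    mirroring the PR discussion, not the exact channels source).
    """
    queue = channels.setdefault(channel, asyncio.Queue())
    message = await queue.get()
    # Clean up the queue entry if it's now empty, so idle channels
    # don't accumulate in the dict forever.
    if queue.empty():
        del channels[channel]
    return message

async def demo():
    channels = {}
    await channels.setdefault("room", asyncio.Queue()).put("hi")
    msg = await receive(channels, "room")
    return msg, channels

print(asyncio.run(demo()))  # ('hi', {})
```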
channels/layers.py
Outdated
    """
    '''
    In-memory channel layer implementation
    '''
Please change these back to """ - trying to keep everything consistent!
channels/layers.py
Outdated
    self.channels = {}
    self.groups = {}
    self.thread_lock = threading.Lock()
I'm not sure a threading.Lock makes sense here - you shouldn't need it, since you're writing async code. Did you find a reason to add it?
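For context, a minimal illustration (not from the PR): on a single event loop, coroutines only switch at await points, so plain dict updates need no threading.Lock; if a critical section ever does await, asyncio.Lock is the async-native tool:

```python
import asyncio

groups = {}  # stand-in for the layer's group registry (an assumption)

async def group_add(lock, group, channel):
    # asyncio.Lock is the async-native choice; simple dict operations
    # between awaits don't actually need one, since coroutines only
    # yield control at await points on a single event loop.
    async with lock:
        groups.setdefault(group, set()).add(channel)

async def demo():
    lock = asyncio.Lock()  # created inside the running loop
    await asyncio.gather(
        group_add(lock, "chat", "channel.a"),
        group_add(lock, "chat", "channel.b"),
    )
    return sorted(groups["chat"])

print(asyncio.run(demo()))  # ['channel.a', 'channel.b']
```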
channels/layers.py
Outdated
    self._remove_from_groups(channel)
    # Is the channel now empty and needs deleting?
    if not queue:
        del self.channels[channel]
This method should also clean expired group memberships separately.
What do you mean by that exactly?
Should I remove the channel from all groups if not queue?
You need to go through the group memberships and expire the entries whose join time is older than group_expiry (there should be a test for this too, but I'll add that later). The Redis one does it here: https://github.com/django/channels_redis/blob/master/channels_redis/core.py#L316
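A sketch of that expiry pass (assumed structure: groups maps group name to {channel: join_time}; the 86400-second default is an assumption mirroring channels' group_expiry setting, and the function name is hypothetical):

```python
import time

GROUP_EXPIRY = 86400  # seconds; assumed default, matching group_expiry

def clean_expired_groups(groups, group_expiry=GROUP_EXPIRY, now=None):
    """Sketch: drop memberships whose join time is older than group_expiry."""
    now = time.time() if now is None else now
    # Iterate over snapshots (list(...)) so deleting entries is safe.
    for group in list(groups):
        for channel in list(groups[group]):
            # If the join time is older than group_expiry, end the membership.
            if groups[group][channel] < now - group_expiry:
                del groups[group][channel]
        if not groups[group]:
            del groups[group]  # drop groups that emptied out

# Example: one stale membership (join time 0), one fresh one.
groups = {"chat": {"old.chan": 0, "new.chan": time.time()}}
clean_expired_groups(groups)
print(list(groups["chat"]))  # ['new.chan']
```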
Looking good - just a few small cleanups, but this is getting close to merge! Thanks for your work so far.
Great, I will get on these cleanups later today.

It's all coroutines. Some requests will end up using threads if they're based on …

Thanks for the explanation.

No, because they're async requests; as long as they …

OK, I think I got it now.
channels/layers.py
Outdated
@@ -267,7 +266,8 @@ def _clean_expired(self):
         for group in self.groups:
             for channel in self.groups.get(group, set()):
                 # If join time is older than group_expiry end the group membership
-                if self.groups[group][channel] and int(self.groups[group][channel]) < (int(time.time()) - self.group_expiry):
+                if (self.groups[group][channel] and
+                        int(self.groups[group][channel]) < (int(time.time()) - self.group_expiry)):
                     del self.groups[group][channel]
Maybe a shortcut local variable would be better.
I added a timeout variable.
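The shortcut might look like this (a hedged sketch, not the committed diff): compute the cutoff once into a timeout local and compare each join time against it:

```python
import time

def clean_expired(groups, group_expiry):
    """Sketch of _clean_expired with a shortcut 'timeout' local variable."""
    timeout = int(time.time()) - group_expiry  # compute the cutoff once
    for group in list(groups):
        for channel in list(groups[group]):
            joined = groups[group][channel]
            # If join time is older than the cutoff, end the group membership.
            if joined and int(joined) < timeout:
                del groups[group][channel]

groups = {"room": {"stale": 1, "fresh": time.time()}}
clean_expired(groups, group_expiry=86400)
print(list(groups["room"]))  # ['fresh']
```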
Looks good! I am ready to merge this if you are, …
Lines 268 to 272 in d6643d9
In Python 3, looping over a dict/set yields an iterator, and _clean_expired() raises an exception, since changing a dict/set is not allowed during iteration. Error below:

I just pushed up a commit to fix that.
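The failure mode can be reproduced in isolation (a minimal illustration, unrelated to the channels source):

```python
# Mutating a dict while iterating it raises RuntimeError in Python 3.
memberships = {"a": 1, "b": 2}
try:
    for key in memberships:   # iterator over the live dict
        del memberships[key]  # mutation during iteration
except RuntimeError as exc:
    print(exc)  # dictionary changed size during iteration

# The usual fix: iterate over a snapshot of the keys.
memberships = {"a": 1, "b": 2}
for key in list(memberships):
    del memberships[key]
print(memberships)  # {}
```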
I would like to point out some additional issues. Lines 261 to 263 in b58c213

Previously (before asyncio), self.channels used a collections.deque, for which the following was a valid check for an empty queue:

However, the truth value of a queue.Queue or asyncio.Queue cannot be used to check whether the queue is empty: it is always True, so the channel is never deleted. The correct statement is as follows:
With the above corrected statement effectively purging channels from the self.channels dict, it also starts to delete non-expired (empty, but still active) channels. This breaks InMemoryChannelLayer again.
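The truthiness pitfall is easy to demonstrate; empty() and qsize() are the standard asyncio.Queue emptiness checks:

```python
import asyncio
from collections import deque

# A deque is falsy when empty, so `if not queue:` worked for the old layer.
old_queue = deque()
print(bool(old_queue))  # False

# An asyncio.Queue defines neither __bool__ nor __len__, so it is always
# truthy, even when empty...
new_queue = asyncio.Queue()
print(bool(new_queue))  # True

# ...so emptiness must be checked explicitly instead.
print(new_queue.empty())  # True
print(new_queue.qsize())  # 0
```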
Even with the above fixes, the InMemoryChannelLayer keeps leaking channel objects in self.channels as websocket connections come and go over time. The group_discard() method is called, which is correct, but channels with an empty queue keep hanging around. I suggest the following fixes to _group_expire(); please comment if I missed something:

Can you open a separate issue to track this? It's getting complex enough that it needs one. It's also worth noting that the in-memory channel layer is really only meant for testing, so fixes to its performance/longevity are going to be lower priority than some other bugs in the queue!

I am fully aware that the in-memory channel layer is for debug only; however, imho, for applications on a lightweight system such as a Raspberry Pi, I think it is a reasonable alternative.

Right, just saying that fixing this is going to come below the six or so other bugs I have to look at at the moment!
Hi,

I added the full in-memory layer back to Channels 2. We always used it in Channels 1 for local testing. Can you merge it into the code?

Best and thanks,
Sven