Polling and action interaction #117

Closed · adamghill opened this issue Jan 20, 2021 · 9 comments
Labels: bug Something isn't working
@adamghill (Owner)

I'm having some issues which I'm not sure are covered by this issue, or if they're separate issues. I'm testing a combination of polling + manually modifying the state, and have updated the polling example in this branch:

```html
<div unicorn:poll-1000="get_date">
    current_time: {{ current_time|date:"s" }}

    <button u:click="slow_update()">Increase counter: {{ counter }}</button>
</div>
```

```python
import time

from django.utils.timezone import now
from django_unicorn.components import UnicornView


class PollingView(UnicornView):
    polling_disabled = False
    counter = 0
    current_time = now()

    def slow_update(self):
        self.counter += 1
        time.sleep(0.8)  # Simulate a slow request

    def get_date(self):
        self.current_time = now()
```

I see that if I have slow requests (forced with a sleep in this example) and the polling starts just after the button click, the counter is not increased; I guess the counter is overwritten by the state from the polling.
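To make the race concrete, here is a toy simulation in plain Python threads. This is not django-unicorn's actual wire protocol, just the general shape of the bug: each request serializes the component state at send time, and responses are applied in arrival order, so a poll carrying a stale snapshot can land after the slow action's response and clobber the increment.

```python
import threading
import time

state = {"counter": 0}  # client-side component state
lock = threading.Lock()

def server(snapshot, action, delay):
    # Simulate the server: rebuild the component from the request's
    # snapshot, run the action, and echo back the full new state.
    time.sleep(delay)
    new_state = dict(snapshot)
    if action == "slow_update":
        new_state["counter"] += 1
    return new_state

def request(action, delay):
    snapshot = dict(state)  # state is serialized at send time
    response = server(snapshot, action, delay)
    with lock:
        state.update(response)  # responses apply in arrival order

# Click at t=0 (slow); a poll fires at t=0.3 while the click is in flight,
# so its snapshot still has counter=0.
click = threading.Thread(target=request, args=("slow_update", 1.0))
click.start()
time.sleep(0.3)
poll = threading.Thread(target=request, args=("get_date", 1.0))
poll.start()
click.join()
poll.join()

print(state["counter"])  # 0: the poll's stale snapshot arrived last and won
```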

However, I'm not sure if the polling is an "action" so it fits here?

Originally posted by @frbor in #111 (comment)

@adamghill (Owner, Author)

@frbor Thanks for the branch. I'll grab it and use it as a test case for some prototyping.

My first thought is maybe there is some reconciliation I can do for responses, but I only have the current state and the new state (not the whole history), so I'm not quite sure how to deal with a slow response. If I stored the component's data/render history in a cache I could tie a request/response to it and maybe merge the data that comes back or somehow handle it?
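One possible shape for that reconciliation, purely as a sketch (`changed_fields` and `apply_response` are hypothetical helpers, not django-unicorn APIs): cache the snapshot each request was built from, and on response merge back only the fields the server actually changed, so an echoed stale field can't overwrite newer client state.

```python
def changed_fields(base, response):
    # Only the fields the server actually changed for this request.
    return {k: v for k, v in response.items() if base.get(k) != v}

def apply_response(state, base_snapshot, response):
    # Merge only the changed fields, so a stale echo of an untouched
    # field can't clobber newer client-side state.
    state.update(changed_fields(base_snapshot, response))
    return state

# The click response already bumped counter to 1 on the client...
state = {"counter": 1, "current_time": "10:00:00"}
# ...but a poll that was sent earlier (with counter=0) now returns.
poll_base = {"counter": 0, "current_time": "10:00:00"}
poll_resp = {"counter": 0, "current_time": "10:00:01"}

apply_response(state, poll_base, poll_resp)
print(state)  # {'counter': 1, 'current_time': '10:00:01'}
```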

I'm also open to ideas about how this could possibly work or how you would expect this to work. :)

@adamghill adamghill self-assigned this Jan 20, 2021

frbor commented Jan 20, 2021

I've tried to wrap my head around how I want this to work, but I don't have a really good solution.

One thought was that it could be possible to do only one request at a time and hold the next request until the first response is received, but this could make the UI sluggish, and there are other times you want to do parallel requests, so at least it should be configurable in some way.
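The one-request-at-a-time idea could look roughly like this (a generic Python sketch; in django-unicorn the actual queueing would happen client-side in JavaScript): a single worker drains a queue, so each request is sent only after the previous response has been handled.

```python
import queue
import threading
import time

class SerialRequestQueue:
    """Send at most one request at a time; later requests wait in line."""

    def __init__(self):
        self._queue = queue.Queue()
        worker = threading.Thread(target=self._drain, daemon=True)
        worker.start()

    def _drain(self):
        while True:
            send = self._queue.get()
            try:
                send()  # blocks until the response has been handled
            finally:
                self._queue.task_done()

    def submit(self, send):
        self._queue.put(send)

    def wait(self):
        self._queue.join()

# Requests complete strictly in submission order, even when the first is slow.
order = []
q = SerialRequestQueue()
q.submit(lambda: (time.sleep(0.2), order.append("slow_update")))
q.submit(lambda: order.append("get_date"))
q.wait()
print(order)  # ['slow_update', 'get_date']
```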

It can also be that this can be partly mitigated by #113. From some of my testing, some of the slow requests come from large request bodies (and responses), so limiting what is sent and received could really help.

@adamghill (Owner, Author)

That's good to know about #113 being useful for mitigating this issue. I can prioritize that as the next thing to tackle.

I was prototyping how to queue up parallel requests in #120. Not sure how I feel about it, but might be an ok approach.

Two other questions:

  • I assume this isn't possible, but could you split up the component that has the polling and the long-running action (or nest one within the other)? Or do they need to be in the same component?
  • When you have seen this, is it only on the local dev server or also on a production server? I'm curious how things work with gunicorn or something similar that has multiple processes/threads to handle requests. I spent a little time converting the view function to async and switching over to ASGI; I could create a draft PR for that as well. Not sure that would actually help now that I'm thinking about it, but I figured I would bring it up as another avenue I could try.


frbor commented Jan 21, 2021

  1. I have not looked into nested components here, but currently the logic is coupled too tightly, so I'd need to make some changes to see whether that is doable
  2. The problem was seen on a production server using gunicorn (although still in "testing"); the slow request/response was the result of a large dataset and some latency. I replicated the problem locally as well, but there the issue was smaller because of lower latency

@adamghill (Owner, Author)

The current implementation of nested components is not optimized at all: the payload coming back includes both the child and the parent, so it will definitely be bigger. It's on my list of things to optimize, but getting something released that worked seemed like a good first step. All that to say: from your other comments, nested components probably won't help in this case. Splitting up the components might, though.

Thanks for all of the useful information. It sounds like attacking #113 will be the best approach so that's what I'll try to get working for the next release with a possible experimental flag to enable queueing parallel requests if I can figure out a reasonable way to approach it.

@adamghill (Owner, Author)

@frbor Just checking in to see if you were able to experiment with partial updates and if it helped with the latency? I think it still might be a good idea to implement queuing up parallel requests, but I wanted to make sure before jumping back into it.

@adamghill adamghill added the bug Something isn't working label Jan 29, 2021

frbor commented Jan 30, 2021

I ended up refactoring the page to use a dynamic polling interval (with help from PollUpdate) and reducing the size of the page with the spaceless template tag to remove whitespace. Combined, these changes make the problem barely noticeable; there might still be race conditions, but I have yet to observe them during testing.
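A dynamic interval like this could be computed along these lines (a hypothetical sketch of the backoff policy only; `next_poll_interval` and its parameters are invented here, and the value would be fed to whatever mechanism, such as django-unicorn's PollUpdate, actually adjusts the poll): poll at a base rate when responses are fast, back off proportionally to the cost of the last request, and cap the slowdown.

```python
def next_poll_interval(last_duration_ms, base_ms=1000, max_ms=10000):
    # Poll no faster than the base rate, back off to roughly 4x the
    # cost of the last request, and never go slower than the cap.
    return min(max(base_ms, 4 * last_duration_ms), max_ms)

print(next_poll_interval(100))   # 1000: fast responses keep the base rate
print(next_poll_interval(1500))  # 6000: slow responses back off
print(next_poll_interval(5000))  # 10000: capped
```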

I agree that queuing up parallel requests might be a nice addition, but for my specific use this is not a show stopper at the moment.

@adamghill (Owner, Author)

Good to hear that PollUpdate was useful! I'm intrigued that spaceless helped reduce payload size; I had assumed that if payloads were gzipped, the amount of whitespace wouldn't matter.
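For what it's worth, both intuitions are partly right: repetitive whitespace compresses extremely well under gzip, so most of the raw saving disappears on the wire, but a small delta usually remains, and the uncompressed HTML that the browser must parse and diff is still smaller with spaceless. A quick sketch of the effect (exact byte counts will vary with content):

```python
import gzip

# A repetitive chunk of indented markup, roughly like a rendered component.
html = "<div>\n    <span>hello</span>\n</div>\n" * 500
stripped = html.replace("\n", "").replace("    ", "")

raw_saving = len(html) - len(stripped)
gz_saving = (len(gzip.compress(html.encode()))
             - len(gzip.compress(stripped.encode())))

# Most of the raw whitespace saving vanishes after gzip, but the
# uncompressed payload the browser handles is still smaller.
print(raw_saving, gz_saving)
```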

I did find a race condition while adding experimental support for queuing up requests in 0.18.0, which has now been fixed. If you do run into problems while testing, I would love feedback on whether enabling serial requests helps.

Closing this issue for now, but feel free to create a new issue if you run into other bugs.

@adamghill (Owner, Author)

A potential solution I could implement would be a setting to use https://github.com/xfenix/django-hmin to remove all whitespace out of the HTML sent over the wire.
