I was thinking that the best way to implement waiting for new data would be to add a new HTTP command, for example FETCH. When you do a GET request with a start time interval parameter, you get all data from that interval up to the current time; if there is no data, you get an empty list. But by doing a FETCH to the same URL, django-datastream would wait until data is available and then return it. After it returns the data, you could open another connection with an updated start time interval parameter.
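A minimal sketch of the client side of that loop. The `fake_request` helper, the `(timestamp, value)` datapoint shape, and the parameter names are illustrative assumptions, not the real django-datastream API; a real client would issue HTTP requests instead.

```python
# Stand-in datastream contents; a real FETCH would block server-side
# until new datapoints exist, here we just filter a fixed buffer.
DATAPOINTS = [(1.0, "a"), (2.5, "b")]

def fake_request(method, start):
    """Hypothetical transport: return datapoints with timestamp >= start."""
    return [point for point in DATAPOINTS if point[0] >= start]

def poll_loop(start, max_rounds=3):
    """Client loop: GET the backlog once, then repeatedly FETCH,
    advancing the start parameter just past the newest datapoint seen."""
    received = []
    received.extend(fake_request("GET", start))   # initial backlog
    for _ in range(max_rounds):
        if received:
            start = received[-1][0] + 1e-9        # just past last timestamp
        data = fake_request("FETCH", start)        # would block until new data
        received.extend(data)
        if not data:
            break
    return received
```

With the fake buffer above, `poll_loop(0.0)` collects both datapoints and then stops once a FETCH round comes back empty.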
Of course, it is not smart to block a Django process while both client and server are waiting for data, so another approach should be used. I see two options:
implement a custom Tornado-based web server which handles FETCH requests in an asynchronous manner, similar to how django-pushserver implements it; we could then use this server instead of a FastCGI process to interact with Django
instead of reimplementing Tornado integration, use django-pushserver directly, redirecting a FETCH request to a django-pushserver channel and requesting that new data is sent to that channel
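The core of either option is parking the FETCH request until data arrives, so that a waiting client costs a cheap coroutine rather than a blocked worker process. A toy model of that idea using asyncio primitives (the `Channel` class and its method names are invented for illustration; they are not the Tornado or django-pushserver API):

```python
import asyncio

class Channel:
    """One datastream channel that FETCH requests can wait on."""

    def __init__(self):
        self._data = []
        self._event = asyncio.Event()

    async def fetch(self, timeout=1.0):
        """Park until publish() signals new data, then return it.
        A suspended coroutine is cheap, unlike a blocked Django process."""
        try:
            await asyncio.wait_for(self._event.wait(), timeout)
        except asyncio.TimeoutError:
            return []  # long-poll timeout: client reconnects with same start
        self._event.clear()
        data, self._data = self._data, []
        return data

    async def publish(self, datapoint):
        """Called when new data lands in the datastream."""
        self._data.append(datapoint)
        self._event.set()

async def demo():
    channel = Channel()
    fetcher = asyncio.create_task(channel.fetch())
    await asyncio.sleep(0.01)              # client is now parked on FETCH
    await channel.publish({"t": 1, "v": 42})
    return await fetcher
```

Running `asyncio.run(demo())` returns the published datapoint to the parked fetcher.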
The tricky thing here is scalability: when you have multiple instances of this asynchronous server, you want new data, when it arrives, to be pushed to all clients listening on all instances of the asynchronous server (I am not sure this is handled correctly even in django-pushserver).
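The usual shape of a fix is a shared broker (in production something like Redis pub/sub) that every server instance subscribes to, so each new datapoint fans out to all instances and from there to their parked clients. An in-process sketch of that shape, with invented `Broker` and `ServerInstance` names:

```python
import threading

class Broker:
    """Shared pub/sub hub standing in for an external broker."""

    def __init__(self):
        self._lock = threading.Lock()
        self._subscribers = []

    def subscribe(self, callback):
        with self._lock:
            self._subscribers.append(callback)

    def publish(self, datapoint):
        # Deliver to every instance's listeners, not just the instance
        # that happened to receive the write.
        with self._lock:
            subscribers = list(self._subscribers)
        for callback in subscribers:
            callback(datapoint)

class ServerInstance:
    """One asynchronous-server process; clients park FETCH requests here."""

    def __init__(self, broker):
        self.delivered = []          # datapoints pushed to this instance
        broker.subscribe(self.delivered.append)
```

Publishing through the broker reaches every instance, so clients parked on any of them see the new data.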
Decided not to go for this. The Django HTTP interface will be pure REST only. If we need a dynamic one, we could make another HTTP interface with, for example, Meteor and DDP.