
Info stream #336

Merged: 11 commits merged into master from info-stream on Jun 3, 2019

Conversation

umputun
Owner

@umputun umputun commented Jun 3, 2019

This one adds an API to stream minimal updates for a given post. The goal is to provide the UI with a way to subscribe to such update events and be able to refresh or notify the user (see #253)

The API is /stream/info and it allows something like this:

curl -n "http://127.0.0.1:8080/api/v1/stream/info?site=remark&url=https://remark42.com/demo/"

{"url":"https://remark42.com/demo/","count":19,"first_time":"2018-09-02T13:27:48.680420472-05:00","last_time":"2019-05-12T13:11:50.548327608-05:00"}
{"url":"https://remark42.com/demo/","count":20,"first_time":"2018-09-02T13:27:48.680420472-05:00","last_time":"2019-06-02T15:29:27.799347624-05:00"}
{"url":"https://remark42.com/demo/","count":21,"first_time":"2018-09-02T13:27:48.680420472-05:00","last_time":"2019-06-02T15:29:50.941572771-05:00"}
{"url":"https://remark42.com/demo/","count":20,"first_time":"2018-09-02T13:27:48.680420472-05:00","last_time":"2019-06-02T15:29:50.941572771-05:00"}
...

note: notifications are sent on new/updated comments as well as on removed comments, so the count may be decremented or incremented

The stream terminates on a timeout (no new comments for some time, currently 15 minutes) or when the client closes the connection. The frontend integration should start such a call on a comment page, listen to updates, and reconnect as needed. It would also be nice to run it on the active page only.
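
For illustration, a minimal sketch of such a frontend consumer (TypeScript, assuming the newline-delimited JSON format shown above; listenInfoStream and onUpdate are made-up names, not part of this PR):

interface InfoEvent {
  url: string;
  count: number;
  first_time: string;
  last_time: string;
}

async function listenInfoStream(url: string, onUpdate: (e: InfoEvent) => void): Promise<void> {
  const resp = await fetch(url);
  if (!resp.body) throw new Error("streaming not supported");
  const reader = resp.body.getReader();
  const decoder = new TextDecoder();
  let buf = "";
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break; // server closed the stream, e.g. after the 15m inactivity timeout
    buf += decoder.decode(value, { stream: true });
    const lines = buf.split("\n");
    buf = lines.pop() ?? ""; // keep the trailing partial line for the next chunk
    for (const line of lines) {
      if (line.trim()) onUpdate(JSON.parse(line) as InfoEvent);
    }
  }
}

// listenInfoStream("/api/v1/stream/info?site=remark&url=https://remark42.com/demo/", e => console.log(e.count));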

@coveralls

Coverage Status

Coverage increased (+0.007%) to 84.948% when pulling 3bb0ede on info-stream into bccf716 on master.

@umputun umputun merged commit ca083f4 into master Jun 3, 2019
@umputun umputun deleted the info-stream branch June 3, 2019 16:34
@umputun umputun mentioned this pull request Jun 3, 2019
@Reeywhaar
Collaborator

I've stumbled upon SSE just now, what do you think of it? https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events

You've made basically the same thing, except to support it I'd have to reinvent the frontend plumbing that SSE already has.

Also, is there a way to ask for comments since a given time? In case of a network error I can miss some comments, and then receive replies to them.
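
For comparison, consuming an SSE endpoint from the frontend is just this much (sketch; assumes the server speaks the text/event-stream format):

const es = new EventSource("/api/v1/stream/info?site=remark&url=https://remark42.com/demo/");
es.onopen = () => console.log("connected");
es.onmessage = (e) => console.log("update:", JSON.parse(e.data)); // unnamed events
es.onerror = () => console.log("error; the browser retries automatically");
// es.close() when the user leaves the page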

@umputun
Owner Author

umputun commented Jun 18, 2019

sure, making it frontend-friendly should be easy. Probably we can also combine both events (info stream and last comments stream) into a single "/events" endpoint and allow partial subscription via query.

regarding the since parameter - not sure what you're asking. Streams (events) are "since now", i.e. from the moment the request is received. Some regular calls have a since parameter already, for example GET /last/

@Reeywhaar
Collaborator

Reeywhaar commented Jun 18, 2019

I mean this use case:

  • I start listening to /stream/info
  • I lose the network (wifi, lte issues)
  • The network reappears and I reconnect. The problem is that there was a window while the network was down during which comments could have been made, so I miss them and then start receiving replies to them. The solution is to either add a since parameter to the stream, or to the regular /api/v1/find, so I can fetch comments from the moment I lost the network.

...or add "&url=post-url" to /api/v1/last, so it shows only comments for a particular post, not for the whole site id.

umputun added a commit that referenced this pull request Jun 18, 2019
@umputun
Owner Author

umputun commented Jun 18, 2019

I have changed both streams, take a look and check whether the UI can consume them

umputun added a commit that referenced this pull request Jun 18, 2019
@umputun
Owner Author

umputun commented Jun 18, 2019

makes sense. Added an optional query param "since=unix-ts-msec" to both streaming calls
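
A sketch of how a client might use it (TypeScript; the timestamp handling and the 3s delay are illustrative, not part of the API):

let lastSeen = Date.now();

function connect(): EventSource {
  const es = new EventSource(`/api/v1/stream/last?site=remark&since=${lastSeen}`);
  es.addEventListener("last", (e) => {
    lastSeen = Date.now(); // or parse the comment's own `time` field
    console.log("new comment:", (e as MessageEvent).data);
  });
  // EventSource reconnects on its own, but re-creating it lets us refresh `since`
  es.onerror = () => {
    es.close();
    setTimeout(connect, 3000);
  };
  return es;
}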

@Reeywhaar
Collaborator

Reeywhaar commented Jun 18, 2019

Hm, it seems to be barely working. It takes too long to connect (about 10 seconds to 2 minutes, yikes), receives an error after ~2 minutes, then again slowly reconnects. I've attached a zipped html test file which connects to http://127.0.0.1:8080/api/v1/stream/info?site=remark&url=https%3A%2F%2Fremark42.com%2Fdemo%2F so you can check yourself. I don't know if something is wrong with the server, or if the API is just this laggy and we should use something else.

[Screenshot 2019-06-18 at 23:30:46]

index.html.zip

curl seems to connect fine, but then again errors out with curl: (18) transfer closed with outstanding read data remaining

@umputun
Owner Author

umputun commented Jun 18, 2019

It takes too long to connect (about 10 seconds to 2 minutes, yikes)

I don't get it. What does "takes too long to connect" mean? Technically it accepts the connection right away and holds it open for some time. Maybe your client waits until the connection is closed?

then again slowly reconnects

What exactly reconnects slowly? Is this about the frontend HTTP client? I don't have anything related to reconnects on my side.

but then again errors out with curl: (18) transfer closed with outstanding read data remaining

curl will eventually fail on a timeout due to inactivity. The default one is 15m on my side, but as far as I recall curl itself has a timeout as well. I think it is --max-time, and I'm not sure what the default is.

@Reeywhaar
Collaborator

Reeywhaar commented Jun 18, 2019

It takes too long to connect (about 10 seconds to 2 minutes, yikes)

I mean the time between the EventSource call starting and the moment it receives the open event. I guess the open event comes right after it has received all the HTTP headers and \n\n. By the way, I see you're not sending headers until the first event appears.

then again slowly reconnects

Is it about frontend http client?

Yes, the EventSource API reconnects automatically after 3 seconds by default; this can be changed on the server side (see the retry field).
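
For reference, the retry interval travels in the event stream itself: the server emits a retry: line (milliseconds) alongside the events. An illustrative frame:

retry: 10000

event: info
data: {"url":"https://remark42.com/demo/","count":21}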

curl will eventually fail on a timeout due to inactivity. The default one is 15m on my side, but as far as I recall curl itself has a timeout as well. I think it is --max-time, and I'm not sure what the default is.

Maybe you're right. But I noticed that I get this error response exactly after I post a comment.

Also, I've found a demo that seems to work fine with SSE, and the js there shows no magic, it's the same: https://jsbin.com/vigehujara/edit?html,js,output

curl -v -H "Accept: text/event-stream" https://sse.now.sh/

curl doesn't time out there.

Unfortunately, I see no sources for its backend, so I can't suggest anything concrete.

@umputun
Owner Author

umputun commented Jun 18, 2019

From what I can see on my side, the actual disconnect is initiated by the client; I'm getting "stream closed by a remote client, context canceled". Probably this is addressed by setting the keep-alive header. Another suspicious thing was a global server-level write timeout.

With this pair of fixes, the example you gave me looks more stable. Pls take a look and let me know if you like or hate it ;)

@Reeywhaar
Collaborator

Reeywhaar commented Jun 30, 2019

hmmm, it still behaves a bit odd:

Trying to get the last comments stream, you can see that it:

  • takes too long to connect (I started typing sequential numbers starting from 1 in comments after I launched curl)
  • skips comments (see the output below):
curl -v http://127.0.0.1:8080/api/v1/stream/last\?site\=remark
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> GET /api/v1/stream/last?site=remark HTTP/1.1
> Host: 127.0.0.1:8080
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< App-Name: remark42
< App-Version: local
< Author: umputun
< Cache-Control: no-cache
< Connection: keep-alive
< Content-Type: text/event-stream
< Expires: Wed, 31 Dec 1969 18:00:00 CST
< Pragma: no-cache
< Vary: Origin
< X-Accel-Expires: 0
< X-Rate-Limit-Duration: 1
< X-Rate-Limit-Limit: 10.00
< X-Rate-Limit-Request-Forwarded-For:
< X-Rate-Limit-Request-Remote-Addr: 172.21.0.1:50402
< Date: Sun, 30 Jun 2019 20:59:16 GMT
< Transfer-Encoding: chunked
<
event: last
data: [{"id":"be7cca68-9e5a-4e2c-acb4-72147ada4d1b","pid":"","text":"<p>9</p>\n","orig":"9","user":{"name":"safariman","id":"safariman","picture":"http://127.0.0.1:8080/api/v1/avatar/f125202a6e062f8c6ad38a54c99dc8ca56a37fdc.image","admin":false},"locator":{"site":"remark","url":"https://remark42.com/demo/"},"score":0,"vote":0,"time":"2019-06-30T15:59:15.7894815-05:00","title":"remark42 demo page"}]

event: last
data: [{"id":"45e2161f-8f77-4668-a3ba-6e4910df2fdc","pid":"","text":"<p>13</p>\n","orig":"13","user":{"name":"safariman","id":"safariman","picture":"http://127.0.0.1:8080/api/v1/avatar/f125202a6e062f8c6ad38a54c99dc8ca56a37fdc.image","admin":false},"locator":{"site":"remark","url":"https://remark42.com/demo/"},"score":0,"vote":0,"time":"2019-06-30T15:59:30.7327556-05:00","title":"remark42 demo page"}]

event: last
data: [{"id":"6b95aec1-a4a1-4b27-8f38-f24062d66cd8","pid":"","text":"<p>15</p>\n","orig":"15","user":{"name":"safariman","id":"safariman","picture":"http://127.0.0.1:8080/api/v1/avatar/f125202a6e062f8c6ad38a54c99dc8ca56a37fdc.image","admin":false},"locator":{"site":"remark","url":"https://remark42.com/demo/"},"score":0,"vote":0,"time":"2019-06-30T15:59:36.2513977-05:00","title":"remark42 demo page"}]

event: last
data: [{"id":"b060cd96-bb96-4f80-b0c3-846af1783fbc","pid":"","text":"<p>17</p>\n","orig":"17","user":{"name":"safariman","id":"safariman","picture":"http://127.0.0.1:8080/api/v1/avatar/f125202a6e062f8c6ad38a54c99dc8ca56a37fdc.image","admin":false},"locator":{"site":"remark","url":"https://remark42.com/demo/"},"score":0,"vote":0,"time":"2019-06-30T15:59:41.1290661-05:00","title":"remark42 demo page"}]

Is this intentional?

Also, I noticed that the connection can be unstable when we're dealing with nginx or some kind of proxy.

@umputun
Owner Author

umputun commented Jun 30, 2019

takes too long to connect

tried with --no-buffer?

skips comments

This is expected. Notifications are not trying to deliver a stream of updates, but rather act as a signal that "a change happened". The technical reason for the skipped comments is here (it requests a single comment after sinceTime).

It is possible to change the logic if needed and get up to N comments instead of 1

upd: let me know if you want to increase the limit and include up to N comments in the event's data.

@Reeywhaar
Collaborator

Yeah, --no-buffer really helps; now I know what could have been causing such issues with the webpack-dev-server proxy, thanks!

It is possible to change the logic if needed and get up to N comments instead of 1

I don't think there is a need; I just refetch the last comments when an event is received, I don't even look at the data it contains.
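
That pattern in a nutshell (sketch; the exact /api/v1/last query string follows the regular API, not this PR):

const es = new EventSource("/api/v1/stream/last?site=remark");
es.addEventListener("last", async () => {
  // the event payload may skip comments, so treat it as a "something changed"
  // signal and refetch the authoritative list instead
  const resp = await fetch("/api/v1/last?site=remark");
  const comments = await resp.json();
  console.log("refetched", comments.length, "comments");
});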

@umputun
Owner Author

umputun commented Jun 30, 2019

regarding nginx - there are probably some magic parameters to make it play better with such streams. Maybe a read timeout of some kind.
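
An untested guess at the nginx side, based on common SSE setups (upstream name and values are illustrative):

location /api/v1/stream/ {
    proxy_pass          http://remark42:8080;
    proxy_buffering     off;   # don't buffer the event stream
    proxy_cache         off;
    proxy_read_timeout  1h;    # outlive the server's 15m inactivity timeout
    proxy_http_version  1.1;
    proxy_set_header    Connection "";
}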

@umputun umputun added this to the v1.4 milestone Jul 27, 2019
@umputun umputun mentioned this pull request Jul 27, 2019
@Ivanezko
Contributor

I don't see this code in the master branch.
As a result, I get a 404, just like on the demo site https://demo.remark42.com/api/v1/stream/info?site=remark

@umputun
Owner Author

umputun commented Mar 16, 2021

we never implemented a proper UI for this and decided to drop the code

@Ivanezko
Contributor

Ivanezko commented Mar 17, 2021

Pretty strange decision.
I do not need a UI for this at all.
I need to export the comment count in real time, to save it to my db and sort the articles. Would you suggest merging this code into my fork myself?

@umputun
Owner Author

umputun commented Mar 17, 2021

Pretty strange decision.

The code was designed with a very particular goal in mind - to provide a stream of updates for dynamic UI changes. The UI part never happened, and so that API was removed.

Would you suggest merging this code into my fork myself?

Up to you. To me it sounds like a bad idea, but if you are willing to maintain your own fork and backport/merge changes from master - sure, fine. Pls note - another reason we dropped the unused streaming API was some rare but painful issues we had with concurrent tests. It didn't cause races, but sometimes the results of those tests were incorrect (unexpected).

Maybe you can figure out what was wrong with that code and provide a PR with a fixed implementation. Another possible solution for your problem - call /api/v1/last every X seconds (millis). It won't be truly realtime, but could be enough depending on the use case.
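
A sketch of that polling fallback (TypeScript; the interval and handling are illustrative):

async function pollLast(site: string, intervalMs = 5000): Promise<never> {
  for (;;) {
    try {
      const resp = await fetch(`/api/v1/last?site=${site}`);
      const comments = await resp.json();
      console.log("comments:", comments.length); // diff counts / save to db here
    } catch (err) {
      console.warn("poll failed, will retry", err);
    }
    await new Promise<void>((r) => setTimeout(r, intervalMs));
  }
}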
