feat: implement Identify method #89
Conversation
This pull request has been linked to Shortcut Story #202711: Support Identify method.
```cpp
// Bridge the asynchronous Identify into a synchronous call: the contract
// test harness blocks on a future that the completion callback resolves.
auto identify_future = identify_promise.get_future();
client_->AsyncIdentify(*maybe_ctx, [&]() { identify_promise.set_value(); });
identify_future.wait();
return nlohmann::json{};
```
This blocks the thread serving the request, but that's fine since the contract tests execute serially (and it's correct, since the contract tests assume Identify happens synchronously).
```cpp
// The ASIO implementation assumes that the io_context will be run from a
// single thread, and applies several optimisations based on this assumption.
auto const kAsioConcurrencyHint = 1;
```
Since we are running it on a single thread, we should inform asio so it can remove a bunch of locking.
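For context, a minimal sketch of how such a hint reaches ASIO, assuming Boost.Asio (only the constant name mirrors the diff above; the rest is illustrative):

```cpp
#include <boost/asio/io_context.hpp>
#include <boost/asio/post.hpp>
#include <iostream>

int main() {
    // A concurrency hint of 1 tells ASIO that only one thread will ever run
    // this io_context, which lets it skip internal locking.
    auto const kAsioConcurrencyHint = 1;
    boost::asio::io_context ioc{kAsioConcurrencyHint};

    boost::asio::post(ioc, [] { std::cout << "ran on the single thread\n"; });
    ioc.run();  // run from exactly one thread, as promised by the hint
}
```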
```cpp
void PollingDataSource::DoPoll() {
    last_poll_start_ = std::chrono::system_clock::now();
    // ...
```
This is kind of a gross diff; what I've done is pull the contents of the inline callback in `requester_.Request` out into a dedicated function. Then I check the weak pointer before calling it.
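A self-contained sketch of that pattern; the class and method names here are illustrative, not the SDK's:

```cpp
#include <iostream>
#include <memory>

// The completion handler captures a weak_ptr to the data source and locks it
// before touching any members, so a handler that fires after destruction is
// a harmless no-op instead of undefined behavior.
class PollingSource : public std::enable_shared_from_this<PollingSource> {
   public:
    auto MakePollHandler() {
        return [weak_self = weak_from_this()]() {
            if (auto self = weak_self.lock()) {
                self->HandlePollResult();
            }
            // If the lock fails, the source was destroyed; do nothing.
        };
    }

   private:
    void HandlePollResult() { std::cout << "handled poll result\n"; }
};

int main() {
    auto source = std::make_shared<PollingSource>();
    auto handler = source->MakePollHandler();
    handler();       // runs: the source is alive
    source.reset();  // destroy the source
    handler();       // no-op: the weak_ptr lock fails
}
```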
| "Could not parse streaming endpoint URL."; | ||
|
|
||
| StreamingDataSource::StreamingDataSource( | ||
| Config const& config, |
I've pulled most of the contents of the constructor into the Start method. This is necessary so we can pass a weak pointer to the Datasource into the eventsource client's callbacks.
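The reason the constructor can't do this itself: `shared_from_this()`/`weak_from_this()` can't be used during construction, since no `shared_ptr` owns the object yet. A minimal sketch with illustrative names:

```cpp
#include <functional>
#include <iostream>
#include <memory>

class StreamingSource : public std::enable_shared_from_this<StreamingSource> {
   public:
    StreamingSource() = default;  // no callback registration here

    void Start() {
        // Safe: by the time Start() runs, a shared_ptr owns this object,
        // so weak_from_this() yields a usable weak pointer.
        on_event_ = [weak_self = weak_from_this()]() {
            if (auto self = weak_self.lock()) {
                std::cout << "event handled; source still alive\n";
            }
        };
    }

    void SimulateEvent() {
        if (on_event_) on_event_();
    }

   private:
    std::function<void()> on_event_;
};

int main() {
    auto source = std::make_shared<StreamingSource>();
    source->Start();
    source->SimulateEvent();
}
```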
Force-pushed from 8104b0d to e290a12.
```cpp
std::chrono::system_clock::now(),
key,
context_,
ReadContextSynchronized([](Context const& c) { return c; }),
```
Conceptually this makes me a bit uncomfortable, but I think it is fine. The concern is basically the relationship between the flags we get, the current context, and the event.
The client shouldn't be changing context all the time, so it probably isn't a real concern.
I'm thinking about this and am not sure of the best way to make it consistent. The flag data manager doesn't know about the context. We could potentially write-lock a mutex for the duration of Identify and then read-lock it in the Variation calls, but that's a long/indefinite lock.
Maybe something like a generation integer that we could compare? Or store the context in the manager and compare against that? Just some thoughts.
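For illustration only, a rough sketch of the generation-integer idea; nothing here is the SDK's actual API:

```cpp
#include <atomic>
#include <cstdint>

// Hypothetical sketch: Identify bumps a generation counter, and flag data is
// tagged with the generation it was fetched under. Variation can then detect
// whether its flags are still consistent with the most recent Identify.
class GenerationTracker {
   public:
    // Called at the start of Identify; returns the new generation.
    std::uint64_t BeginIdentify() { return ++generation_; }

    // Called when flag data arrives, to tag it.
    std::uint64_t Current() const { return generation_.load(); }

    // Called in Variation: true if data tagged with `tagged` still matches
    // the latest context.
    bool IsConsistent(std::uint64_t tagged) const {
        return tagged == generation_.load();
    }

   private:
    std::atomic<std::uint64_t> generation_{0};
};
```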
This commit adds support for `IdentifyAsync`, which causes the client to shut down its data source, switch contexts, and restart the data source. Since it returns a future, users can call `wait()` to make it synchronous. An alternative API would be to accept a completion handler, but that would imply executing user callbacks on a shared or separate executor, and probably isn't as convenient for the common case of blocking.

To enable safe shutdown of the Eventsource/Datasources, I had to refactor them to use shared ownership.
This is because the eventsource client (or polling requester) has completion handlers queued within asio that hold pointers back to those objects.
If we've shut down and destroyed those objects, the handlers will cause undefined behavior when they execute and access the destroyed object.
Now we pass a `weak_ptr` to the Datasource into the Eventsource client's callbacks and lock the weak pointer before accessing any members (such as the logger), so if a completion handler finally executes long after the object is destroyed, it should be harmless.
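A hedged usage sketch of the future-based API described above; `FakeClient` is a stand-in, since only the `IdentifyAsync`/future shape comes from this PR:

```cpp
#include <future>
#include <iostream>

// Stub standing in for the SDK client, to illustrate the future-based API
// shape. Everything except the Async-returning-a-future shape is assumed.
struct FakeClient {
    std::future<void> IdentifyAsync(/* Context new_context */) {
        std::promise<void> done;
        done.set_value();  // a real client resolves this after the data
                           // source restarts with the new context
        return done.get_future();
    }
};

int main() {
    FakeClient client;
    auto identified = client.IdentifyAsync();
    identified.wait();  // users can block here to make Identify synchronous
    std::cout << "identify complete\n";
}
```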