
Ed/29 may 17 #12

Merged
merged 24 commits into from
Jul 24, 2017
Conversation

@eellson eellson commented May 30, 2017

https://www.dropbox.com/s/rvyc1vmurfnykn1/githubprofiler480.mov?dl=0

My solution here is kind of involved, so hear me out:

The TLDR is:

  • Elm application which renders search field
  • Phoenix application which sets up a websocket connection for Elm to send/listen on.

The flow of data through the app looks roughly like so:

  • User loads page
  • Elm renders search field, joins and listens on "typeahead:public" websocket channel
  • User types into field
  • Elm updates the model with the latest query on any change to the field, sending the query across the websocket to the server.
  • Phoenix receives every value the input is set to, but we don't want to send all of these to GitHub (we'd quickly use up our allowance). We use GenStage to send them less frequently:
    • When a client connects to the websocket, we set up a GenStage pipeline linked to their connection.
    • QueryProducer is updated to hold the latest query whenever we receive one through the socket.
    • QueryProducerConsumer then asks the producer for the query, with configurable max_demand and interval. We use this to make sure we only ask for the query once per second.
    • QueryConsumer grabs these queries as they come through, calling out to GitHub and sending the response back over the socket.
  • We've used the GitHub GraphQL API for this, which is cool because we get a higher effective rate limit for this query, and we can specify only the fields we want returned.
  • When Elm receives results across the socket, we decode them, update the model with our results, and this triggers the view to re-render.
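The socket side of the flow above can be sketched roughly as follows. This is a hedged sketch, not the PR's actual code: the module, channel topic handling, and `QueryProducer.update/2` helper are assumptions on my part.

```elixir
defmodule GithubProfiler.TypeaheadChannel do
  use Phoenix.Channel

  # Sketch only: module, event and function names are assumptions,
  # not necessarily those used in this PR.
  def join("typeahead:public", _params, socket) do
    # Start a GenStage producer linked to this channel process.
    {:ok, producer} = QueryProducer.start_link()
    {:ok, assign(socket, :producer, producer)}
  end

  def handle_in("query", %{"query" => query}, socket) do
    # Hand the latest input to the GenStage producer; results come
    # back asynchronously via a push from the consumer stage.
    QueryProducer.update(socket.assigns.producer, query)
    {:noreply, socket}
  end
end
```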

Perhaps I cheated with the rendering of the user, but all the info you wanted is displayed in the typeahead anyway ;)

Good luck finding a cooler stack than this in the near future:

  • Elm
  • Elixir
  • Websockets
  • GenStage
  • GraphQL

eellson added 24 commits May 28, 2017 19:02
I'm using Phoenix 1.3-rc2 for this
`cd assets && npm install elm elm-brunch --save`
Hooks Elm into brunch

I've chosen to keep my .elm files in assets/elm/src, and compile to
assets/vendor. Often people keep their .elm files in web/elm, but I feel
that dir is for Elixir really.

Toyed with compiling Elm to priv/static as we do with other js, but
couldn't get that working.
This hooks up a basic model, update and view for a search field.
Currently this just renders the input text into a p tag, as a proof of
concept.
This adds some basic plumbing for the websocket connection.
This joins the websocket on `init`, sends to it when typing, and renders
the received body to the screen. Currently we're basically just echoing.

Likely could clean this some with some more advanced types.
This adds a rough skeleton for GenStage to be used with our typeahead
search function.

This is currently split into 3 stages:

### `QueryProducer`

`QueryProducer` is unusual for a producer, as the state it holds is just
one query, not a queue of many events. This is because I only care about
the most recent version of whatever has been input by the user, and so
only store that.

Unsure how this will behave with latency atm.
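A minimal sketch of such a single-slot producer, under my own assumptions about its API (the state here is `{latest_query, pending_demand}`; function names are illustrative, not taken from the PR):

```elixir
defmodule QueryProducer do
  use GenStage

  def start_link(opts \\ []), do: GenStage.start_link(__MODULE__, :ok, opts)

  # Called from the channel whenever the user types.
  def update(pid, query), do: GenStage.cast(pid, {:update, query})

  def init(:ok), do: {:producer, {nil, 0}}

  # No pending demand: just overwrite the stored query.
  def handle_cast({:update, query}, {_stale, 0}),
    do: {:noreply, [], {query, 0}}

  # A consumer is already waiting: dispatch immediately.
  def handle_cast({:update, query}, {_stale, demand}),
    do: {:noreply, [query], {nil, demand - 1}}

  # Nothing stored yet: remember the demand for later.
  def handle_demand(demand, {nil, pending}),
    do: {:noreply, [], {nil, pending + demand}}

  # Dispatch the stored query against the new demand.
  def handle_demand(demand, {query, pending}),
    do: {:noreply, [query], {nil, pending + demand - 1}}
end
```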

### `QueryProducerConsumer`

`QueryProducerConsumer` takes care of grabbing the state from the
producer, at a configurable rate. I've taken this implementation from an
example in the Elixir docs, and think it might be more complex than I
need.
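The docs example referred to is the `RateLimiter` from the GenStage documentation, which takes manual control of upstream demand. A sketch of how that pattern applies here (the interval value and module names are my assumptions):

```elixir
defmodule QueryProducerConsumer do
  use GenStage

  @interval 1_000  # ask the producer at most once per second

  def start_link(producer), do: GenStage.start_link(__MODULE__, producer)

  def init(producer) do
    {:producer_consumer, :ok, subscribe_to: [producer]}
  end

  # Take manual control of demand on the upstream subscription
  # so we can pace how often we ask for the latest query.
  def handle_subscribe(:producer, _opts, from, state) do
    ask_and_schedule(from)
    {:manual, state}
  end

  def handle_subscribe(:consumer, _opts, _from, state),
    do: {:automatic, state}

  def handle_info({:ask, from}, state) do
    ask_and_schedule(from)
    {:noreply, [], state}
  end

  # Forward events straight through to the consumer.
  def handle_events(events, _from, state), do: {:noreply, events, state}

  defp ask_and_schedule(from) do
    GenStage.ask(from, 1)
    Process.send_after(self(), {:ask, from}, @interval)
  end
end
```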

### `QueryConsumer`

This simply prints the latest event to the screen.
This updates the GenStage implementation to be aware of sockets, meaning
we can update the producer for a given channel, and push from the
consumer to the channel it knows about.

This is good in that it means we could have more than one user on this
at once, and they'd only see their own messages. However, the rate limit
will now be exceeded if the limit is per-application, rather than
per-socket, which seems likely.
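A socket-aware consumer along these lines might look like the sketch below. `Github.search_users/1` is a placeholder for whatever client module the PR actually uses, and the `"results"` event name is assumed:

```elixir
defmodule QueryConsumer do
  use GenStage

  # Started per channel join, so the consumer knows which socket
  # to push results to.
  def start_link(socket), do: GenStage.start_link(__MODULE__, socket)

  def init(socket), do: {:consumer, socket}

  def handle_events(queries, _from, socket) do
    for query <- queries do
      # Hypothetical GitHub client call; the real module differs.
      {:ok, results} = Github.search_users(query)
      Phoenix.Channel.push(socket, "results", results)
    end

    {:noreply, [], socket}
  end
end
```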
Rough working implementation rendering users found based on query 🤠
This slipped through my net. Token revoked
This is quite a large refactor, but should make things work better going
forwards. Should be functional at this commit, but RateLimiter needs
updating to send demand every second.
This switches to using the internal buffers in `Query`. This handles
buffering somewhat better than my manual implementation, as for
instance:

* if we receive events but have not enough demand, we enqueue them. We
set this buffer size to 1, so that we never enqueue more than 1 event,
which makes sense for our purposes (no point rendering stale search
results).
* if we receive demand but no events, the next events to come through are
dispatched immediately, so the search can happen straight away
instead of waiting for the next round of demand.

This is much better than what we had before. The main issue I see with
it is that, as far as I can see, there is no way to cap the pending
demand buffer, so it would seem to be unbounded. I'd rather limit this
to say 100 (we could likely handle this somehow in the producer_consumer).
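For reference, the built-in event buffer is configured via the options returned from the producer's `init/1`; a sketch under the assumptions used earlier (the `{nil, 0}` state shape is mine, not the PR's):

```elixir
def init(:ok) do
  # Keep at most one buffered event: if a newer query arrives before
  # there is demand, GenStage discards the stale one for us, since
  # :buffer_keep defaults to :last.
  {:producer, {nil, 0}, buffer_size: 1}
end
```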
@simonfl3tcher simonfl3tcher merged commit 9bcb700 into master Jul 24, 2017
@simonfl3tcher simonfl3tcher deleted the ed/29-may-17 branch July 24, 2017 10:02