Allow to query any subgraph present in the store #640
Conversation
Love it! Just got a couple of comments. Plus, I'd like us to close WS connections if we can. Could also push that out into a separate PR if it's non-trivial.
@@ -6,16 +6,19 @@ use diesel::sql_types::Text;
use diesel::{delete, insert_into, select, update};
use filter::store_filter;
use futures::sync::mpsc::{channel, Sender};
use lru_time_cache::LruCache;
I was thinking last night: I wonder what cache Leo will pick for this, LRU would make sense. And here we go 😍
Design by thought transmission!
Some things to note here: a built-in cache is of course the fastest option, but the cache size won't really scale with the number of query nodes. Since queries have no affinity to particular query nodes (and we probably want to avoid that), all query nodes will likely end up with very similar cache contents. In fact, having more query nodes means it takes more initial queries before all nodes start getting consistent cache hits.
Having it is still better than not having it, but we should be aware of the limitations.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
True. I think we can live with that.
Force-pushed from 896c9da to e560be0.
@Jannis All comments addressed.
Nice, it's pretty great that we don't need the schema events anymore!
@timmclean Addressed your comments.
@Jannis @timmclean Ready for re-review.
Looks good!
The added 404 response is great.
An LRU cache with capacity of 100.
Force-pushed from 423dad2 to 312e433.
Rebased, needs to be re-approved.
Resolves #534. The core logic is in the new store method `schema_of`, which fetches schemas by id, with a cache layer to improve performance. This turns out to simplify a lot of code. We're also careful to check in the store whether the subgraph is deployed before querying or subscribing to it, with the new method `is_deployed`.

In `server/http/src/service.rs` we were doing some validations when serving GraphiQL that were redundant (and a bit inconsistent) with those done when receiving queries. So now we unconditionally serve GraphiQL.

In `server/websocket`, we had logic to cancel outstanding subscriptions when a subgraph was removed. However, the `SchemaEvent::Removed` that triggered it no longer exists. I chose to remove the logic entirely, since in light of the changes in this PR we'd need to implement it differently. It was not sufficient to prevent the server from being overloaded with subscriptions anyway, so we're likely to revisit that soon.