Seconding @ontofractal's point. Cloudflare will not cache POST requests; it caches based only on the URL. An option to send queries as GET requests would be helpful. Apollo Client, for example, has the useGETForQueries option for exactly this reason: setting up caching is easier with GET requests.
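For reference, that option is set on the HTTP link when creating the client; mutations still go out as POST. A minimal sketch (the endpoint URL is a placeholder):

```ts
import { ApolloClient, InMemoryCache, HttpLink } from "@apollo/client";

// useGETForQueries sends queries as GET requests (mutations stay POST),
// which makes them cacheable by URL-keyed CDNs like Cloudflare.
const client = new ApolloClient({
  cache: new InMemoryCache(),
  link: new HttpLink({
    uri: "https://example.com/v1/graphql", // placeholder endpoint
    useGETForQueries: true,
  }),
});
```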
I was about to set up Hasura in production when I noticed this is not currently possible. If anyone knows a workaround, I'd love to hear it. Thanks!
Apollo Server supports Automatic Persisted Queries: the client switches queries from POST to GET, while mutations remain POST.
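On the client side that looks roughly like the sketch below (assumes Apollo Client 3 and the crypto-hash package for the sha256 helper; the endpoint is a placeholder):

```ts
import { ApolloClient, InMemoryCache, HttpLink } from "@apollo/client";
import { createPersistedQueryLink } from "@apollo/client/link/persisted-queries";
import { sha256 } from "crypto-hash";

// With useGETForHashedQueries, persisted queries are sent as GET requests
// keyed by their hash, so a CDN can cache them; mutations still use POST.
const link = createPersistedQueryLink({ sha256, useGETForHashedQueries: true })
  .concat(new HttpLink({ uri: "https://example.com/v1/graphql" }));

const client = new ApolloClient({ cache: new InMemoryCache(), link });
```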
Once the GET method is supported, the ETag header can be implemented: #2792
Supporting the ETag header avoids transferring data that matches the version already in the browser cache.
When the application starts, its initial state data is loaded.
If you refresh the page and the data has not changed since your last visit, the application loads immediately, without waiting for the network transfer.
Very useful for static data (or data that changes rarely), or when the data is usually modified by you (your profile, your last messages sent...).
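Concretely, the exchange would look something like this (illustrative only; the ETag value is made up):

```
# First request: the server returns the data plus an ETag
GET /v1/graphql?query=... HTTP/1.1

HTTP/1.1 200 OK
ETag: "abc123"
{"data": {...}}

# Later request: the browser echoes the ETag back; if the data is
# unchanged, the server answers 304 with an empty body and the
# cached copy is used
GET /v1/graphql?query=... HTTP/1.1
If-None-Match: "abc123"

HTTP/1.1 304 Not Modified
```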
I figured out how to convert GET requests to POST in order to cache idempotent GraphQL queries via CDN!
I'm writing this up because I've received a lot of help from Hasura devs and want to give something back. There may be a better way; this is what I did:
Send a GET to /graphql-get/, which proxies to a customized openresty build of nginx. Openresty converts the GET to a POST and proxies it back to Hasura's /v1/graphql/. Then set up Cloudflare as normal for any API.
Using Hasura's DigitalOcean droplet, which comes with Caddy, I modified the Caddyfile:
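The change amounts to adding a route for /graphql-get that forwards to openresty. A sketch in Caddy v1 syntax (the openresty port 8081 is a placeholder, and the graphql-engine line reflects the droplet's stock file):

```
:80, :443 {
  # New: send GET-style queries to openresty, which rewrites them to POST
  proxy /graphql-get localhost:8081 {
    transparent
  }

  # Existing: everything else goes straight to graphql-engine as before
  proxy / graphql-engine:8080 {
    websocket
  }
}
```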
Then, install openresty. It will fail to start on its own because it uses port 8080 by default, which is already in use on Hasura's Caddy installation.
Move the default configuration aside and create a new one:
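A minimal sketch of what that configuration does (the listen port and domain are placeholders, and query-string parameters such as variables are assumed to arrive JSON-encoded):

```nginx
worker_processes 1;
events { worker_connections 256; }

http {
  server {
    listen 8081;  # placeholder; anything that doesn't collide with 8080

    location /graphql-get {
      content_by_lua_block {
        local cjson = require "cjson.safe"
        local args = ngx.req.get_uri_args()

        -- Rebuild the JSON body Hasura expects from the query string
        local body = cjson.encode({
          query = args.query,
          variables = args.variables and cjson.decode(args.variables) or nil,
          operationName = args.operationName,
        })

        -- Re-issue the request as a POST via the internal location below
        local res = ngx.location.capture("/hasura", {
          method = ngx.HTTP_POST,
          body = body,
        })
        ngx.status = res.status
        ngx.header["Content-Type"] = "application/json"
        ngx.say(res.body)
      }
    }

    location /hasura {
      internal;
      # Plain http back through Caddy; see the redirect caveat below
      proxy_pass http://example.com/v1/graphql;
      proxy_set_header Content-Type "application/json";
    }
  }
}
```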
Note above that I am proxying to http://..., not https. This is fine for me, but you may want to look into it if that concerns you; configuring nginx to use Caddy's SSL certificates seemed unnecessary. When I first did this, requests were received by Hasura as GET requests despite having been rewritten as POST. That confused me until I found this issue, which describes how to configure Caddy to disable the automatic redirect from http to https; the redirect was evidently causing the rewritten POST to be replayed as a GET.
Note that Caddy v2 is coming soon, and after that, support may be added so that the proxy via openresty/nginx is no longer necessary. I discussed that with the Caddy developers in this thread.
Then, point DNS to Cloudflare and set up a page rule that looks like this. A Cache Level of "Cache Everything" enables caching JSON responses, and "Edge Cache TTL" tells Cloudflare how long to keep a result before requesting it again. By default, the Browser Cache TTL is 4 hours, and you can set it as low as 30 minutes.
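In text form, the page rule amounts to something like this (the URL pattern and TTL value are placeholders for your own setup):

```
If the URL matches:  example.com/graphql-get/*
Settings:
  Cache Level:     Cache Everything
  Edge Cache TTL:  30 minutes
```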
The above sends minimal headers, which keeps the GET a CORS "simple request", so the browser does not send an OPTIONS (preflight) request.
By the way, I first tried to convert GET requests to POST via a Cloudflare Worker, since they have a template for exactly this task. That did not work for me, perhaps because I had not yet set "Cache Level → Cache Everything" and "Edge Cache TTL" in the Cloudflare page rule, which I later discovered was necessary. I may try this again later to satisfy my curiosity. In any case, Workers have some limits, and you can avoid those by implementing it yourself.
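For anyone who wants to retry that route, the worker would look roughly like the sketch below (path names follow the setup above; error handling is omitted):

```ts
// Sketch of a Cloudflare Worker that rewrites a cacheable GET back into
// the POST that Hasura expects. Paths follow the setup described above.
addEventListener("fetch", (event: FetchEvent) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request: Request): Promise<Response> {
  const url = new URL(request.url);
  if (request.method !== "GET" || !url.pathname.startsWith("/graphql-get")) {
    return fetch(request); // pass everything else through untouched
  }
  const variables = url.searchParams.get("variables");
  const body = JSON.stringify({
    query: url.searchParams.get("query"),
    variables: variables ? JSON.parse(variables) : undefined,
    operationName: url.searchParams.get("operationName") ?? undefined,
  });
  return fetch(`${url.origin}/v1/graphql`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body,
  });
}
```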
That's it! Now watch Hasura unveil support for GET requests and make all this pointless =)