Support multipart file uploads through remote schema #2419
Comments
I have the same issue.
Yeah, you can't upload files through a remote schema at the moment. There are some workarounds.
As a workaround I'm base64-encoding files and sending them as strings to Hasura, which then forwards the request to a remote schema that decodes the string. This does result in a 33% payload-size increase, which is not ideal, but it works otherwise.
Any updates on the ability to upload files through Hasura?
I'd love to see a fix for this too. I just gave it a try, but the upload fails with an error.
Are there any updates on this?
Running into this issue as well. Any progress on multipart file uploads through remote schemas in Hasura?
Got the same issue. Any updates on this?
If you are uploading files to AWS S3 (or Azure/GCP), there is a simple way that doesn't require launching another server to process file uploads or creating a backend handler for a Hasura action. When you upload files to S3, it's better to get signed URLs from the backend and upload to S3 directly. As a bonus, for hosting multiple image sizes this approach is easy and painless. The critical point is how to get an S3 signed URL for the upload:

```js
const AWS = require("aws-sdk");
const s3 = new AWS.S3({ apiVersion: "2006-03-01" });
const signedUrl = s3.getSignedUrl("putObject", {
  Bucket: "my-bucket",
  Key: "path/to/file.jpg",
  Expires: 600,
});
console.log("signedUrl", signedUrl);
```

The most important part of the resulting URL is the signature. After digging into the AWS JS SDK, we can find that the signature is computed here:

```js
return util.crypto.lib.createHmac(fn, key).update(string).digest(digest);
// with:
// fn     = 'sha1'
// string = 'PUT\n\n\n1621135558\n/my-bucket/path/to/file.jpg'
// digest = 'base64'
```

It's just a SHA1 HMAC of a string in a certain format. This means we can use Hasura computed fields and Postgres crypto functions to achieve the same result. So if you have a table `files`:

```sql
CREATE TABLE files (
  id SERIAL,
  created_at timestamptz,
  filename text,
  user_id integer
);
```

you can create a SQL function:

```sql
CREATE OR REPLACE FUNCTION public.file_signature(file_row files)
RETURNS text
LANGUAGE sql
STABLE
AS $function$
  SELECT ENCODE( HMAC(
    'PUT' ||E'\n'||E'\n'||E'\n'||
    (cast(extract(epoch from file_row.created_at) as integer) + 600)
    ||E'\n'|| '/my-bucket/' || file_row.filename
    , 'AWS_SECRET', 'SHA1'), 'BASE64')
$function$
```

Finally, expose this computed field to Hasura (see the computed fields docs), and add permission rules so that only the owner user can access the "signature" field. This way lets you avoid adding any backend components and handle all permissions (via the user_id field in the files table) in Hasura.
I'm also interested in doing this. Is it on the roadmap, or has there been any activity on this in the past two years?
I have the same issue.
Same issue. God, this is really a no-brainer...
Sounds awesome, but I have no idea how hard it is to implement. Maybe GraphQL doesn't support binary. Either way, I'm grateful for all that Hasura is and offers. It's way better than doing everything by hand.
Just support the multipart/form-data spec (like everyone else does) and it's a done deal. @vaishnavigvs @shahidhk can we please have an update on this? It should really not be hard to implement (support the above-mentioned multipart spec) and would help a lot of users a ton.
Hasura Storage might be useful for some developers seeing this issue. It uses S3 to store files, and uses Hasura to store file metadata and manage permissions.
I'm using it in development and will soon put it into production. Very good. |
Any decision or progress on this? |
Thank you everyone for the requests and comments on this feature. We would like to inform you that this is on our roadmap, but we do not have a timeline at present. Meanwhile, you can use the nhost-based workaround which @elitan has described above if it meets your use case.

Please continue to follow this GitHub issue. We plan to publish a detailed RFC on this issue that covers all use cases and limitations of the feature, and we welcome more detailed feedback from you once we provide those details. For now, it would be useful to understand more about the application/API design that warrants a multipart file upload, and the need for it to happen via the unified Hasura GraphQL endpoint.
Any chance to get this done with a REST endpoint? |
It is really silly that this isn't supported... I understand that it's kind of a bear to deal with, but file uploads are a core requirement for all but the simplest of applications. Hasura doesn't realize its vision of being the locus point of an infrastructure if it doesn't properly proxy them.

To shed some light, re: @rahulagarwal13's question: my application has the concept of user-uploaded files. I get that Hasura's core value prop is generating the CRUD queries/mutations, but it's kind of ridiculous that the nhost folks had to write a whole separate microservice that sits in front of Hasura to facilitate this, when all that should be required is a sidecar that implements the business logic behind Hasura.
Returning here to drop my learnings for future people running into this.

Honestly, despite my earlier insistence to the contrary, proxied file uploads to a key-value store like S3 are kind of an inflexible pattern. The workaround (which, tbh, should have been my first thought/approach) is to leverage S3 pre-signed URLs in a sidecar (via remote schemas). Here's a nice blog post about the functionality: https://fourtheorem.com/the-illustrated-guide-to-s3-pre-signed-urls/

The TLDR is that your sidecar can expose mutations that hand out pre-signed upload and download URLs. The end result is that Hasura can be relied on for auth/ACL for access to the mutations, and your sidecar can do deeper, more comprehensive edge-case checking directly (e.g. check whether the file exists before responding with a download URL).

I honestly wish Hasura shilled this pattern more in their docs, as it would probably have prevented a lot of wheel-spinning for me and others searching for a solution here.
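A rough sketch of what such a sidecar's resolvers could look like. The mutation names, bucket, and `presign` helper below are hypothetical; a real implementation would replace `presign` with an SDK call such as `s3.getSignedUrl`, which also appends a signature derived from your secret key:

```javascript
// Stand-in for an SDK presigner; a real presigned URL also carries a
// Signature parameter computed from the secret key.
function presign(method, bucket, key, expiresInSeconds) {
  const expires = Math.floor(Date.now() / 1000) + expiresInSeconds;
  return `https://${bucket}.s3.amazonaws.com/${key}?Expires=${expires}&Method=${method}`;
}

// Hypothetical remote-schema resolvers exposed through Hasura, so Hasura's
// auth/ACL gates who may request upload or download URLs at all.
const resolvers = {
  Mutation: {
    // Client PUTs the file bytes straight to this URL, bypassing Hasura.
    generateUploadUrl: (_root, { filename }, ctx) =>
      presign("PUT", "my-bucket", `${ctx.userId}/${filename}`, 600),
    // Client GETs the file from this URL after any deeper checks pass.
    generateDownloadUrl: (_root, { filename }, ctx) =>
      presign("GET", "my-bucket", `${ctx.userId}/${filename}`, 600),
  },
};
```

Keying objects by `ctx.userId` is one way to let the sidecar enforce per-user ownership on top of Hasura's permissions.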
@yourbuddyconner you're basically describing my experience too: from being angry that Hasura doesn't support uploads to actually liking it. For S3 uploads, the presigned-URL remote-schema pattern is just so much more efficient, for both clients and your server. Still, this is only valid for S3 and equivalents. There are use cases where you don't want to use S3 or other third-party blob providers, but rather consume the binary stream directly on your server. So I still think that Hasura could/should support multipart/form-data requests, although not with a high priority.
I took the base64-encoding route for now; hopefully there will be an alternative in the future (usable in actions), even if it's not in the GraphQL spec.
Hi,
I am having an issue uploading a file via Hasura using custom resolvers over a remote schema, as I want to upload the file to a folder on the server directly and not to any third party.
The file upload works perfectly fine if I use the "graphql-yoga" server that is exposed as a "remote schema", and the mutation is even added to Hasura, but it gives the following error:
Here is how the schema looks on the Hasura console:
Here is the code for 'app.js', the entry point of the project:
Here is the server.js file that is referenced in the above app.js:
Here is the schema that I am using for this:
Here is the Mutation.js:
Please let me know if anything else is required.