
Support multipart file uploads through remote schema #2419

Open
AkshayCloudAnalogy opened this issue Jun 22, 2019 · 23 comments
Labels
a/api/graphql c/server Related to server k/enhancement New feature or improve an existing feature k/ideas Discuss new ideas / pre-proposals / roadmap t/gql-services

Comments

@AkshayCloudAnalogy

Hi,
I am having an issue uploading a file via Hasura using custom resolvers over a remote schema: I want to upload files directly to a folder on the server, not to any third-party service.

The file upload works perfectly fine if I use the "graphql-yoga" server that is exposed as the remote schema directly, and the mutation even shows up in Hasura, but calling it through Hasura gives the following error:

{"errors":[{"extensions":{"path":"$","code":"invalid-json"},"message":"invalid json"}]}

Here is how the schema is looking on the Hasura console:

singleUpload(file: Upload!): File!

Here is the code for 'app.js', which is the entry point of the project:

const server = require("./server");
const path = require("path");
const bodyParser = require("body-parser");

server.start(
  {
    port: process.env.PORT || 4000,
    playground: "/graphql",
    endpoint: "/graphql"
  },
  () => {
    console.log(`The server is running on port: ${server.options.port}`);
  }
);

Here is the server.js file that is referenced in the above app.js:

const GraphQLServer = require("graphql-yoga").GraphQLServer;
const Mutation = require("./src/resolvers/Mutation");

const resolvers = {
  Mutation
};

const server = new GraphQLServer({
  cors: false,
  typeDefs: "./src/schema.graphql",
  resolvers,
  context(request) {
    return {
      request
    };
  }
});
module.exports = server;

Here is the schema that I am using for this:

scalar Upload
type Mutation {
    singleUpload(file: Upload!): File!
}
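
(The File type returned by singleUpload isn't shown in the snippet above; presumably it is defined along these lines, matching the fields the resolver below returns:)

type File {
  id: ID!
  path: String!
  filename: String!
  mimetype: String!
  encoding: String!
}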

Here is the Mutation.js:

const fs = require("fs");
const createWriteStream = fs.createWriteStream;

const storeUpload = async ({ stream, filename, request }) => {
  const id = request.userId; // This will just give the current user id
  const path = `files/${id}/${filename}`;
  const dirPath = `files/${id}`;

  // Create the directory before piping: the callback form of fs.mkdir
  // raced with the stream below, and a throw inside its callback went uncaught.
  await fs.promises.mkdir(dirPath, { recursive: true }).catch(() => {
    throw new Error("File Upload Error.");
  });

  return new Promise((resolve, reject) =>
    stream
      .pipe(createWriteStream(path))
      .on("finish", () => resolve({ id, path }))
      .on("error", reject)
  );
};

const processUpload = async (upload, request) => {
  const { stream, filename, mimetype, encoding } = await upload;
  const { id, path } = await storeUpload({ stream, filename, request });
  return { id, filename, mimetype, encoding, path };
};

module.exports = {
  singleUpload: (obj, { file }, { request }) => {
    return processUpload(file, request);
  }
};

Please let me know if anything else is required.

@rikinsk-zz rikinsk-zz changed the title File Upload from Hasura using custom resolvers is not working!!! File Upload from Hasura using custom resolvers is not working Jun 26, 2019
@shahidhk shahidhk changed the title File Upload from Hasura using custom resolvers is not working Support multipart file uploads through remote schema Jun 27, 2019
@shahidhk shahidhk added c/server Related to server k/ideas Discuss new ideas / pre-proposals / roadmap labels Jun 27, 2019
@marionschleifer marionschleifer assigned dsandip and unassigned dsandip Jul 8, 2019
@rubenabix

I have the same issue

@bkstorm

bkstorm commented Jul 19, 2019

yeah, you can't upload files through remote schemas at the moment. There are some workarounds:

  • Upload files directly to cloud storage and get a URL back.
  • Use schema stitching (which is deprecated by Apollo).

@Fanarito

Fanarito commented Nov 18, 2019

As a workaround I'm base64-encoding files and sending them as strings to Hasura, which then forwards the request to a remote schema that decodes the string. This does result in a roughly 33% payload size increase, which is not ideal, but it works otherwise.
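
(A minimal sketch of the remote-schema side of that workaround; the saveFile mutation and its arguments are hypothetical names for illustration, not Fanarito's actual code:)

const fs = require("fs").promises;

const resolvers = {
  Mutation: {
    // schema: saveFile(filename: String!, contentBase64: String!): String!
    saveFile: async (_parent, { filename, contentBase64 }) => {
      const buffer = Buffer.from(contentBase64, "base64"); // decode the string payload
      const path = `files/${filename}`; // assumes the files/ directory exists
      await fs.writeFile(path, buffer);
      return path;
    }
  }
};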

@sbussard

Any updates on the ability to upload files through Hasura?
Base64 is not an option for one of my use cases

@dohomi

dohomi commented Jun 29, 2020

I'd love to see a fix for this too. I just gave it a try, but the upload fails with the error:

Uncaught (in promise) Error: Cannot read property 'map' of undefined

@vshelke

vshelke commented Dec 17, 2020

Are there any updates on this?

@revelaustris

Running into this issue as well. Any progress on multipart file uploads through remote schemas in Hasura?

@RomansBermans

Got the same issue. Any updates on this?

@testuser887

testuser887 commented May 16, 2021

If you are uploading files to AWS S3 (or Azure/GCP), there is a simple way that doesn't require launching another server to process file uploads or creating a backend handler for a Hasura action.

Basically, when you upload files to S3, it's better to get signed URLs from the backend and upload to S3 directly. As a bonus, this approach makes hosting multiple image sizes easy and painless.

The critical point is how to get the S3 signed URL for the upload.
In Node.js, you can do:

const AWS = require("aws-sdk");
const s3 = new AWS.S3({ apiVersion: "2006-03-01" });
const signedUrl = s3.getSignedUrl("putObject", {
  Bucket: "my-bucket",
  Key: "path/to/file.jpg",
  Expires: 600,
});
console.log("signedUrl", signedUrl);

A signedUrl example looks like https://my-bucket.s3.amazonaws.com/path/to/file.jpg?AWSAccessKeyId=AKISE362FGWH263SG&Expires=1621134177&Signature=oa%2FeRF36DSfgYwFdC%2BRVrs3sAnGA%3D.
Normally, you would put the above code in a handler hosted on AWS Lambda or Glitch, consumed by a Hasura action, and add some logic for authorization and perhaps inserting a row into a table.

You can see that the most important part is Signature=oa%2FeRF36DSfgYwFdC%2BRVrs3sAnGA%3D. How can we make it easier to get the Signature?

After digging into the AWS JS SDK, we can find that the signature is computed here:

return util.crypto.lib.createHmac(fn, key).update(string).digest(digest);

fn = 'sha1'
string = 'PUT\n\n\n1621135558\n/my-bucket/path/to/file.jpg'
digest = 'base64'

It's just the SHA1 HMAC of a string in a certain format. This means we can use Hasura computed fields and Postgres crypto functions to achieve the same result.

So if you have a table "files"

CREATE TABLE files (
   id SERIAL,
   created_at timestamptz,
   filename text,
   user_id integer
);

you can create a SQL function

-- HMAC() requires the pgcrypto extension
CREATE OR REPLACE FUNCTION public.file_signature(file_row files)
 RETURNS text
 LANGUAGE sql
 STABLE
AS $function$
  SELECT ENCODE( HMAC(
    'PUT' ||E'\n'||E'\n'||E'\n'||
    (cast(extract(epoch from file_row.created_at) as integer) + 600)
    ||E'\n'|| '/my-bucket/' || file_row.filename
  , 'AWS_SECRET', 'SHA1'), 'BASE64')
$function$;

Finally, follow the Hasura docs on computed fields to expose this field through GraphQL, and also add permission rules so that only the owning user can access the "signature" field.

This approach lets you avoid adding any backend code and handle all permissions (via the user_id field in the files table) in Hasura.
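
(To complete the picture, a client-side sketch of assembling the final upload URL from the computed field; the helper name, the access-key placeholder, and the epoch argument are assumptions, and the +600 must match the SQL function above:)

const buildUploadUrl = (filename, signature, createdAtEpoch) => {
  const params = new URLSearchParams({
    AWSAccessKeyId: "AKIA...", // your access key id (placeholder)
    Expires: String(createdAtEpoch + 600), // must match the +600 in the SQL function
    Signature: signature // the computed field fetched from Hasura
  });
  return `https://my-bucket.s3.amazonaws.com/${filename}?${params}`;
};

// Then upload with a plain PUT:
// fetch(buildUploadUrl("path/to/file.jpg", sig, epoch), { method: "PUT", body: file });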

@nsbruce

nsbruce commented Jan 10, 2022

I'm also interested in doing this. Is it on the roadmap or has there been any activity on this in the past two years?

@mahmoudfathy2020

I have the same issue

@maaft

maaft commented Jul 7, 2022

Same issue. God, this is really a no-brainer...

@sbussard

sbussard commented Jul 7, 2022

Sounds awesome, but I have no idea how hard it is to implement. Maybe GraphQL doesn't support binary. Either way, I'm grateful for all that Hasura is and offers. It's way better than doing everything by hand.

@maaft

maaft commented Jul 13, 2022

Maybe graphql doesn't support binary.

Just support the multipart/form-data spec (like everyone else does) and it's a done deal.

@vaishnavigvs @shahidhk can we please have an update on this? It should really not be hard to implement (supporting the above-mentioned multipart spec), and it would help a lot of users a ton.
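
(For context, the spec in question is the GraphQL multipart request specification. A client-side sketch of such a request against the singleUpload mutation from the original post; this is an illustration of the spec, not something Hasura supports today:)

const form = new FormData();
form.append(
  "operations",
  JSON.stringify({
    query: "mutation ($file: Upload!) { singleUpload(file: $file) { path } }",
    variables: { file: null } // placeholder, bound via the map below
  })
);
form.append("map", JSON.stringify({ "0": ["variables.file"] })); // part "0" -> variables.file
form.append("0", fileInput.files[0]); // the actual binary part (browser file input assumed)
fetch("/graphql", { method: "POST", body: form });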

@elitan
Contributor

elitan commented Aug 12, 2022

Hasura Storage might be useful for some developers seeing this issue. It uses S3 to store files and uses Hasura to store file metadata and manage permissions.

@luizjunior05

Hasura Storage might be useful for some developers seeing this issue. It uses S3 to store files and uses Hasura to store file metadata and manage permissions.

I'm using it in development and will soon put it into production. Very good.

@rahulagarwal13 rahulagarwal13 added the k/enhancement New feature or improve an existing feature label Oct 8, 2022
@plmercereau
Contributor

Any decision or progress on this?
Without it, there's no way of funneling a file upload API through the Hasura GraphQL endpoint.
Very impatient 🙂

@rahulagarwal13
Contributor

Thank you everyone for the requests and comments on this feature. We would like to inform you that this is on our roadmap, but we do not have a timeline at present. Meanwhile, you can use the Nhost-based workaround which @elitan described above if it meets your use case.

Please continue to follow this GitHub issue. We plan to publish a detailed RFC on this issue that covers all use cases and limitations of the feature. We welcome more detailed feedback from you once we provide those details.

For now, it would be useful to understand more about your application/API design that warrants a multipart file upload and the need for it to happen via the unified Hasura GraphQL endpoint.

@Zerebokep

Any chance to get this done with a REST endpoint?

@yourbuddyconner

yourbuddyconner commented Apr 24, 2023

It is really silly that this isn't supported...

I understand that it's kind of a bear to deal with, but file uploads are a core requirement for all but the simplest of applications. Hasura doesn't realize its vision of being the locus point of an infrastructure if it doesn't properly proxy multipart/form-data requests to a remote schema, at the very least.

To shed some light, RE: @rahulagarwal13's question:

it would be useful to understand more about your application/API design that warrants a multipart file upload and the need for it happen via the unified Hasura graphQL endpoint.

My application has the concept of Attachments, which are models in the DB, and users have specific permissions to access them (based on their JWT claims). A user may upload Attachments, and this logic is handled by a sidecar server (as Hasura doesn't support arbitrary business logic). I farm out to Auth0 for the roles and permissions, and am relying on Hasura as a unified GraphQL interface. In a normal system, where the GraphQL endpoint is offered via Express (or similar), the client would not have to upload files indirectly; they would just POST them directly to the server.

I get that Hasura's core value-prop is generating the CRUD queries/mutations; however, it's kind of ridiculous that the Nhost guys had to write a whole separate microservice that sits in front of Hasura to facilitate this, when all that should be required is a sidecar that implements the business logic behind Hasura.

@yourbuddyconner

Returning here to drop my learnings for future people running into this.

Honestly, despite my insistence to the contrary, it's kind of an inflexible pattern to do proxied file uploads to a key-value store like S3.

The work-around (which tbh should have been my first thought/approach) is to leverage S3 pre-signed URLs in a sidecar (via remote schemas).

Here's a nice blog post about the functionality: https://fourtheorem.com/the-illustrated-guide-to-s3-pre-signed-urls/
And some examples: https://github.com/lmammino/s3-presigned-urls-examples

The TLDR is that your sidecar can expose prepareUpload and prepareDownload mutations which allow the client to fetch a PUT or GET URL that authorizes it to communicate directly with S3. For most use cases this is actually going to be more efficient, as direct Client <--> S3 communication avoids a bottleneck on the server.

The end result is that Hasura can be relied on for auth/ACL for access to the mutations, and your sidecar can do deeper, more comprehensive edge-case checking directly (e.g. check whether the file exists before responding with a download URL).
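
(A rough sketch of what such a sidecar's remote-schema resolvers could look like; prepareUpload and prepareDownload are the mutation names from above, the getSignedUrl call is the same aws-sdk v2 API shown earlier in this thread, and the bucket, key layout, and context shape are assumptions:)

const AWS = require("aws-sdk");
const s3 = new AWS.S3({ apiVersion: "2006-03-01" });
const BUCKET = "my-bucket"; // placeholder

const resolvers = {
  Mutation: {
    // Hand the client a short-lived URL it can PUT the file to directly.
    prepareUpload: (_parent, { filename }, { userId }) => {
      const key = `uploads/${userId}/${filename}`;
      const url = s3.getSignedUrl("putObject", { Bucket: BUCKET, Key: key, Expires: 600 });
      return { key, url };
    },
    // Hand the client a short-lived GET URL, but only if the object exists.
    prepareDownload: async (_parent, { key }) => {
      await s3.headObject({ Bucket: BUCKET, Key: key }).promise(); // throws if missing
      const url = s3.getSignedUrl("getObject", { Bucket: BUCKET, Key: key, Expires: 600 });
      return { url };
    }
  }
};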

I honestly wish Hasura shilled this pattern more in their docs as it would probably have prevented a lot of wheel-spinning for me and others searching for a solution here.

@maaft

maaft commented Apr 26, 2023

@yourbuddyconner you're basically describing my experience as well: from being angry that Hasura doesn't support uploads to actually liking it.

For S3 uploads, the presigned-URL remote-schema pattern is just so much more efficient, for both clients and your server.

Still, this is only valid for S3 and equivalents. There are use cases where you don't want to use S3 or other third-party blob providers but rather consume the binary stream directly on your server. So I still think Hasura could/should support multipart/form-data requests, although not with a high priority.

@Zerebokep

I took the base64-encode route for now; hopefully there will be an alternative in the future (usable in actions), even if it's not in the GraphQL spec.
