
Signed urls for upload #81

Closed · etiennedupont opened this issue Nov 19, 2021 · 16 comments · Fixed by #282
Labels: accepted (Accepted for further investigation and prioritisation) · enhancement (New feature or request)

Comments

@etiennedupont

Feature request

Is your feature request related to a problem? Please describe.

At Labelflow, we developed a tool to upload images to our Supabase storage, based on a single Next.js API route. The goal is to abstract the storage method away from the client side by querying a generic upload route for any file, and to ease permission management: in the server-side function, a single service-role Supabase client performs the actual upload. We use next-auth to secure the route (and to manage authentication in the app in general).

Client-side upload looks like this:

await fetch("https://labelflow.ai/api/upload/[key-in-supabase]", {
  method: "PUT",
  body: file,
});

The server-side API route looks more or less like this (permission management omitted):

import { createClient } from "@supabase/supabase-js";
import nextConnect from "next-connect";

const apiRoute = nextConnect({});
const client = createClient(
  process.env.SUPABASE_API_URL as string,
  process.env.SUPABASE_API_KEY as string
);
const bucket = "labelflow-images";

apiRoute.put(async (req, res) => {
  const key = (req.query.id as string[]).join("/");
  // `req.file` is populated by an upload middleware (e.g. multer), omitted here
  const { file } = req;
  const { error } = await client.storage.from(bucket).upload(key, file.buffer, {
    contentType: file.mimetype,
    upsert: false,
    cacheControl: "public, max-age=31536000, immutable",
  });
  if (error) return res.status(404).end();
  return res.status(200).end();
});

export default apiRoute;

The problem is that we face a serious limitation on upload size: we deploy on Vercel, which doesn't allow serverless functions to handle requests larger than 5 MB. Since the upload request carries the image from the client to the server, we're likely to hit that limit quite often.

Describe the solution you'd like

As we don't want to manipulate Supabase clients on the client side, we think the ideal solution would be to let us upload directly to Supabase using a signed upload URL. The upload route above would then take only a key as input and return a signed URL to upload to.

Client-side upload would now be in two steps:

// Get Supabase signed URL
const { signedUrl } = await (
  await fetch("https://labelflow.ai/api/upload/[key-in-supabase]", {
    method: "GET",
  })
).json();

// Upload the file
await fetch(signedUrl, {
  method: "PUT",
  body: file,
});

And our API route would look more or less like this:

import { createClient } from "@supabase/supabase-js";
import nextConnect from "next-connect";

const apiRoute = nextConnect({});
const client = createClient(
  process.env.SUPABASE_API_URL as string,
  process.env.SUPABASE_API_KEY as string
);
const bucket = "labelflow-images";

apiRoute.get(async (req, res) => {
  const key = (req.query.id as string[]).join("/");
  const { signedURL } = await client.storage
    .from(bucket)
    .createUploadSignedUrl(key, 3600); // <= this is the missing feature

  if (signedURL) {
    res.setHeader("Content-Type", "application/json");
    return res.status(200).json({ signedURL });
  }

  return res.status(404).end();
});

export default apiRoute;

Describe alternatives you've considered

I described them in our related issue:

Additional context

We're happy to work on developing this feature at Labelflow if you think this is the best option!

@riderx

riderx commented Apr 25, 2022

I have the same issue for https://capgo.app: I allow users to upload from my CLI with an API key, so they are not logged in in the CLI.
My current solution is to split the file into 1 MB chunks and upload them in a loop, editing the file in storage, but it often fails for big files: Cap-go/CLI#12
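
Very roughly, that workaround might look like the sketch below (hypothetical: the bucket, key, and per-chunk naming are made up, and reassembling the parts on the storage side is not shown):

import { createClient } from "@supabase/supabase-js";

// Hypothetical sketch of the chunked-upload workaround described above:
// split the file into 1 MB pieces and upload each piece in a loop.
const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!);
const CHUNK_SIZE = 1024 * 1024; // 1 MB

async function uploadInChunks(bucket: string, key: string, data: Buffer) {
  for (let i = 0; i * CHUNK_SIZE < data.length; i++) {
    const chunk = data.subarray(i * CHUNK_SIZE, (i + 1) * CHUNK_SIZE);
    // Each chunk goes up as its own object: `key.part-0`, `key.part-1`, ...
    const { error } = await supabase.storage
      .from(bucket)
      .upload(`${key}.part-${i}`, chunk, { upsert: true });
    if (error) throw error; // one failed chunk aborts the whole upload
  }
}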

@fenos fenos added the accepted Accepted for further investigation and prioritisation label Aug 25, 2022
@fenos
Contributor

fenos commented Aug 25, 2022

Hello!
Apologies for the late reply.

I really like the idea of a signed URL for upload; I will add it to the backlog for discovery & prioritization.

@riderx

riderx commented Aug 26, 2022

@fenos thanks for that. As for me, I don't need the feature anymore.

I was able to do the API key check with RLS.

If you want to do it too:

First, create key_mode, the type of the API key:

CREATE TYPE "public"."key_mode" AS ENUM (
    'read',
    'write',
    'all',
    'upload'
);

Then create the table:

CREATE TABLE "public"."apikeys" (
    "id" bigint NOT NULL,
    "created_at" timestamp with time zone DEFAULT "now"(),
    "user_id" "uuid" NOT NULL,
    "key" character varying NOT NULL,
    "mode" "public"."key_mode" NOT NULL,
    "updated_at" timestamp with time zone DEFAULT "now"()
);

Then create the Postgres function:

CREATE OR REPLACE FUNCTION public.is_allowed_apikey(apikey text, keymode key_mode[])
 RETURNS boolean
 LANGUAGE plpgsql
 SECURITY DEFINER
AS $function$
BEGIN
  RETURN (SELECT EXISTS (
    SELECT 1
    FROM apikeys
    WHERE key = apikey
    AND mode = ANY(keymode)
  ));
END;
$function$;
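
As a quick sanity check, the function can be called directly (the key value here is hypothetical):

-- Returns true if 'my-secret-key' exists with mode 'all' or 'write'
SELECT is_allowed_apikey('my-secret-key', '{all,write}'::key_mode[]);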

Then add the RLS policy to the table you want to give access to:

is_allowed_apikey(((current_setting('request.headers'::text, true))::json ->> 'apikey'::text), '{all,write}'::key_mode[])
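
For context, here is a minimal sketch of how that expression might sit inside an actual policy (the table and policy names are hypothetical):

-- Hypothetical example: allow inserts on "apps" only when the request
-- carries a valid API key with mode 'all' or 'write'
CREATE POLICY "Allow apikey writes" ON "public"."apps"
FOR INSERT WITH CHECK (
  is_allowed_apikey(
    ((current_setting('request.headers'::text, true))::json ->> 'apikey'::text),
    '{all,write}'::key_mode[]
  )
);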

And in SDK v1 you can add your API key like this:

const supabase = createClient(hostSupa, supaAnon, {
  headers: {
    apikey: apikey,
  },
})

In SDK v2

const supabase = createClient(hostSupa, supaAnon, {
  global: {
    headers: {
      apikey: apikey,
    },
  },
})
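
Either way, every request from that client then carries the apikey header, which the RLS expression above reads from request.headers. A hypothetical usage (the table name is made up):

// The custom `apikey` header travels with this request, so the
// policy's is_allowed_apikey(...) check runs against it.
const { error } = await supabase
  .from("apps") // hypothetical table protected by the policy above
  .insert({ name: "example" });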

@kfields

kfields commented Aug 28, 2022

That would be very much appreciated. Thank you.

@n-glaz

n-glaz commented Sep 7, 2022

+1 for this; signed upload URLs would solve a lot of my own implementation issues around using Supabase storage with Next.js

@th-m

th-m commented Sep 29, 2022

➕ 💯
This would be great!

@chitalian

+1 would really like this

@riderx

riderx commented Nov 4, 2022

I updated my comment for people who wanted the API key system like me.

@413n

413n commented Nov 15, 2022

+1

@c3z

c3z commented Dec 19, 2022

+1

@huntedman

+1

@yoont4

yoont4 commented Jan 18, 2023

Is this still prioritized? The DB is set up in a way where we can still use middleware to handle auth, but that is not the case for storage uploads. If we aren't able to create a signed URL, we have to use RLS to control upload authorization, which doesn't work in all of our cases. This would be extremely useful in allowing some access control to live in middleware for file uploads.

@ccssmnn

ccssmnn commented Mar 3, 2023

I'm also interested in this feature. I would love to create presigned URLs for uploads to save bandwidth and avoid file size limitations, while using our own server for most of the business logic. It looks like @etiennedupont has fixed their issue by using S3 directly, unfortunately.

@c3z

c3z commented Mar 3, 2023

I can share my solution: I deployed a proxy server on fly.io to circumvent the issue.
However, it's not ideal.
I'm also still waiting for this feature.

@fenos fenos closed this as completed in #282 Mar 6, 2023
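
For anyone landing here later: a minimal sketch of the flow with the feature as it shipped in supabase-js v2 (the bucket and path names below are placeholders):

import { createClient } from "@supabase/supabase-js";

// Placeholders: substitute your own URL, keys, bucket, and path.
const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!);
const bucket = "my-bucket";
const path = "folder/image.png";

// Server side (privileged client): mint a signed upload URL.
const { data, error } = await supabase.storage
  .from(bucket)
  .createSignedUploadUrl(path);
if (error) throw error;

// Client side: upload directly to storage with the returned token;
// no service-role key ever reaches the browser.
const file = new Blob(["...file contents..."]); // placeholder file
await supabase.storage
  .from(bucket)
  .uploadToSignedUrl(data.path, data.token, file);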
@Eerkz

Eerkz commented Aug 7, 2023

(Quoting @riderx's earlier comment about the API key check with RLS, in full.)

Anyone else having trouble with the custom headers? Tried logging the request headers and my custom headers are never attached.

@softmarshmallow

softmarshmallow commented Jun 14, 2024

Why does createSignedUploadUrl not have an upsert option, while uploadToSignedUrl does?

How would I be able to create a signed URL for client uploads that update existing files?
