
Endpoint authorization with API key #32

Closed
Bunoviske opened this issue Jul 8, 2022 · 4 comments
Bunoviske commented Jul 8, 2022

Hello,

I have a multi-tenant application and I would like to control who has access to each endpoint with API keys. That is still a bit unclear to me. How can I authorize users before they consume an endpoint?

This question also extends to serving engines in general, like TorchServe. How do people normally control access to the inference APIs?

Thanks,
Bruno

@jkhenning (Member) commented:
Hi @Bunoviske ,

Common practice is to place a service that supports basic authentication (or JWT token authentication) in front of the endpoints.
Which multi-tenant application do you have? How do you generate credentials/tokens for this app?
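To make the idea concrete, here is a minimal sketch of the token check such a front layer would run before letting a request through to a serving endpoint. All names (`TENANT_TOKENS`, `authorize`, the paths) are hypothetical and not part of clearml-serving; this is only the core authorization logic, not a full proxy.

```python
import hmac
from typing import Optional

# Hypothetical registry: one bearer token per tenant endpoint, issued
# by the application when the endpoint is deployed.
TENANT_TOKENS = {
    "/serve/tenant-a": "token-a-secret",
    "/serve/tenant-b": "token-b-secret",
}


def authorize(path: str, auth_header: Optional[str]) -> bool:
    """Return True if the request carries the token registered for `path`."""
    expected = TENANT_TOKENS.get(path)
    if expected is None or not auth_header:
        return False
    scheme, _, token = auth_header.partition(" ")
    if scheme != "Bearer":
        return False
    # Constant-time comparison to avoid leaking the token via timing.
    return hmac.compare_digest(token, expected)
```

A reverse proxy (or a thin WSGI/ASGI middleware) would call something like `authorize()` on every request and return 401/403 before the call ever reaches the inference API.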

@Bunoviske (Author) commented:

Hello @jkhenning,

In my application, every tenant has an endpoint and every endpoint has a different authentication token. My application generates the token for each endpoint when it is deployed. How can I implement this with clearml-serving? Any ideas?

Besides that, can you elaborate on the common practice you mentioned? Or do you have some links where I can dig deeper into the topic?

Thank you!

@jkhenning (Member) commented:

Hi @Bunoviske,

If you already have a way to generate tokens (and a secret used to generate them), you would normally set up a layer in front of your endpoints (or the serving endpoints) that can parse any token provided with the call and decide whether the call is allowed to reach the endpoint (or redirect it to the appropriate endpoint). Common tools used for that are nginx (see here and here, for example) or Envoy (see here, for example).
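The "secret used to generate them" part can be sketched with a plain HMAC scheme: the application signs the tenant name with a shared secret at deploy time, and the front layer verifies the signature on each call. The names (`issue_token`, `verify_token`, `SECRET`) are illustrative only; a production setup would more likely use a standard JWT library with expiry and claims.

```python
import base64
import hashlib
import hmac
from typing import Optional

# Shared secret known only to the token issuer and the auth layer
# (illustrative value; load from a secure store in practice).
SECRET = b"replace-with-a-real-secret"


def issue_token(tenant: str) -> str:
    """Sign the tenant name with the shared secret: '<tenant>.<signature>'."""
    sig = hmac.new(SECRET, tenant.encode(), hashlib.sha256).digest()
    return f"{tenant}.{base64.urlsafe_b64encode(sig).decode()}"


def verify_token(token: str) -> Optional[str]:
    """Return the tenant name if the signature checks out, else None."""
    tenant, _, sig_b64 = token.partition(".")
    expected = hmac.new(SECRET, tenant.encode(), hashlib.sha256).digest()
    try:
        given = base64.urlsafe_b64decode(sig_b64.encode())
    except Exception:
        return None
    return tenant if hmac.compare_digest(given, expected) else None
```

The auth layer would call `verify_token()` on the incoming token and route the request to the endpoint belonging to the recovered tenant, rejecting anything that fails verification.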

@Bunoviske (Author) commented:

Thank you, that is very helpful. Maybe you could also consider this as a feature for ClearML; see the BentoML issues "BentoML secure endpoint" and "Integrate BentoML server with WSGI" for similar requests.

For me, it would be much easier if I could run something like clearml-serving --token "jsnkjsagn". Let me know your thoughts on that :)
