
Authenticated Access using Lambda@Edge #68

wants to merge 16 commits into base: master



commented Oct 1, 2019

As discussed in #67, here is my draft support for requiring HTTP Basic authentication when fetching packages from the package repository.

The main points:

  1. The client authentication takes place in a CloudFront Lambda@Edge function (defined in s3pypi_auth/). This function also appends index.html to the request path if the client request ends with /.
  2. Since Lambda@Edge functions must be deployed in the N. Virginia region, I had to split the CloudFormation templates into 3 stacks:
    • cloudformation/s3-pypi-template.yaml creates the bucket and the bucket access policies, as well as the OriginAccessId referenced by the bucket's policy document.
    • cloudformation/s3-pypi-auth-template.yaml specifies the SAM template for the deployment of the Lambda@Edge function.
    • cloudformation/s3-pypi-cloudfront-template.yaml creates the CloudFront distribution.
  3. Short of preprocessing the code and hard-coding the parameters, I did not find a way to configure the Lambda function at deployment time (Lambda@Edge does not support environment variables). I therefore decided to keep the user store in the same bucket as the packages, but under a dedicated prefix config/. All packages are assumed to live under the prefix packages/, and the Lambda@Edge function unconditionally prepends packages/ to all package requests. Managed policies are created to control who can upload packages and who can manage client credentials.
  4. The subpackage s3pypi.admin implements a CLI for managing the entries of the user store.
  5. The subpackage s3pypi.infrastructure implements a script for the deployment of all 3 stacks; without this script, copying and pasting the output of one stack into the parameters of the next stack is too error-prone, IMHO.
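To make point 1 and the prefix handling from point 3 concrete, here is a minimal sketch of a Lambda@Edge viewer-request handler. The real function reads the user store from the bucket's config/ prefix; the `USERS` dict and all names below are illustrative stand-ins, not the PR's actual code.

```python
import base64

# Illustrative hard-coded store; the actual function would load
# credentials from the bucket's config/ prefix instead.
USERS = {"alice": "wonderland"}


def _authorized(headers):
    """Validate an HTTP Basic Authorization header from a CloudFront event."""
    auth = headers.get("authorization", [])
    if not auth:
        return False
    try:
        scheme, _, token = auth[0]["value"].partition(" ")
        if scheme.lower() != "basic":
            return False
        user, _, password = base64.b64decode(token).decode().partition(":")
    except Exception:
        return False
    return USERS.get(user) == password


def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    if not _authorized(request["headers"]):
        # Challenge the client for Basic credentials.
        return {
            "status": "401",
            "statusDescription": "Unauthorized",
            "headers": {
                "www-authenticate": [
                    {"key": "WWW-Authenticate", "value": "Basic"}
                ]
            },
        }
    uri = request["uri"]
    if uri.endswith("/"):
        uri += "index.html"          # directory request -> index page
    request["uri"] = "/packages" + uri  # unconditionally prepend the prefix
    return request
```

The 401 response carries a WWW-Authenticate header so that pip and browsers prompt for credentials instead of failing silently.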
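For point 4, an admin CLI should never store plaintext passwords in the user store. A sketch of the hashing side, using only the standard library (PBKDF2 here is my assumption; the actual s3pypi.admin code may use a different scheme):

```python
import base64
import hashlib
import os


def hash_password(password, salt=None, rounds=100_000):
    """Derive a salted PBKDF2-SHA256 hash suitable for the user store."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return {
        "salt": base64.b64encode(salt).decode(),
        "hash": base64.b64encode(digest).decode(),
        "rounds": rounds,
    }


def verify_password(password, entry):
    """Recompute the hash with the stored salt and compare."""
    salt = base64.b64decode(entry["salt"])
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, entry["rounds"])
    return base64.b64encode(digest).decode() == entry["hash"]
```

The Lambda@Edge function would then call `verify_password` against entries fetched from config/ rather than comparing plaintext.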
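The wiring in point 5 boils down to feeding one stack's outputs into the next stack's parameters. A hedged sketch of the two helpers such a script needs (the stack, output, and parameter names below are placeholders, not the templates' real keys):

```python
def stack_outputs(cfn, stack_name):
    """Return a CloudFormation stack's outputs as a plain dict."""
    stacks = cfn.describe_stacks(StackName=stack_name)["Stacks"]
    return {o["OutputKey"]: o["OutputValue"] for o in stacks[0]["Outputs"]}


def as_parameters(outputs, mapping):
    """Translate output keys of one stack into parameter keys of the next."""
    return [
        {"ParameterKey": param, "ParameterValue": outputs[out]}
        for out, param in mapping.items()
    ]


# Orchestration sketch (illustrative names; the auth stack must go to
# us-east-1 because Lambda@Edge functions deploy from N. Virginia):
#
#   cfn = boto3.client("cloudformation", region_name="us-east-1")
#   base = stack_outputs(cfn, "s3-pypi")
#   params = as_parameters(base, {"BucketDomainName": "OriginDomainName"})
#   cfn.create_stack(StackName="s3-pypi-cloudfront",
#                    TemplateBody=template, Parameters=params)
```

Automating this mapping is exactly what removes the error-prone copy-and-paste step between the three stacks.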

Issues I am aware of:

  • Obviously, tests are missing.
  • The implementation assumes Python >= 3.6; I gather you still want to support Python 2.7.
  • When I uploaded packages during my manual tests, I had to add the option --secret packages/ when I called s3pypi. Of course, it would be easy to make s3pypi check the bucket for the presence of a config file in, say, config/s3pypi.conf that, if present, defines a prefix automatically added to the package file keys.
  • On second thought, it makes no sense to keep s3pypi.infrastructure inside the s3pypi package that is installed by developers who only want to interact with the existing repository and who most likely downloaded the package from a repository as well. Therefore, the script should be extracted from s3pypi into a directory of its own.
  • Only after writing this code did I become aware of AWS CDK. I played with it a bit, but it was not evident to me that it would simplify the setup in this case. It could be that I missed the perfect usage pattern, though...