
Problem with create_presigned_url in Bitbucket pipeline using AWS S3 and Moto mocks #8116

Closed
c137santos opened this issue Sep 11, 2024 · 2 comments
Labels: question, debugging (Working with user to figure out if there is an issue)

Comments


c137santos commented Sep 11, 2024

What happens?

The test fails during the Bitbucket pipeline execution with a NoCredentialsError, even though I have mocked the AWS credentials using Moto and followed AWS's guidance for generating presigned URLs. Locally, the test works as expected, but in the pipeline, it consistently fails to locate the credentials.

What should happen?

The test should pass, as the credentials are properly mocked. The presigned URL generation should work with the mocked S3 client, just as it does in the local environment. The issue seems to be related to either the way the mocks are applied or an inconsistency in the pipeline environment, but the test logic should allow for proper mock resolution and successful URL generation.

Everything

I'm encountering an issue when running tests involving create_presigned_url in the Bitbucket pipeline. The presigned URL generation works locally, but it fails during the pipeline execution with a NoCredentialsError, even though I’ve mocked the AWS credentials and S3 client using Moto as suggested by the AWS documentation.

I followed the AWS Boto3 documentation on presigned URLs, and I’ve tried mocking everything I could related to AWS, but the issue persists in the Bitbucket pipeline.

Here’s the function that generates the presigned URL:

import logging

# `client` and `remove_bucket_name` come from module scope in helpers/s3_helper.py.

def create_presigned_url(bucket_name, object_name, expiration=3600):
    """Generate a presigned URL to share an S3 object

    :param bucket_name: string
    :param object_name: string
    :param expiration: Time in seconds for the presigned URL to remain valid
    :return: Presigned URL as string. Raises RuntimeError on failure.
    """
    object_name = remove_bucket_name(object_name)
    try:
        response = client.generate_presigned_url(
            "get_object",
            Params={"Bucket": bucket_name, "Key": object_name},
            ExpiresIn=expiration,
        )
    except Exception as e:
        logging.error(e)
        raise RuntimeError("Could not create presigned url.") from e
    return response

When I run the test in the pipeline, it throws the error shown in the traceback further below.

Here is the model:

import sqlalchemy as sa

from database import BaseModel
from helpers.s3_helper import create_presigned_url
from settings import settings


class Image(BaseModel):
    __tablename__ = "images"

    id = sa.Column(sa.Integer, primary_key=True)
    url = sa.Column(sa.String(128), nullable=False)

    def to_dict(self):
        return {
            "id": self.id,
            "url": create_presigned_url(settings.BUCKET_NAME_S3, self.url)
            if self.url
            else None,
        }

TESTS:

from http import HTTPStatus

import pytest
from httpx import AsyncClient
from moto import mock_aws


@pytest.mark.asyncio
@mock_aws
async def test_get_monitoring_by_id(
    async_client: AsyncClient,
    access_token_admin,
    db_session,
    aws,
    create_bucket_with_image,
    mock_presigned_url,
    mock_settings,
):
    url_image = create_bucket_with_image

    img = Image(url=url_image)
    db_session.add(img)
    await db_session.commit()

    response = await async_client.get(
        url="/api/v1/missions/monitoring",
        headers={"Authorization": f"Bearer {access_token_admin}"},
    )
    assert response.status_code == HTTPStatus.OK

The error is:

bucket_name = 'bucket_name_s3', object_name = 'test_image.jpg'
expiration = 3600
    def create_presigned_url(bucket_name, object_name, expiration=3600):
        """Generate a presigned URL to share an S3 object
    
        :param bucket_name: string
        :param object_name: string
        :param expiration: Time in seconds for the presigned URL to remain valid
        :return: Presigned URL as string. If error, returns None.
        """
    
        # Generate a presigned URL for the S3 object
        object_name = remove_bucket_name(object_name)
        try:
>           response = client.generate_presigned_url(
                "get_object",
                Params={"Bucket": bucket_name, "Key": object_name},
                ExpiresIn=expiration,
            )
helpers/s3_helper.py:30: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
.venv/lib/python3.10/site-packages/botocore/signers.py:712: in generate_presigned_url
    return request_signer.generate_presigned_url(
.venv/lib/python3.10/site-packages/botocore/signers.py:349: in generate_presigned_url
    self.sign(
.venv/lib/python3.10/site-packages/botocore/signers.py:197: in sign
    auth.add_auth(request)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
self = <botocore.auth.S3SigV4QueryAuth object at 0x7f1ecfca6fb0>
request = <botocore.awsrequest.AWSRequest object at 0x7f1ecfcf3580>
    def add_auth(self, request):
        if self.credentials is None:
>           raise NoCredentialsError()
E           botocore.exceptions.NoCredentialsError: Unable to locate credentials
.venv/lib/python3.10/site-packages/botocore/auth.py:423: NoCredentialsError
During handling of the above exception, another exception occurred:
async_client = <httpx.AsyncClient object at 0x7f1f105cbfa0>
access_token_admin = ...
db_session = <sqlalchemy.orm.session.AsyncSession object at 0x7f1f1c5ed330>
aws = <botocore.client.S3 object at 0x7f1ecfee30a0>
create_bucket_with_image = 's3://bucket_name_s3/test_image.jpg'
mock_presigned_url = None
mock_settings = Settings(...)

My setup and conftest:

I’m using pytest with Moto to mock the S3 bucket and presigned URL.
The AWS credentials are mocked using the following fixture:

@pytest.fixture
def aws_credentials():
    """Mocked AWS Credentials for moto."""
    os.environ["AWS_ACCESS_KEY_ID"] = "foo"
    os.environ["AWS_SECRET_ACCESS_KEY"] = "boo"
    os.environ["AWS_SECURITY_TOKEN"] = "testing"
    os.environ["AWS_SESSION_TOKEN"] = "testing"
    os.environ["aws_region"] = "us-east-1"
    os.environ["bucket_name"] = "bucket_name_s3"


@pytest.fixture
def aws(aws_credentials):
    with mock_aws():
        boto3.setup_default_session()
        user = boto3.client(service_name='s3', region_name='us-east-1',
                              aws_access_key_id='foo', aws_secret_access_key='boo')
        yield user


@pytest.fixture
def create_bucket_with_image(aws):
    bucket_name = "bucket_name_s3"
    aws.create_bucket(Bucket=bucket_name)

    image_content = b"this is a test image"
    image_key = "test_image.jpg"
    aws.put_object(Bucket=bucket_name, Key=image_key, Body=image_content)

    image_url = f"s3://{bucket_name}/{image_key}"
    return image_url

def mock_generate_presigned_url(*args, **kwargs):
    return "https://mock-presigned-url"

@pytest.fixture
def mock_presigned_url(monkeypatch):
    boto3.setup_default_session()
    user = boto3.client(service_name='s3', region_name='us-east-1',
                            aws_access_key_id='foo', aws_secret_access_key='boo')
    monkeypatch.setattr(user, 'generate_presigned_url', mock_generate_presigned_url)


@pytest.fixture
def mock_settings():
    settings.BUCKET_NAME_S3 = "bucket_name_s3"
    yield settings

I’ve already tried setup_default_session and also tried yielding boto3.client("s3", region_name="us-east-1") directly. Maybe I'm confused about how this works, but either way, it may not be a problem with the library, but rather me mixing solutions.
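One variation worth noting (sketch only; the module path is assumed from the traceback): if `helpers/s3_helper.py` builds its `client` at import time, then the `mock_presigned_url` fixture above patches a freshly created client and leaves the helper's own client untouched, so patching that exact object may behave differently:

```python
import pytest


def mock_generate_presigned_url(*args, **kwargs):
    return "https://mock-presigned-url"


@pytest.fixture
def mock_presigned_url(monkeypatch):
    # Patch the client object that s3_helper created at module load,
    # instead of creating (and patching) a brand-new client.
    import helpers.s3_helper as s3_helper  # module path assumed

    monkeypatch.setattr(
        s3_helper.client, "generate_presigned_url", mock_generate_presigned_url
    )
```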

I'm using

[[package]]
name = "moto"
version = "5.0.14"
description = ""
optional = false
python-versions = "=3.10"

[package.dependencies]
boto3 = ">=1.9.201"
botocore = ">=1.14.0"
cryptography = ">=3.3.1"
Jinja2 = ">=2.10.1"
python-dateutil = ">=2.1,<3.0.0"
requests = ">=2.5"
responses = ">=0.15.0"
werkzeug = ">=0.5,<2.2.0 || >2.2.0,<2.2.1 || >2.2.1"
xmltodict = 
@bblommers (Collaborator) commented:

Hi @c137santos, welcome to Moto! One potential problem I can think of: if the boto3 client is instantiated before the aws_credentials fixture is called, then it doesn't have access to any credentials, hence the error. That problem will not occur when running locally, because there it uses whatever credentials you have configured.

Two possible solutions:

  • Move the imports in the app/test around, so that you're absolutely sure that aws_credentials is invoked before anything else
  • Set some fake credentials as part of the pipeline run itself. I'm not familiar with BitBucket, so I don't know how - but I'm sure there's a way to configure environment variables
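For the first option, one possible shape (a sketch, assuming a standard conftest.py at the test root) is to set the fake credentials at conftest.py import time, before any application module gets a chance to build a boto3 client; a fixture body runs much later:

```python
# conftest.py
import os

# Runs when conftest.py is imported, i.e. before test modules (and the
# application code they import) are loaded.
os.environ.setdefault("AWS_ACCESS_KEY_ID", "testing")
os.environ.setdefault("AWS_SECRET_ACCESS_KEY", "testing")
os.environ.setdefault("AWS_SECURITY_TOKEN", "testing")
os.environ.setdefault("AWS_SESSION_TOKEN", "testing")
os.environ.setdefault("AWS_DEFAULT_REGION", "us-east-1")
```

Note that boto3 reads `AWS_DEFAULT_REGION`; a lowercase `aws_region` variable is only visible to your own application settings.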

@bblommers bblommers added question debugging Working with user to figure out if there is an issue labels Sep 12, 2024
@bblommers (Collaborator) commented:

Due to the lack of response I'm going to assume this is now fixed, so I'll close this. Happy to take another look if you have any other questions.
