Convert manual integration tests to pytest tests #148
Why these changes are being introduced:

Previously, the README provided some instructions on how to manually test the deployed webhook handling lambda. However, it was somewhat difficult to set the required environment variables and ensure proper artifacts were present in S3.

How this addresses that need:

A new pytest marker `integration` was created that identifies some tests as "integration" tests, which can be run separately. These tests rely on fixtures that allow the calling environment to utilize AWS credentials set on the developer machine to make calls to AWS, retrieve environment variables, and check the results of actions performed. These fixtures also upload files to S3 for PPOD and TIMDEX, such that those StepFunction invocations will have something to work with. This allows running `make test-integration` locally to perform these tests without the manual work previously required.

Side effects of this change:

* None, except some testing fixtures were shuffled around to support the more nuanced handling of environment variables.

Relevant ticket(s): #147

Closes #147
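The marker workflow described above might be sketched as follows. This is a minimal, hypothetical `conftest.py` fragment (the test name and marker description are illustrative, not the project's actual code); tests carrying the marker can then be selected with `pytest -m integration`, which is presumably roughly what `make test-integration` wraps.

```python
import pytest


def pytest_configure(config):
    # Register the custom marker so pytest does not warn about it.
    config.addinivalue_line(
        "markers", "integration: tests that call deployed AWS resources"
    )


@pytest.mark.integration
def test_webhook_lambda_responds():
    # Placeholder assertion standing in for a real call against the
    # deployed webhook handling lambda.
    assert True
```

Running `pytest -m integration` collects only the marked tests; `pytest -m "not integration"` excludes them.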
Pull Request Test Coverage Report for Build 6085878947
💛 - Coveralls
Works as expected, a few comments on structure and optional suggestions
tests/conftest.py (Outdated)

```python
@pytest.fixture
def _set_integration_tests_env_vars() -> None:
```
While the comments help, this is a bit involved and long, so you might consider breaking it up into smaller fixtures that are then called by this fixture, for readability.
Thanks, that's a good suggestion. This prompted utilizing a context manager to temporarily reset the environment for each integration test, which guarantees it's torn down correctly as well.

Also broke the lambda config and SSM retrievals into their own functions. Hopefully you can look at this fixture now (changes coming in a new commit) and see that it:
- uses a context manager to temporarily reset the environment
- calls some functions that get data from AWS and set it in the environment
- releases all those changes when the test completes
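The temporary-environment pattern described above might look roughly like this (a minimal sketch with assumed names, not the project's actual fixture): the `finally` block guarantees the original environment is restored even if the test raises.

```python
import os
from contextlib import contextmanager


@contextmanager
def temporary_env(overrides):
    """Apply env var overrides for the duration of the block, then restore."""
    original = dict(os.environ)
    try:
        os.environ.update(overrides)
        yield
    finally:
        # Restore the environment exactly as it was, removing any
        # variables that were added inside the block.
        os.environ.clear()
        os.environ.update(original)
```

A fixture can then enter this context manager with values fetched from AWS (lambda config, parameter store, etc.) and rely on teardown happening automatically when the test completes.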
tests/conftest.py (Outdated)

```python
"""
s3 = boto3.client("s3")

def check_and_upload_file(bucket, key):
```
I'm not sure of the best practice on something like this, but this design pattern feels weird (defining a function inside a fixture). It seems possible to just stick this at the top:

```python
# Specify your bucket and key
fixtures = [
    (
        "dev-sftp-shared",
        "exlibris/pod/POD_ALMA_EXPORT_20230815_220844[016]_new.tar.gz",
    ),
    (
        "dev-sftp-shared",
        "exlibris/timdex/TIMDEX_ALMA_EXPORT_DAILY_20230815_220844[016]_new.tar.gz",
    ),
]
for bucket, key in fixtures:
    ...
```
But I don't have enough experience with this type of integration test to have much confidence in commenting on this.
Good call! Much easier to call via a loop, and can tighten up that S3 file check/upload code as well.
tests/conftest.py (Outdated)

```python
https://docs.pytest.org/en/7.1.x/reference/reference.html#pytest.hookspec.pytest_collection_modifyitems
"""
# skip integration tests if WORKSPACE is not 'dev'
```
I'd move this into the docstring; I like this function.
Normally I'd agree, but this hook is a built-in function for pytest, and is called just by virtue of being named `pytest_collection_modifyitems`. Therefore, if we want to add other logic that should fire during this hook, it would also get added here.

But now that you mention it... this behavior could be put into another function that this hook calls. Then, that function's docstring can explain what it does, and future logic / functions could be added to this hook if needed.
Thanks for flagging!
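The delegation pattern discussed above might be sketched like this (helper name and skip reason are illustrative): the built-in hook stays thin, and the named helper carries the docstring explaining the skipping behavior.

```python
import os

import pytest


def skip_integration_tests_outside_dev(items):
    """Skip tests marked 'integration' unless WORKSPACE is 'dev'."""
    if os.getenv("WORKSPACE") == "dev":
        return
    skip_marker = pytest.mark.skip(reason="integration tests require WORKSPACE=dev")
    for item in items:
        if "integration" in item.keywords:
            item.add_marker(skip_marker)


def pytest_collection_modifyitems(config, items):
    # Future collection-time logic can be added here as additional
    # helper calls, each documenting its own behavior.
    skip_integration_tests_outside_dev(items)
```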
Looks good, easier to parse now! I'd encourage @jonavellecuerdo to run these tests as well to surface any other questions as we start down the path of these types of integration tests
Agreed. I'm still not convinced this pattern is a good one, but it feels like it moves in the right direction of converting those manual tests into something programmatic.
```diff
- #### GET request example
+ #### Steps to run integration tests
```
Can we use numbering to enumerate these steps? I think it would help with the readability! :)
@ghukill I added some minor comments, but overall, I think this looks great! It is a significant improvement from the manual tests that preceded these updates. I don't have suggestions for improvement but will continue to reflect on these changes. Awesome work. 🚀
What does this PR do?
This PR adds some pytest integration tests to replace the need for running the manual tests previously defined in the application README.
As noted in the updated README, these new tests have some limitations:

- Limited to the `Dev1` environment
- `AWSAdministratorAccess` role credentials must be set on the developer machine
- `WORKSPACE=dev` must be set, but no other environment variables are required; these are all retrieved from the deployed context

A new pytest marker `integration` has been created that identifies these tests as "integration" tests, which can be run separately. These tests rely on some fixtures that allow the calling environment to utilize AWS credentials set on the developer machine to make `boto3` calls to retrieve actual, deployed environment variables, and check the results of actions performed. These fixtures also upload files to S3 for PPOD and TIMDEX, such that those StepFunction invocations will have something to work with.

Helpful background context
Previously, the README provided some instructions on how to manually test the deployed webhook handling lambda. However, it was somewhat difficult to set the required environment variables and ensure proper artifacts were present in S3, making these tests potentially cumbersome to run.
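Retrieving the deployed configuration automatically, as the new fixtures do, might look roughly like this hypothetical sketch (the function name and injectable `client` parameter are assumptions, not the project's actual code; `client` is expected to be a `boto3.client("lambda")`):

```python
def get_deployed_env_vars(client, function_name):
    """Return the environment variables of a deployed lambda function.

    Uses the Lambda GetFunctionConfiguration API, whose response includes
    an Environment.Variables mapping.
    """
    config = client.get_function_configuration(FunctionName=function_name)
    return config["Environment"]["Variables"]
```

A fixture can then apply these values to the local environment for the duration of a test instead of requiring the developer to set each variable by hand.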
How can a reviewer manually see the effects of these changes?
1. Set `WORKSPACE=dev` either as a command line environment variable or via the `.env` file.
2. Run `make test-integration`.
3. Observe that tests pass and step functions were invoked.
Includes new or updated dependencies?
NO
What are the relevant tickets?
Developer
Code Reviewer