
Update unit tests #177

Closed
22 of 26 tasks
ehmatthes opened this issue Nov 4, 2022 · 13 comments
Labels: stability (Consistent, predictable behavior), testing

Comments

ehmatthes commented Nov 4, 2022

Many new issues have been opened after some exposure at DjangoCon. Most of these issues should have failing tests written before addressing the relevant bug. It's worth cleaning up the unit tests before adding more tests. There was a brief period where unit tests were failing; I believe that was when the sample project was not fully updated.

  • Document current unit test structure in this issue thread.
    • Sort out the issue of running a unit test, looking at the tmp project, and seeing tests pass but not seeing the expected modifications. (Not conclusive about the previous issue, but a much clearer understanding of current behavior, and grounding for troubleshooting any issues that come up with unit tests.)
  • Outline what we want to accomplish in the unit test suite.
    • Test the CLI, without touching a project.
    • What kinds of projects do we want to test against?
    • What versions of Django do we want to test against?
    • What artifacts of older versions do we want to test against? ie, see Error detecting BASE_DIR as string #175
    • How will we manage testing multiple dependency management approaches?
    • Can tests be parallelized?
  • Restructure unit tests.
    • Move platform-agnostic tests to a dedicated directory, even if one file per directory.
    • Move platform-specific tests to their own directory, even if one file per directory currently.
    • Consider moving shell scripts to a specific directory, if they don't fit cleanly into one of the above directories.
    • Make a dedicated test module for invalid CLI calls.
    • Support a cleaner syntax for testing a single platform, ie pytest fly_io or pytest --platform fly_io. (Final: pytest platforms/fly_io/)
    • Reuse the same test project for all test modules, including all the different platforms.
    • Make a quick assessment of how easily pytest-xdist could be used to start parallelizing tests. (Definitely not needed at this point; all tests pass quickly after initial setup work is done.)
  • Document unit tests on RtD. (Remaining tasks moved to List of improvements #7.)
    • Document how to run unit tests on RtD.
    • Document how to extend unit tests?
    • Use this documentation to start unit tests for fly_io.
    • Document goals of unit testing.
    • Document how the structure of the unit tests supports meeting these goals.
    • Include references to relevant parts of pytest docs.
    • List expectations for unit tests for a new platform: test invalid calls, test idempotency, test configuration.
  • Resolve a recently opened issue using unit tests appropriately, to verify that the unit test structure supports good bug squashing practice. (This will be fun, and no need to keep this issue open for this task.)

ehmatthes commented Nov 4, 2022

Current unit test structure

All tests pass. The Heroku tests alone take ~15 sec, the Platform.sh tests ~12 sec, and a full pytest run ~24 sec.

  • Document exactly what happens when running a plain pytest command.
    • Do I see multiple new sample projects?
    • Do I see heroku modifications and platformsh modifications?

What happens when running unit tests

  • pytest: Calling pytest runs each test file that starts with test_. So it runs test_platformsh_config.py and test_heroku_config.py. These are similar, so let's look at one.
  • pytest test_platformsh_config.py: We can call just one test file, ie pytest test_platformsh_config.py. When we issue this command, pytest looks for any conftest.py files in the directory. There is one, so it's run.
  • conftest.py: All that conftest.py contains is a single fixture, tmp_project(). This is a module-scoped fixture; I believe it's created once for each test module that's run, ie for a full run it would be created once for test_platformsh_config.py and once for test_heroku_config.py. (A rough sketch of a fixture like this appears after this list.)
    • The tmp_project fixture creates a tmp dir in a location that pytest can write to.
    • conftest.py then calls setup_project.sh, which:
      • Copies the sample blog project to the tmp dir.
      • Removes all dep mgt files except requirements.txt.
      • Creates a venv and installs dependencies from vendor/.
      • Adds simple_deploy to INSTALLED_APPS, just like a user would.
      • Initializes a git repository, and makes an initial commit.
    • conftest.py then calls call_sd_no_platform.sh.
      • This is a simple way to test the CLI when used without the --platform argument.
    • conftest.py then returns tmp_proj_dir; this is a pathlib.Path object.
  • test_platformsh_config.py: Finally, this file is run. There are two fixtures:
    • run_simple_deploy(): module scope; it calls simple_deploy with the appropriate platform argument. It does this by calling call_simple_deploy.sh. It's easier to do this in a shell script than within the .py file.
    • settings_text(): module scope, it reads the freshly modified settings.py file from the test project.
    • The remaining tests examine different aspects of the test project, such as changes to settings.py, and the addition of platform-specific files such as .platform.app.yaml.
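
A minimal sketch of what a module-scoped fixture along these lines might look like. The script names (setup_project.sh, call_sd_no_platform.sh) come from the notes above; the factory fixture used here and the arguments passed to the scripts are illustrative, not the project's actual call signatures.

# conftest.py - illustrative sketch only
import subprocess
import pytest

@pytest.fixture(scope="module")
def tmp_project(tmp_path_factory):
    """Copy the sample project to a tmp dir, set it up, and return its path."""
    tmp_proj_dir = tmp_path_factory.mktemp("blog_project")
    # setup_project.sh copies the sample blog project, builds a venv from
    # vendor/, adds simple_deploy to INSTALLED_APPS, and makes an initial commit.
    subprocess.run(["sh", "setup_project.sh", str(tmp_proj_dir)], check=True)
    # call_sd_no_platform.sh exercises the CLI without a --platform argument.
    subprocess.run(["sh", "call_sd_no_platform.sh", str(tmp_proj_dir)], check=True)
    return tmp_proj_dir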

After test runs

  • After the test run, the project should be available for examination. I believe there should be one project for each module that used the root conftest.py file.
  • Look at what's currently in pytest temp dir location.
pytest-of-eric % tree -L 2
.
├── pytest-10
│   ├── blog_project0
│   └── blog_projectcurrent -> /private/var/folders/md/4h9n_5l93qz76s_8sxkbnxpc0000gn/T/pytest-of-eric/pytest-10/blog_project0
├── pytest-11
│   ├── blog_project0
│   └── blog_projectcurrent -> /private/var/folders/md/4h9n_5l93qz76s_8sxkbnxpc0000gn/T/pytest-of-eric/pytest-11/blog_project0
├── pytest-9
│   ├── blog_project0
│   └── blog_projectcurrent -> /private/var/folders/md/4h9n_5l93qz76s_8sxkbnxpc0000gn/T/pytest-of-eric/pytest-9/blog_project0
└── pytest-current -> /private/var/folders/md/4h9n_5l93qz76s_8sxkbnxpc0000gn/T/pytest-of-eric/pytest-11
  • Run pytest test_platformsh_config.py, and look at what's created in temp dir location.
    • One new folder, and pytest-current points to this folder.
pytest-of-eric % tree -L 2
.
├── pytest-10
│   ├── blog_project0
│   └── blog_projectcurrent -> /private/var/folders/md/4h9n_5l93qz76s_8sxkbnxpc0000gn/T/pytest-of-eric/pytest-10/blog_project0
├── pytest-11
│   ├── blog_project0
│   └── blog_projectcurrent -> /private/var/folders/md/4h9n_5l93qz76s_8sxkbnxpc0000gn/T/pytest-of-eric/pytest-11/blog_project0
├── pytest-12
│   ├── blog_project0
│   └── blog_projectcurrent -> /private/var/folders/md/4h9n_5l93qz76s_8sxkbnxpc0000gn/T/pytest-of-eric/pytest-12/blog_project0
└── pytest-current -> /private/var/folders/md/4h9n_5l93qz76s_8sxkbnxpc0000gn/T/pytest-of-eric/pytest-12
  • Run pytest, and look at what's created in temp dir location.
    • One new directory, containing both of the newly created tmp_proj_dirs.
    • pytest-current contains all new proj dirs from the most recent test run.
pytest-of-eric % tree -L 2
.
├── pytest-11
│   ├── blog_project0
│   └── blog_projectcurrent -> /private/var/folders/md/4h9n_5l93qz76s_8sxkbnxpc0000gn/T/pytest-of-eric/pytest-11/blog_project0
├── pytest-12
│   ├── blog_project0
│   └── blog_projectcurrent -> /private/var/folders/md/4h9n_5l93qz76s_8sxkbnxpc0000gn/T/pytest-of-eric/pytest-12/blog_project0
├── pytest-13
│   ├── blog_project0
│   ├── blog_project1
│   └── blog_projectcurrent -> /private/var/folders/md/4h9n_5l93qz76s_8sxkbnxpc0000gn/T/pytest-of-eric/pytest-13/blog_project1
└── pytest-current -> /private/var/folders/md/4h9n_5l93qz76s_8sxkbnxpc0000gn/T/pytest-of-eric/pytest-13
  • Do I see heroku modifications and platformsh modifications?
    • Yes:
pytest-of-eric % ls -alh pytest-12/blog_project0
total 80
drwxr-xr-x  16 eric  staff   512B Nov  6 08:15 .
drwx------   4 eric  staff   128B Nov  6 08:15 ..
-rw-r--r--   1 eric  staff   6.0K Oct 26 17:30 .DS_Store
drwxr-xr-x  12 eric  staff   384B Nov  6 08:15 .git
-rw-r--r--   1 eric  staff    69B Feb 20  2022 .gitignore
drwxr-xr-x   3 eric  staff    96B Nov  6 08:15 .platform
-rw-r--r--   1 eric  staff   2.0K Nov  6 08:15 .platform.app.yaml
drwxr-xr-x   6 eric  staff   192B Nov  6 08:15 b_env
drwxr-xr-x   9 eric  staff   288B Nov  6 08:15 blog
drwxr-xr-x  14 eric  staff   448B Nov  6 08:15 blogs
-rw-r--r--   1 eric  staff   285B Nov  6 08:15 installed_packages.txt
-rwxr-xr-x   1 eric  staff   660B Oct 24 22:14 manage.py
-rw-r--r--   1 eric  staff   414B Nov  6 08:15 requirements.txt
drwxr-xr-x   3 eric  staff    96B Nov  6 08:15 simple_deploy_logs
-rw-r--r--   1 eric  staff    12K Oct 24 22:22 test_deployed_app_functionality.py
drwxr-xr-x  13 eric  staff   416B Oct 24 22:23 users


ehmatthes commented Nov 5, 2022

Probable tasks

  • Move the initial commit to before the addition of simple_deploy to INSTALLED_APPS, just like we expect users to do.
  • Use tmp_path_factory instead of tmpdir_factory? See the pytest docs
  • Test the CLI more efficiently than what's described above.
    • Call the CLI in all invalid ways before running a test that should modify the sample project.
  • Maybe create a test module test_invalid_cli_commands.py. Test all likely invalid commands?


ehmatthes commented Nov 6, 2022

Conclusions

  • If you run pytest and see a failure for a platform and you're not sure where to look, the simplest approach may be to re-run the unit tests targeting only that platform, so you know you're not looking at test projects from any other platform. This is probably good advice regardless of how the unit test suite evolves.
  • pytest -x, equivalent to pytest --exitfirst, is helpful when diagnosing unit test failures. This is especially so when there's an issue with setup or management of the test project, rather than how simple_deploy behaves.
  • The original conftest.py, which creates a copy of the full sample project for each platform test module, is now in platforms/. This means it's not run for any set of platform-agnostic tests, such as the invalid CLI calls tests.
  • The structure for testing invalid CLI calls is really nice, because each test can use the exact invalid call you want to guard against, not cluttered by the --unit-testing argument that end users will never include.
  • It's important to check that invalid commands not only return an error message, but also that they don't modify the project at all, not even by opening a log file.


ehmatthes commented Nov 6, 2022

Outline what we want to accomplish in the unit test suite

  • Test invalid attempts to use the CLI, without modifying a project.
    • We need a project to test against, but this is a set of tests to catch the most common ways people might unintentionally misuse the CLI, ie calling it without a platform argument, or calling it with an invalid platform argument.
    • It may be worth making a tiny sample project, with no db or anything, for testing this. The only requirement would be Django.
      • No, a minimal project with just the Django "it works" page saves no meaningful time. We're better off making the sample project once with a session scope, and then resetting it rather than recreating it for each test module.
  • What kinds of projects do we want to test against?
    • The sample blog project is the classic "simple" project we want to support deployment for. This is a project that uses a database, has user accounts, has an admin, and has some basic styling.
    • We want to test non-nested projects that you get from running django-admin startproject blog .
    • We'd also like to test nested projects that you get from running django-admin startproject blog
    • Maybe:
      • Wagtail projects?
      • DRF projects?
      • Other common django use cases?
  • What versions of Django do we want to test against?
    • For reference, see the official Supported Versions, and Jeff's Django Release Cycle.
    • I think it's fine to support just the currently supported versions. At this time (11/6/22), that would be 3.2, 4.0, and 4.1. It's enough to just test the latest point release of each of these versions.
    • It's a higher priority to figure out testing multiple dependency management systems than multiple Django versions, so I'll come back to this a little later.
    • There may be some aspects of older versions that are worth supporting. For example, if someone built a project with a version of Django that's not currently supported and has since updated the project, some artifacts such as settings may still be older; we may be able to support those. ie, see Error detecting BASE_DIR as string #175
  • How will we manage testing multiple dependency management approaches?
    • Probably keep a base project using requirements.txt, which seems to be a common format that can be used to generate all the other dependency management files. At testing time, we can generate the appropriate files for each dependency management system.
    • Could also keep all relevant files in the base project, and remove the ones not needed for the current test. This would probably be much more efficient, especially for the slower systems like Poetry and Pipenv.
    • This is taking shape:
      • sh utils/reset_project.sh -d req_txt
      • For the actual reset call, use a git tag. In setup_project.sh make a tag after the initial commit, then reset to that tag, remove any extraneous files that have been created, and add simple_deploy to INSTALLED_APPS again.
      • Then, parametrize the test to call this with ["req_txt", "poetry", "pipenv"], etc. (See the sketch at the end of this list.)
  • Can tests be parallelized?
    • Relevant SO discussion.
    • See also pytest-xdist.
    • I'm not sure how much work it will be to parallelize tests that depend on a test project. It might need module-level parallelization, or organizing tests in a way that doesn't cause problems with parallelization.
    • Parallelization overhead is probably not worthwhile at this point, but will likely be worthwhile as the test suite grows. (See below.)
  • What should the testing API look like?
    • ie, how do we specify the platform we want to test against?
    • How do we specify the dependency management system we want to test against?
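
A sketch of how the parametrization mentioned above might look. utils/reset_project.sh and its -d flag come from the notes; the test name, the working directory, and the placeholder assertion are illustrative.

# illustrative sketch of parametrizing over dependency management systems
import subprocess
import pytest

@pytest.mark.parametrize("dep_man_approach", ["req_txt", "poetry", "pipenv"])
def test_dep_man_config(tmp_project, dep_man_approach):
    # Reset the test project, then generate files for this dependency system.
    subprocess.run(["sh", "utils/reset_project.sh", "-d", dep_man_approach],
        check=True)
    # ...run simple_deploy and make dependency-specific assertions here.
    assert tmp_project.exists()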


Try pytest-xdist

  • Before restructuring tests, just quickly install pytest-xdist, then pytest -n auto.
    • Before, took about 26s to run all tests. First run with -n auto took 52s. :)
    • I believe this is an issue of it taking longer to set up parallelization than it's worth at this point. Parallelization might be more significant as the test suite grows.
    • pytest -n 2 took 34s. I believe on macOS, -n auto does not make the best decisions about how many CPUs to use. (Intel macOS.)


ehmatthes commented Nov 7, 2022

Restructure unit tests

  • Move platform-agnostic tests to a dedicated directory, even if one file per directory.
  • Move platform-specific tests to their own directory, even if one file per directory currently.
    • Under a platforms/ directory.
  • Clean up sd_root_dir = Path(__file__).parent.parent.parent.parent in test_platformsh_config.py.
    • .parents[3] :)
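
For reference, the two forms are equivalent:

from pathlib import Path

# Four levels up from the current file:
sd_root_dir = Path(__file__).parent.parent.parent.parent
# Same result, more readable (parents[0] is the immediate parent):
sd_root_dir = Path(__file__).parents[3]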


Support a cleaner syntax for testing a single platform

  • Moving all platform-specific tests to a single directory makes this easy. To run fly_io tests, now use pytest platforms/fly_io/


ehmatthes commented Nov 8, 2022

Test invalid CLI calls efficiently

  • Create a minimal Django project to test against.
  • Maybe create a test module test_invalid_cli_commands.py. Test all likely invalid commands?
    • Test call with no --platform argument.
    • Clean up diagnostics.
    • Check git status.
    • Make -a flag optional.
    • Consider including a version of this in all platform-specific modules as well? (Seed in heroku?) (later)
    • Consider moving helper methods to a file in the utils/ dir. (later)
    • Test call with an invalid --platform argument.
    • Make it work for both tests.
    • Move --unit-testing to a sed call in call_simple_deploy.sh.
    • Refactor work to run the invalid command.
  • Briefly research whether I can create a more minimal venv for testing.
  • Get rid of call_sd_no_platform.sh.
  • Include references to pytest docs about capturing output.


ehmatthes commented Nov 8, 2022

Working notes - testing invalid CLI calls

  • Run git status after invalid call, and make sure no changes?
    • May need to make a module-level conftest.py, or maybe a fixture in this module, that calls git commit before any tests in this module are run. That should give a clean status to start. Could try a call that does modify the project to see if this check actually works. Or make a change to simple_deploy.py that results in an unclean git status for an invalid command, and see if the tests fail.
  • How do we write a test where we expect the shell script to produce a CLI error message?
    • Capture output, and make assertions against captured.out and captured.err. (See the sketch after this list.)
  • I'm not having fun parsing a bash argument with spaces. Try $1 and $2 instead of OPTARGS?
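
A sketch of what such a test might look like, using pytest's capfd fixture to capture output from the shell script. The helper name and the asserted text are illustrative; a real test would assert against whatever simple_deploy actually prints for an invalid call.

def test_invalid_platform_call(tmp_project, capfd):
    # make_invalid_call() is a hypothetical helper that runs call_simple_deploy.sh
    # against the tmp project with the given (invalid) arguments.
    make_invalid_call(tmp_project, "--platform unsupported_platform_name")
    captured = capfd.readouterr()
    # Check both streams, since the error may be written to stdout or stderr.
    assert "unsupported_platform_name" in captured.out + captured.err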


ehmatthes commented Nov 9, 2022

Use this documentation to start unit tests for fly_io

Documentation notes:

  • If adding a brand new unit test, you need to add a block to the end of call_simple_deploy.sh.
  • Be aware that some platforms' tests may be better structured/refactored than others, especially pre 1.0.

Working notes:

  • Modify deploy_flyio.py to support unit tests.
  • Replace template variables before making assertions in settings test.
  • Consider a set of reference files for testing against, rather than verifying by building files from templates. This would be much easier to reason about.
    • Use for testing .dockerfile.
    • Use for testing other fly.io-specific files.
    • This may be problematic for requirements files such as requirements.txt, but probably won't be an issue with strict use of vendor/ for installations in unit tests.
    • Refactor code for comparing generated files to reference files.
    • Consider a better way of comparing two files than reading text directly; see the filecmp docs. (Sketch at the end of this list.)
    • Use a direct file comparison for the settings test.
    • Check other project files that get modified: .gitignore
    • Move this file to utils/.
    • Use this pattern in heroku tests.
      • I don't see a Procfile in blog_project0/? (The test for the Procfile was not using the run_simple_deploy fixture.)
    • Use this pattern in platform.sh tests.
  • Consider pulling tests that are the same for all platforms into a shared directory; maybe into the platforms/ dir, but not in any specific platform? Maybe in utils/? (Not worthwhile now, there are just three common one-line tests.)
  • Run through List of improvements #7 and see if this work addresses anything in the testing lists.
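
A sketch of the direct file comparison approach mentioned above. filecmp is from the standard library; the reference_files/ directory and the file paths are illustrative.

import filecmp

def test_settings_file(tmp_project):
    # Compare the freshly modified settings.py against a known-good copy.
    generated = tmp_project / "blog" / "settings.py"
    reference = "unit_tests/reference_files/settings.py"
    # shallow=False compares contents rather than just os.stat() signatures.
    assert filecmp.cmp(generated, reference, shallow=False)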


ehmatthes commented Nov 9, 2022

Reuse the same test project for all test modules

  • Establish a rough benchmark for test duration. (22 passed in 52.18s)
  • Add a git tag to the initial commit. (Already there, INITIAL_STATE. This was used in modify_allowed_hosts.sh, in unit_tests/platforms/heroku/.)
  • Write a reset_test_project() script or fixture. (Sketched at the end of this list.)
  • Change main fixture in conftest.py scope to 'session'; one platform's tests should fail, one platform's tests should pass. (Yes, fly passes and platformsh fails.)
  • Make sure simple_deploy is run for each test module.
    • For development, use pytest platforms/fly_io platforms/platform_sh -x. (13 passed in 27.33s)
  • Place reset_test_project() so it runs for all test modules, including platform agnostic tests.
  • Make sure pytest passes, and is significantly faster. (22 passed in 16.54s)
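
A sketch of the reset step, using the INITIAL_STATE tag mentioned above. Whether this ends up as a shell script or a fixture is still open, so treat the exact calls as illustrative.

import subprocess

def reset_test_project(tmp_proj_dir):
    # Return the project to the tagged initial commit and drop generated files.
    subprocess.run(["git", "reset", "--hard", "INITIAL_STATE"],
        cwd=tmp_proj_dir, check=True)
    subprocess.run(["git", "clean", "-fd"], cwd=tmp_proj_dir, check=True)
    # setup_project.sh added simple_deploy to INSTALLED_APPS after the initial
    # commit, so that step needs to be repeated here as well.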


ehmatthes commented Nov 10, 2022

Document unit tests on RtD

  • Document how to run unit tests on RtD.
  • Document goals of unit testing. (No, not needed.)
  • Include references to relevant parts of pytest docs.
  • Document how the structure of the unit tests supports meeting these goals.
    • See if I can simplify how many test functions include fixtures. ie, if every test function loads run_simple_deploy, which loads tmp_project, do they all need to load both explicitly? (Every function was using run_simple_deploy, so set that to autouse=True and remove it from all test function definitions.)
    • Set autouse=True in the platformsh and flyio tests.
    • Consider setting autouse true for tmp_project, and could remove that from all function calls. (This works well because it's a session-level scope, so it runs first, and then only provides a path to every test function in every module. If a test doesn't use it, there's no harm done.)
    • How does test_invalid_cli_commands.py work when the fixture uses tmp_project, and others use tmp_proj_dir? Try changing the name of the return value in tmp_project? (Not an issue; I was looking at helper functions, not actual test functions.)
    • Move run_simple_deploy() fixture to conftest.py, so it's not a different version in every platform's file. It's in four places now!
    • Do fixtures belong in conftest.py? Yes.
  • Document how to extend unit tests? (later, see List of improvements #7)
  • List expectations for unit tests for a new platform: test invalid calls, test idempotency, test configuration. (later, see List of improvements #7)
  • Remove old unit test docs from old_docs/.


ehmatthes commented Nov 10, 2022

Working notes - document unit tests

  • It would probably be helpful to merge all the unit test work to main, pull, and then resume documentation. This way I can test the documentation against the actual instructions I'm writing.
  • Consider setting autouse true for tmp_project, and could remove that from all function calls.
    • Doesn't seem to be working; does an autouse fixture that returns something make that something available to all functions??? Make a simple test file with its own autouse fixture to clarify this?
    • After further reading, autouse works for setup work, but doesn't automatically pass a return value to test functions. So it works for reset_test_project(), but not for tmp_project().
  • Move run_simple_deploy() fixture to conftest.py, so it's not a different version in every platform's file. It's in four places now!
    • How can I know which module is loading the fixture? (Need to specify the target platform.) See the request fixture, specifically request.path. (See the sketch at the end of these notes.)
    • How can I set a module to not use an autouse fixture? (test_invalid_commands.py shouldn't use it.) Many ways; I used a logical test based on whether a platform was identified or not. This could be modified to an "if request.module in" or something that lets us name specific modules or directories to skip.
    • Note: Higher-scoped fixtures run first, so tmp_project will always be run before fixtures like reset_test_project. source
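
A sketch of the shared fixture described in these notes. request.path is the pytest attribute mentioned above (a pathlib.Path to the requesting test module); the directory check and the call_simple_deploy() helper are illustrative.

# conftest.py - illustrative sketch only
import pytest

@pytest.fixture(scope="module", autouse=True)
def run_simple_deploy(request, tmp_project):
    # Infer the target platform from the module's location, eg
    # unit_tests/platforms/fly_io/test_flyio_config.py -> "fly_io".
    parts = request.path.parts
    if "platforms" not in parts:
        # Platform-agnostic modules (eg test_invalid_cli_commands.py) skip this.
        return
    platform = parts[parts.index("platforms") + 1]
    # call_simple_deploy() is a hypothetical helper wrapping call_simple_deploy.sh.
    call_simple_deploy(tmp_project, platform)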
