
[FEAT] Expand compilation tests #117

Open
2 tasks
aburrell opened this issue May 31, 2023 · 4 comments
Labels
enhancement (New feature or request)

Comments

@aburrell
Contributor

Is your feature request related to a problem? Please describe.
Current CI only tests compilation on Linux.

Describe the solution you'd like
Tests for (a possible workflow matrix is sketched below):

  • Latest macOS
  • Latest Windows

Describe alternatives you've considered
Not covering all recent operating systems.

Additional context
These tests will also reduce the start-up time for new users.
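A minimal sketch of what the expanded matrix could look like in the GitHub Actions YAML, assuming a CMake-style build (the workflow name, action version, and build commands are illustrative, not the project's actual configuration):

```yaml
# Illustrative workflow sketch; file name, job name, and build commands
# are assumptions and would need to match the project's real setup.
name: compile

on: [push, pull_request]

jobs:
  build:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v3
      - name: Configure
        run: cmake -B build
      - name: Build
        run: cmake --build build
```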

aburrell added the enhancement label on May 31, 2023
@aaronjridley
Contributor

There are now a lot of tests in the test directory. Some of these could be run as standard tests whenever a pull request is opened. I don't know how to do this on GitHub, though.

We would need to decide which tests are "important" and should be maintained long-term, and which were useful for testing a feature but may not be critical.

@aburrell
Contributor Author

Can you tell me how to run the tests from the command line? If so, I can add that to the YAML. The best approach would be a script that reports success or failure, so we can just run it. For now we should run all of them; we can prune the set later.

@aaronjridley
Contributor

In the tests directory there are a number of sub-directories, each containing a test. Inside each of these directories is a shell script called something like "run_test.sh"; you just have to run that. The problem is that there are no success or failure criteria at this point. All of the tests just make plots (and even assume that aetherpy is installed in a given directory), so they are not terribly useful for automated testing, although an automated test could at least check whether the code crashes or not (which is something, just not much).

So perhaps one of the students could write something that reads in the log file, compares it with a given reference log file, and reports the differences between the two.

The problem with that approach is that we would then have a large set of reference log files that all need to be updated whenever something fundamental changes in the code.
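A minimal sketch of such a check, assuming a reference log is stored alongside each test (the file names and the exact-match criterion are assumptions; numerical output would likely need a tolerance-based comparison instead):

```sh
#!/bin/bash
# Hypothetical log check: compare a test's log against a stored reference.
# The argument names and paths are placeholders, not the actual files.
ref="$1"    # path to reference log
new="$2"    # path to log produced by the test run

if diff -q "$ref" "$new" > /dev/null; then
    echo "PASS: log matches reference"
    exit 0
else
    echo "FAIL: log differs from reference"
    diff "$ref" "$new" | head -n 20
    exit 1
fi
```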

@aburrell
Contributor Author

That's still a start. We definitely need something at the top level that runs the tests in each directory, though.
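A minimal sketch of such a top-level runner, assuming each sub-directory of tests/ contains a run_test.sh that exits non-zero on failure (which the current scripts do not yet do):

```sh
#!/bin/bash
# Top-level runner sketch: run every tests/*/run_test.sh and report
# an overall pass/fail via the exit status.
status=0
for test_dir in tests/*/; do
    script="${test_dir}run_test.sh"
    if [ -f "$script" ]; then
        echo "Running ${test_dir} ..."
        (cd "$test_dir" && bash run_test.sh) || { echo "FAIL: ${test_dir}"; status=1; }
    fi
done
exit $status
```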
