
Use GitHub Actions to run tests #49

Conversation

Abdullah-Majid
Contributor

  • add .github/workflows/run-test.yml
  • define config in yml file and run tests
  • remove .travis.yml
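The PR description lists the workflow file but not its contents. A minimal sketch of what a `.github/workflows/run-test.yml` along these lines might look like (the version matrix, dependency list, and step names here are assumptions, not the PR's actual file):

```yaml
name: Run tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # Assumed version set; setup-python only supports
        # a limited range of released versions.
        python-version: ["3.8", "3.9", "3.10"]
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: pip install numpy pytest
      - name: Run tests
        run: pytest
```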

@pberkes
Owner

pberkes commented Oct 13, 2022

Thank you for your contribution!

The workflow is failing at the moment: https://github.com/Abdullah-Majid/big_O/actions/runs/3237399647/workflow

@Abdullah-Majid
Contributor Author

> Thank you for your contribution!
>
> The workflow is failing at the moment: https://github.com/Abdullah-Majid/big_O/actions/runs/3237399647/workflow

Got a working version, but I think the number of Python versions we can use is limited to a select few. Apologies for all the commits!

@pberkes
Owner

pberkes commented Oct 15, 2022

After the change is made, it would be great to squash all commits into one, since the changes are limited.

@pberkes
Owner

pberkes commented Oct 19, 2022

Looking good, could you please squash the 11 commits down to 1?

@Abdullah-Majid
Contributor Author

> Looking good, could you please squash the 11 commits down to 1?

Yep, will do after work today. Apologies, I've had a busy past couple of weeks; I'll start work on the README issue too.

@pberkes
Owner

pberkes commented Oct 19, 2022

No worries, this is open source after all.

@Abdullah-Majid
Contributor Author

Abdullah-Majid commented Oct 19, 2022

Python 3.10 seems to be failing in the pipeline. I rebased off the latest master, so I'm not sure why only 2/3 jobs are passing:
https://github.com/Abdullah-Majid/big_O/actions/runs/3284266596/jobs/5410032754

@pberkes
Owner

pberkes commented Oct 28, 2022

Maybe some optimizations in 3.10 make the function np.sort run too fast to reliably measure its complexity. As the comment in the test says, "Numpy sorts are fast enough that they are very close to linear".

I suggest adding a new dummy linearithmic function in test_big_o.py (below the other dummy functions).

def dummy_linearithmic_function(n):
    # Dummy operation with linearithmic complexity.

    # Constant component of linearithmic function
    dummy_constant_function(n)

    x = 0
    log_n = int(np.log(n))
    for i in range(n):
        for j in range(log_n):
            for k in range(20):
                x += 1
    return x // 20
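As a quick sanity check on the helper above: the inner loops add exactly n × log_n × 20 to x, so the function returns n × int(ln n), which grows linearithmically. A standalone sketch (using math.log instead of np.log so it needs no numpy, but otherwise the same structure):

```python
import math

def dummy_linearithmic_function(n):
    # Same shape as the proposed test helper: an O(n log n)
    # busy loop whose result depends only on n.
    x = 0
    log_n = int(math.log(n))
    for i in range(n):
        for j in range(log_n):
            for k in range(20):
                x += 1
    return x // 20

# The result is n * int(ln(n)):
print(dummy_linearithmic_function(1000))  # 1000 * int(6.907...) == 6000
```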

I get reliable tests on 3.10 with this modified version of test_big_o:

    def test_big_o(self):
        # Each test case is a tuple
        # (function_to_evaluate, expected_complexity_class, range_for_n)
        desired = [
            (dummy_constant_function, compl.Constant, (1000, 10000)),
            (dummy_linear_function, compl.Linear, (100, 5000)),
            (dummy_quadratic_function, compl.Quadratic, (1, 100)),
            (dummy_linearithmic_function, compl.Linearithmic, (10, 5000)),
        ]
        for func, class_, n_range in desired:
            res_class, fitted = big_o.big_o(
                func, datagen.n_,
                min_n=n_range[0],
                max_n=n_range[1],
                n_measures=25,
                n_repeats=1,
                n_timings=10,
                return_raw_data=True)

            residuals = fitted[res_class]

            if residuals > 5e-4:
                if isinstance(res_class, class_):
                    err_msg = "(but test would have passed)"
                else:
                    err_msg = "(and test would have failed)"

                # Residual value is too high
                # This is likely caused by the CPU being too noisy with other processes
                # that is preventing clean timing results.
                self.fail(
                    "Complexity fit residual ({:f}) is too high to be reliable {}"
                    .format(residuals, err_msg))

            sol_class, sol_residuals = next(
                (complexity, residuals) for complexity, residuals in fitted.items()
                if isinstance(complexity, class_))

            self.assertIsInstance(res_class, class_,
                msg = "Best matched complexity is {} (r={:f}) when {} (r={:f}) was expected"
                    .format(res_class, residuals, sol_class, sol_residuals))

Would you mind doing these changes? Thanks!

@pberkes
Owner

pberkes commented Mar 3, 2023

Closing in favor of #56

@pberkes pberkes closed this Mar 3, 2023