
Add initial unit tests and automated CI #48

Merged
merged 1 commit into from Jan 21, 2024

Conversation

@odow (Collaborator) commented Jan 20, 2024

Closes #47

@ccoffrin (Member)

Thanks for starting to set this up @odow. I presume this is a WIP, but here are my initial thoughts.

  • As much as possible, I would like to put code related to testing and validation into the test directory, or some other place that is not the root of the repo.
  • I already have this "runtest.jl" file, which I used as a first pass at regression testing. Maybe we can add CI for that? So far, I have been running it by hand on each update.
  • If we are going to add more example case files, we should pick that subset carefully and also update all cases to the latest version of PGLib (which is what I use when generating results).
  • Maintaining the "target objectives" could become a pain as PGLib gets updated; we should brainstorm whether we can come up with better alternatives (one option is sketched after this comment).

In the cli branch, which is targeted more towards building a results table, I think it would be reasonable to add more infrastructure and/or data related to running and validating runs on all of PGLib.
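On the last bullet, one low-maintenance option (a sketch only; the case names and objective values below are placeholders, not actual PGLib targets) would be to keep the reference objectives in a single table and compare with a relative tolerance, so a PGLib update only means regenerating that one table:

```julia
# Hypothetical regression check; TARGET_OBJECTIVES would be regenerated
# whenever PGLib is updated. The values shown are placeholders.
const TARGET_OBJECTIVES = Dict(
    "pglib_opf_case14_ieee" => 2.1781e3,
    "pglib_opf_case118_ieee" => 9.7214e4,
)

function check_objective(case::String, objective::Float64; rtol = 1e-4)
    return isapprox(objective, TARGET_OBJECTIVES[case]; rtol = rtol)
end
```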

@odow (Collaborator, Author) commented Jan 21, 2024

Yeah, still a WIP. I'll refactor into test.

@odow (Collaborator, Author) commented Jan 21, 2024

Is there a reason to maintain a separate cli branch?

@odow (Collaborator, Author) commented Jan 21, 2024

Just working on a test that validates that the solutions are feasible. Because of modeling differences (the same constraint may appear as l <= f(x) <= u, as the pair f(x) >= l and f(x) <= u, or as f(x) - l >= 0), we can't easily test against a canonical vector of constraint outputs.
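For concreteness, a minimal JuMP sketch of the kind of divergence meant here (the variable and bounds are illustrative, not code from any of the implementations):

```julia
using JuMP

# Three algebraically equivalent ways to state the same bound constraint.
# Each produces a differently shaped (and differently valued) constraint
# vector, so raw constraint outputs cannot be compared across layers.
model = Model()
@variable(model, x)
l, u = 0.0, 1.0
@constraint(model, l <= 2x + 1 <= u)     # interval form: one row
@constraint(model, 2x + 1 >= l)          # split form: two one-sided rows
@constraint(model, 2x + 1 <= u)
@constraint(model, (2x + 1) - l >= 0)    # shifted form: different function value
```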

@ccoffrin (Member)

> Is there a reason to maintain a separate cli branch?

Along the lines of "rosetta code", I have thus far wanted the primary focus of this repo to be "this is how you implement AC-OPF in all these modeling layers"; that is, learning how to do a, b, c in modeling layer x. The cli branch is the "extra stuff" for some command-line automation, which I expect I am the only user of. That branch also includes using HSL ma27, which is a pain to set up, right? Maybe there is a better way with HSL.jl?

So my feeling is that this is still best kept as a separate branch, but I'm open to counter-arguments.

> Just working on a test that validates that the solutions are feasible. Because of modeling differences (the same constraint may appear as l <= f(x) <= u, as the pair f(x) >= l and f(x) <= u, or as f(x) - l >= 0), we can't easily test against a canonical vector of constraint outputs.

Correct. Along the lines of the first note, I would like each modeling layer to be free to show a "preferred way" to model the problem, both mathematically and stylistically. This may lead to different ways of presenting the constraints.

However, if we really want this feature, we could write a modeling-layer-agnostic feasibility test, which takes as inputs just the canonical decision variables (p, q, v, θ) and returns a standardized vector of constraint and objective evaluations.
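A sketch of what such a check could look like (the function name, dictionary layout, and tolerance are hypothetical, not an existing API in this repo); simple bound constraints are evaluated directly from the canonical variables, and power-balance and flow-limit checks over (p, q, v, θ) would follow the same pattern:

```julia
# Hypothetical modeling-layer-agnostic feasibility check. `data` is assumed
# to be a PowerModels-style dictionary; `p`, `q`, `v` map component ids to
# solution values.
function validate_feasibility(data, p, q, v; atol = 1e-6)
    violations = String[]
    for (i, gen) in data["gen"]
        p[i] < gen["pmin"] - atol && push!(violations, "gen $i pmin")
        p[i] > gen["pmax"] + atol && push!(violations, "gen $i pmax")
        q[i] < gen["qmin"] - atol && push!(violations, "gen $i qmin")
        q[i] > gen["qmax"] + atol && push!(violations, "gen $i qmax")
    end
    for (i, bus) in data["bus"]
        v[i] < bus["vmin"] - atol && push!(violations, "bus $i vmin")
        v[i] > bus["vmax"] + atol && push!(violations, "bus $i vmax")
    end
    return violations  # empty vector => feasible for the checked constraints
end
```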

@odow (Collaborator, Author) commented Jan 21, 2024

> However, if we really want this feature, we could write a modeling-layer-agnostic feasibility test, which takes as inputs just the canonical decision variables (p, q, v, θ) and returns a standardized vector of constraint and objective evaluations.

This is what I did in the latest commit.

@odow (Collaborator, Author) commented Jan 21, 2024

I'm playing whack-a-mole with the CI, though. I don't understand what it's doing.

(resolved review comment on jump.jl)
@odow (Collaborator, Author) commented Jan 21, 2024

Tests pass locally:

Test Summary: | Pass  Total   Time
Rosetta OPF   | 5628   5628  39.0s

I didn't find any mistakes in the implementations.

(resolved review comment on test/runtests.jl)
@ccoffrin (Member)

> I didn't find any mistakes in the implementations.

I am not surprised. AC-OPF is such a finicky problem that the objective function value basically functions as a hash for the solution vector.

@odow (Collaborator, Author) commented Jan 21, 2024

Yeah, but I guess that by not testing known solutions we're not checking whether the solver returns a minimum, just primal feasibility.

@ccoffrin (Member)

Good point. Verifying that you have a KKT point would be the next check to make, but that feels like overkill to me. From a math-programming perspective, certifying local optimality is valuable, but I think most AC-OPF researchers are happy with any feasible point, so I would not want to exclude solvers that cannot certify local optimality.
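For reference, "verifying a KKT point" for min f(x) subject to g(x) ≤ 0 and h(x) = 0 means checking the standard first-order conditions at the candidate solution:

```latex
\begin{aligned}
\nabla f(x^\star) + \sum_i \lambda_i \nabla g_i(x^\star)
    + \sum_j \mu_j \nabla h_j(x^\star) &= 0 && \text{(stationarity)} \\
g(x^\star) \le 0, \quad h(x^\star) &= 0 && \text{(primal feasibility)} \\
\lambda &\ge 0 && \text{(dual feasibility)} \\
\lambda_i \, g_i(x^\star) &= 0 \;\; \forall i && \text{(complementarity)}
\end{aligned}
```

Interior-point solvers such as Ipopt already report these residuals at termination, which is part of why a separate certification step feels like overkill.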

@odow (Collaborator, Author) commented Jan 21, 2024

Okay, I think I got the CI running for the main ones. I'll look at the variants as well.

@ccoffrin (Member)

If it is easy to configure, I would make passing CI on the variants optional. But it's always good to have visibility on these things.

@odow merged commit af5b600 into main on Jan 21, 2024
1 check passed
@odow deleted the od/test branch on January 21, 2024 06:23