
Planning #23

Closed
dww100 opened this issue Oct 2, 2018 · 8 comments
dww100 (Collaborator) commented Oct 2, 2018

My idea is to set up three project boards, each with an associated milestone, based on three point releases, aiming to create a version of EasyVVUQ for dissemination and external user testing. To that end I have created three issues, for versions 0.1 (#20), 0.2 (#21) and 0.3 (#22).

The idea is that we agree on the targets for each version and then work towards creating and testing them. We should also aim to keep higher-level discussion here (e.g. which testing/CI frameworks to use).

Once we decide on a plan I'll create individual tickets for each goal and associate them with the projects/milestones as appropriate.

dww100 (Collaborator, Author) commented Oct 2, 2018

Testing
The following packages have been recommended to me:

  1. Pytest (https://docs.pytest.org/en/latest/)
  2. Behave (https://behave.readthedocs.io/en/latest/)
  3. Hypothesis (https://hypothesis.readthedocs.io/en/latest/)

Any thoughts?
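For context, here is a minimal sketch of what a pytest-style test might look like. The `Campaign` class below is a hypothetical stand-in, not EasyVVUQ's actual API; all names are illustrative only.

```python
# Minimal pytest-style sketch. The Campaign class is a stand-in,
# NOT EasyVVUQ's actual API; all names here are illustrative only.

class Campaign:
    """Toy campaign that collects named runs and their parameters."""

    def __init__(self):
        self.runs = {}

    def add_run(self, name, params):
        if name in self.runs:
            raise ValueError(f"duplicate run: {name}")
        self.runs[name] = params


# pytest auto-discovers functions named test_*; run with: pytest this_file.py
def test_add_run_stores_params():
    campaign = Campaign()
    campaign.add_run("run_1", {"mu": 0.5})
    assert campaign.runs["run_1"] == {"mu": 0.5}


def test_duplicate_run_rejected():
    # With pytest installed one would normally write
    # `with pytest.raises(ValueError):` here instead of try/except.
    campaign = Campaign()
    campaign.add_run("run_1", {})
    try:
        campaign.add_run("run_1", {})
        raise AssertionError("expected ValueError")
    except ValueError:
        pass
```

The appeal of pytest is that tests are just plain functions with `assert` statements, so there is very little boilerplate to keep up to date.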

djgroen (Contributor) commented Oct 2, 2018

@dww100 We are using Pytest with FabSim3, so using that will make it easier for VECMA members to switch development effort between the two tools.

djgroen (Contributor) commented Oct 2, 2018

@dww100 In terms of release dates, perhaps we can synchronize them with the core preliminary release dates, which occur every three months? The last one was a few days ago, and the next one would be in early January.

Would that schedule make sense? Then we can introduce EasyVVUQ as part of the next VECMAtk release, and ask current Alpha users to also perform tests on that toolkit.

dww100 (Collaborator, Author) commented Oct 2, 2018

@djgroen I think we should definitely aim to sync a version to that release schedule, but these releases should happen in the next month or so. The goal (from my end at least) is that v0.1 is the minimum viable product, with a more or less stable design for the key elements: the Campaign and the base classes for UQPs and VVPs. That should mean we have well-established 'contracts' allowing it to be plugged into other toolkit components.

djgroen (Contributor) commented Oct 3, 2018

@dww100 Would it be feasible to have v0.1 ready by October 17th, so that we can discuss and work towards integrating that part with FabSim3 during the VECMAtk meeting?

You could then aim for another release in late November, which would allow my new VECMA post doc Hamid to test it out (he starts Dec 1st), and then do a third release during the regular schedule in January?

dww100 (Collaborator, Author) commented Oct 3, 2018

@djgroen It is difficult to say. The main issue is that Robin is away for a week and two of the steps needed for v0.1 involve quite large design decisions.

That said, the basics of how we intend at least UQPs to work are available in the decoder-design branch, which is ready now but has not yet been approved/checked by Robin.

Once solutions to the issues in v0.1 are available, I think other things could happen quickly. We should make a plan in mid-December for what the January release should look like.

dww100 (Collaborator, Author) commented Oct 4, 2018

@djgroen Do you have much experience using pytest to create functional tests?

I've only ever seen it used for unit tests and integration tests. To my mind, functional and integration tests are (a) more likely to actually get written and (b) somewhat easier to understand. What I like about Hypothesis and Behave is that they seem more centred on thinking through the story of use cases, the sort of thing that makes good functional tests.
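To illustrate the style, the property-based idea behind Hypothesis can be sketched with only the stdlib. The encoder/decoder pair below is a toy invented for this sketch (not EasyVVUQ code); Hypothesis itself would replace the explicit loop with generated inputs.

```python
import random


def encode_params(params):
    """Toy encoder: serialise a parameter dict to 'key=value' lines."""
    return "\n".join(f"{k}={v}" for k, v in sorted(params.items()))


def decode_params(text):
    """Toy decoder: inverse of encode_params."""
    return dict(line.split("=", 1) for line in text.splitlines())


def check_roundtrip(trials=200):
    """Property: decoding an encoded dict recovers the original.

    Hypothesis would generate the inputs itself, via something like
    @given(st.dictionaries(st.text(), st.text())); here a plain loop
    over random dicts stands in for that machinery.
    """
    rng = random.Random(42)  # fixed seed so the check is reproducible
    for _ in range(trials):
        params = {f"p{i}": str(rng.randint(0, 99))
                  for i in range(rng.randint(1, 5))}
        assert decode_params(encode_params(params)) == params
    return True
```

The attraction is that the test expresses a user-facing story ("whatever parameters I encode, I can get back") rather than pinning down one specific input/output pair.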

djgroen (Contributor) commented Oct 4, 2018

@dww100 I know it is supported in Pytest, but I don't have direct experience with it. Vytautas Jancauskas would be the person to ask about that :).

In FabSim3 the difference between unit tests and functional tests is quite blurry, as one can be interpreted as the other in a lot of cases.

dww100 closed this as completed Oct 30, 2018