Motivation

...so that I can ensure our software product is accurate and robust.
Additional Details
Acceptance Criteria
Given a feature request or bug identified in the registry-mgr
When I perform an update to the code and merge a pull request
Then I expect continuous integration to execute with regression tests to catch if the fix breaks an assumption.

Given a feature request or bug identified in harvest
When I perform an update to the code and merge a pull request
Then I expect continuous integration to execute with regression tests to catch if the fix breaks an assumption.
Engineering Details
This does not need to include a huge swath of regression tests, but it should at least include the base infrastructure and orchestration necessary to add regression tests over time (a sketch of such a harness follows the list below).

- Here are some details on how to spin up the Docker container: https://github.com/NASA-PDS/pds-registry-app#docker
- There is also another Docker container here: https://github.com/NASA-PDS/registry-api-service/#docker
- There are also a couple of test scripts generated by @al-niessner for the API: https://github.com/NASA-PDS/registry-api-service/tree/master/verify
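As a starting point, here is a minimal sketch of that base infrastructure, assuming a pytest suite pointed at a running registry API container. The endpoint path, the response shape, and the `REGISTRY_API_URL` variable are placeholders for illustration, not the project's actual API contract:

```python
# Minimal regression-test scaffold. Assumes a registry API container is
# already running; endpoint path and response shape are hypothetical.
# Run with: pytest test_regression.py
import os

import requests

BASE_URL = os.environ.get("REGISTRY_API_URL", "http://localhost:8080")


def test_api_is_up():
    # Smoke test: make sure the service answers before asserting anything deeper.
    response = requests.get(f"{BASE_URL}/products", timeout=10)
    assert response.status_code == 200


def test_products_listing_keeps_its_shape():
    # Regression guard: a later fix must not silently change the response contract.
    body = requests.get(f"{BASE_URL}/products", timeout=10).json()
    assert "data" in body, "expected a 'data' field in the products listing"
```

The value here is the orchestration, not the two assertions; once CI can spin up the containers and run an empty suite like this, new regression tests can be added one file at a time.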
Since you stuck my name on this: the scripts I developed are, in my case, the acceptance criteria translated from the ticket into Python.
One thing that makes a ticket take much longer to close is that the acceptance criteria drift once the ticket enters review. A case in point is pds-api 54: the changes matched the acceptance criteria of the ticket, but the reviewer applied further tests beyond what was in the acceptance criteria and, in doing so, found some extra bugs. From a process perspective, should those items have been in the acceptance criteria from the start, or should they have generated another ticket, with fixing the new bugs as its acceptance criteria? If the desire is to limit the lifetime of any ticket, then the criteria need to be fixed, and new criteria should generate a new ticket. Otherwise, expect estimates to be way off, because the acceptance criteria can change after the estimate has been made.
If acceptance criteria are encoded into tests (before or after a ticket is opened), then they can be used to ensure that the functionality persists -- aka, regression testing.
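For example, this ticket's own harvest criterion translates almost mechanically into a test. A sketch under stated assumptions -- the `harvest -c <config>` invocation and the sample bundle path are stand-ins for whatever the real harness provides:

```python
# One acceptance criterion from this ticket, encoded as a pytest test so CI
# can replay it on every merge. CLI flags and paths are hypothetical.
import subprocess


def run_harvest(config_path: str) -> subprocess.CompletedProcess:
    # "When I perform an update to the code and merge a pull request":
    # CI would invoke the freshly built harvest tool against a known bundle.
    return subprocess.run(
        ["harvest", "-c", config_path],
        capture_output=True,
        text=True,
    )


def test_harvest_of_known_bundle_still_succeeds():
    # "Then I expect continuous integration to execute with regression tests
    # to catch if the fix breaks an assumption": the known-good sample bundle
    # (a hypothetical path) must still ingest without error.
    result = run_harvest("tests/data/sample-bundle/harvest.cfg")
    assert result.returncode == 0, result.stderr
```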
I have an updated version for the latest version of the API, but it also needs to be updated depending on which test dataset we have loaded (I am using the one loaded in Al's containers).

I don't know whether this should be maintained in registry-api-service, as it is now, or rather in pds-registry-app. I would lean toward the latter, since the tests only make sense if the data is loaded into Elasticsearch.
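That data dependency could at least be made explicit in the suite itself. A minimal sketch, assuming a session-wide pytest fixture that skips everything when the expected dataset is not loaded; the index name, document count, and `ELASTICSEARCH_URL` variable are assumptions for illustration:

```python
# Guard for the data dependency described above: skip the whole suite unless
# the expected test dataset is actually present in Elasticsearch.
import os

import pytest
import requests

ES_URL = os.environ.get("ELASTICSEARCH_URL", "http://localhost:9200")
EXPECTED_INDEX = "registry"   # hypothetical index name
EXPECTED_MIN_DOCS = 1         # whatever the loaded dataset guarantees


@pytest.fixture(scope="session", autouse=True)
def require_test_dataset():
    # _count is a standard Elasticsearch endpoint; if the index is missing
    # or empty, regression results would be meaningless, so skip instead.
    try:
        response = requests.get(f"{ES_URL}/{EXPECTED_INDEX}/_count", timeout=5)
    except requests.ConnectionError:
        pytest.skip("Elasticsearch is not reachable; test data not loaded")
    if response.status_code != 200 or response.json().get("count", 0) < EXPECTED_MIN_DOCS:
        pytest.skip("expected test dataset is not loaded in Elasticsearch")
```

Keeping a guard like this next to the tests would also make the move to pds-registry-app cleaner, since the suite documents exactly which loaded data it depends on.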