
RFE: test metadata (L1) setup and cleanup #825

Open
jscotka opened this issue Jul 8, 2021 · 4 comments
jscotka commented Jul 8, 2021

Hi, we've found that it would be very useful to have something like prepare, but at the test metadata (L1) level: next to the test: key I propose also being able to declare setup and cleanup to run before and after the test.

  • A first PoC should be very easy to implement (more or less the same as the test handling).
  • It could extend the behaviour of tmt to distinguish whether the preparation or the test itself failed, for any test type, not just beakerlib tests which handle this internally.
  • It could add conditions on whether to continue with execution or not, e.g. if preparation fails, whether to still run the test, or whether to call cleanup when setup failed.
  • I can imagine the behaviour could be clever: e.g. when setup is set higher in the fmf tree, the calling of setup and cleanup scripts could be optimised to avoid calling them more times than expected (this could also be configurable by some condition, whether to call them once or every time); see the sketch after this list.
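
A minimal sketch of how the proposed keys could look in fmf metadata (the setup/cleanup key names, the list form and the inheritance behaviour are only illustrations of this proposal; file and script names are made up):

# tests/server/main.fmf (hypothetical)
setup:
  - ./start_server.sh          # shared by both child tests below
cleanup:
  - ./stop_server.sh

/smoke:
  test: ./smoke.sh
/functional:
  test: ./functional.sh
  setup+:                      # extra setup appended only for this test
    - ./load_test_data.sh

Whether the shared ./start_server.sh runs once for the whole subtree or once per test is exactly the optimisation question mentioned in the last bullet above.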

User story: I want to distinguish whether the preparation of the environment for a test failed.

  • Currently it is possible to have this part inside the test itself, which is probably a bad idea.
  • Another solution is to move it to the prepare step of the plan and then simply not execute the test if the preparation fails, but some of these preparations are closer to tests than to plans.
  • A theoretical solution is also to have a plan of plans, which would solve this situation too, but it is probably too powerful for this use case. It is also missing cleanup: if more plans are expected to run on one machine, prepare will probably need a cleanup counterpart inside the plan definition, to undo steps in case something is more destructive than expected. If that is not the case and a plan of plans would just collect results into a single point, then it is not necessary. The question is what is intended by a plan of plans: integration of more tests on one environment, or just collecting and reporting results from various plans.
  • Or I can also imagine connecting tests together and adding some type specification by extending link, e.g. link: {setup-by: /path/to/setup/test, cleanup-by: /cleanup/test}. It would probably need more clever logic inside to parse and optimise the test execution order and to differentiate test types, so it seems like a very flexible but harder solution.
    • It reminds me that this could also be a way to implement a plan of plans via something like link: [{implemented-by: url//to/another/plan1}, {implemented-by: url//to/another/plan2}], where the reference could also be a url plus a ref to identify a git repository and the proper commit hash, branch or tag.

jscotka commented Jul 14, 2021

@psss a few examples for you.
Content of curl.fmf:

test: ./runtest.sh
summary: Test curl against
cleanup:
  - ./stop_server.sh

/apache:
  summary+: apache
  setup: ./setup_apache.sh
/ngnix:
  summary+: ngnix
  setup:
    - ./setup_ngnix.sh
    - ./setup_usera.sh
  cleanup+:
    - ./remove_users.sh

Pros:

  • With the same test for curl I can very easily test it against an apache or ngnix server.
  • When the setup of e.g. apache fails, it does not make sense to run the test itself, because the preparation is invalid.
  • The result may be reported as WARN or ERROR rather than FAIL, without relying on beakerlib, where this is handled internally.
  • It is a simple solution, easy to implement.
  • It allows connecting the setup with the test instead of with the preparation in the plan.

Cons:

  • It is specific just to tests.

The keys could also contain something more generic than just a command to execute.
I can imagine syntax like:

link:
  setup-by:
    url: url://github/user/repo
    ref: devel_branch
    [id|filter]: /some/path/regexp or some filter of the test

This would identify that some other test has to run before this one, and that the referenced test works as a setup test; if this setup test fails, it does not make sense to run this test. With some good defaults (no url means this repo, no ref means this branch), theoretically just the test id or some filter is necessary. One setup may request another setup (the DDT idea).
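
With those defaults a local reference could then be quite short, for example (only a sketch following the syntax proposed above; the test id is made up):

link:
  setup-by:
    id: /tests/setup/apache    # same repo, same ref, only the test id is needed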

Pros:

  • It is a generic graph implementation (with some constraints to avoid graph problems).
  • It allows very flexible referencing (potentially linking remote repositories).
  • Once solved for setup, it could be used generally, theoretically resolving any tuple of (X: and X-by:).
  • It gives a generic resolver of dependencies between tests and plans.

Cons:

  • It will need some more complex linting.
  • It must ensure by some rules that the execution order stays the same, to have repeatable output.
  • It needs a way to identify by type that some test is a setup test.

Theoretically it also solves plan of plans: in the end any plan may reference other plans, so a plan may reference a plan which in turn references another plan.

e.g. daemons-tier.fmf:

link:
  - implemented-by:
      url: git://some/git/repo/component1
      filter: tag:Tier1
  - implemented-by:
      url: git://some/git/repo/component2
      id: /plans/tier1

The last question is whether these setups would be executed inside the execute or the prepare step. My opinion is that it does not matter much.
But with the current implementation, when the prepare step fails, the tests are not executed at all, so it is probably easier to make this part of the execute step and just handle the test status according to the setup/test/cleanup parts. Execute is also probably more suitable because prepare is, I think, more useful for and tied to test environment preparation handled by plans, while execution is tied to each individual test and its setups.
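
For illustration, the status handling could then roughly follow the points raised earlier in this thread (purely a sketch, not existing tmt behaviour):

# setup fails    -> report ERROR (or WARN), skip the test script, possibly still run cleanup
# test fails     -> report FAIL, still run cleanup
# cleanup fails  -> keep the test result, add a WARN for the failed cleanup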


adiosnb commented Feb 23, 2023

This proposal may be interesting, especially if beakerlib, pytest, or any other test framework is omitted. If a test is split into four generic steps (setup, exercise, verify, and teardown), all of them are somehow implemented in test frameworks (e.g., beakerlib). However, there is no easy way to define these steps generically in a tmt test without these frameworks.

The exercise and verify steps are part of the script under the test: keyword. This proposal would add the remaining steps (setup, teardown/cleanup) to the tmt test and allow sharing and extending scripts between tmt tests.

+1 for me

The test keyword in a tmt test is a shell command, so I would also propose having shell commands in the setup and cleanup entries. Of course, an FMF ID would make for a more robust solution, but it would also complicate the implementation. And if your test is very complicated and requires references to other tests for setup and cleanup by FMF ID, you can create a tmt plan, which should already support such references.
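
A minimal sketch of that shell-command variant (the setup and cleanup key names follow this proposal; the script names are made up):

test: ./runtest.sh
setup:
  - ./install_packages.sh      # runs before the test; a failure here would be an error, not a fail
  - ./start_server.sh
cleanup: ./stop_server.sh      # intended to run after the test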

adiosnb self-assigned this Mar 21, 2023

adiosnb commented Mar 24, 2023

In the last discussion the tmt developers agreed to implement the feature. Adding some notes from the meeting:

  • Motivation
    • Share setup and cleanup
    • Support scripts from external repositories as well?
      • Would need to implement fetching remote repos
      • And also installing the possible requires
      • Recently brought up by Jiri Jaburek
    • Better than having to manually set the test script
      • test: ./setup.sh && ./test.sh && ./cleanup.sh
      • For the local use case without external deps this already works, but it’s ugly :)
  • Benefit
    • Simple way to share setup/cleanup across multiple tests
    • Special handling of the phases? E.g. failing setup would result in error
    • Inheritance for many tests sharing the same setup and cleanup
  • Possibly related to DDT (Dependency Driven Testing)


adiosnb commented Aug 10, 2023

@jscotka Considering various complications in remote linked tests, I propose splitting the solution into two PRs: #1966 with the local scripts and another with the remote tests.

What do you think about that and also about the implementation of #1966?
