Concept: listener start_suite hook isn't correct #3089
Comments
The description is pretty confusing and mixes several different topics. I'm not sure I understood it fully, but here are some comments related to it:
Could you clarify whether there was something that I missed? |
The best variant looks like this:
Fixtures (setup/teardown, start/end functions, etc.) and the collection/running process are different things, and the hooks for them must be different. |
Could you clarify why separate hooks would be needed for setups/teardowns and why existing |
Because setup/teardown processing is an independent step in the execution of tests and should not depend on the tests. |
Are you aware that |
Please look at the screenshot: the keyword fixtures are called after start_suite.
The problem is that different things are merged into one hook. That's true for both API versions 2 and 3. For example: |
Are you saying |
Hi
Recently I started to write some listeners for data-driven tests and ran into a conceptual error in the hook processing.
First of all, let me show the test lifecycle:
Obviously, we will see such a picture if we make a minimal listener and use one test.
This scheme has a problem with any suite setup processing: start_suite runs before any keyword call, so inside our listener there is no way to set variables from the suite setup of each suite.
Moreover, setup/teardown functions (Suite Setup/Suite Teardown) are not directly related to the tests and the test run itself. They are a separate stage, and they should stop all tests in the suite and skip them if something goes wrong (for example, if we can't connect to the DB or open a file, why would we continue running that suite?).
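The ordering complained about above can be demonstrated with a minimal listener API v2 sketch (hypothetical class and file names) that simply records which callbacks fire, in order. Run against a suite that has a Suite Setup, the recorded sequence shows start_suite firing before the setup keyword, so start_suite cannot see anything the setup computes:

```python
# order_listener.py -- hypothetical minimal v2 listener that records
# the order in which Robot Framework invokes its callbacks.
# Run with:  robot --listener order_listener.py tests.robot
class OrderListener:
    ROBOT_LISTENER_API_VERSION = 2

    def __init__(self):
        self.events = []

    def start_suite(self, name, attrs):
        # Fires first, before any keyword (including Suite Setup).
        self.events.append(('start_suite', name))

    def start_keyword(self, name, attrs):
        # The Suite Setup keyword arrives here only AFTER start_suite.
        self.events.append(('start_keyword', name))

    def end_keyword(self, name, attrs):
        self.events.append(('end_keyword', name))

    def end_suite(self, name, attrs):
        self.events.append(('end_suite', name))
```

Inspecting `events` after a run (or printing them from `end_suite`) makes the callback order explicit.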
Parametrization (templating)
If we look at the log file where a parameterized test was used, we see that the test did not actually become parameterized; it merely contains many keywords. And if one or more errors occur inside the parameters, the report contains only a single failure for that test, which is essentially not true.
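One way to recover the per-iteration results despite the single test status is to count keyword-level failures from a listener. A sketch, assuming listener API v2 and the standard `status` field in the `end_keyword` attributes (class name is hypothetical):

```python
# Hypothetical v2 listener that counts failing keyword "iterations"
# inside each test, since a templated test is reported as one test
# with a single PASS/FAIL status.
class IterationFailures:
    ROBOT_LISTENER_API_VERSION = 2

    def __init__(self):
        self.failed = 0

    def end_keyword(self, name, attrs):
        # Each template iteration is just a keyword call; record every
        # failing one instead of relying on the overall test status.
        if attrs.get('status') == 'FAIL':
            self.failed += 1

    def end_test(self, name, attrs):
        print('%s: %d failed iteration(s)' % (name, self.failed))
        self.failed = 0
```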
Conclusion (in my opinion)
Listener
```python
import json
from robot.api import logger

class My:
    ROBOT_LIBRARY_SCOPE = 'GLOBAL'
    ROBOT_LISTENER_API_VERSION = 2

    # Hypothetical minimal hook; the issue snippet omitted the method bodies.
    def start_suite(self, name, attrs):
        logger.console('start_suite: ' + json.dumps(attrs, default=str))
```
Testcase:
```robotframework
*** Settings ***
Documentation     Suite description
Suite Setup       Settings Current Suite

*** Test Cases ***
Test One
    [Tags]    Compare
    Count
    Count
    Count

*** Keywords ***
Settings Current Suite
    Set Reference Dir    referencedir
    Set Section    Test
```
Working screen (the screenshot shows two runs of the same test case).