Test repository test Compilebench.test throws RuntimeError #2286
That test is "community maintained". Please report the issue on their GitHub page. But wait, did you see the "[Errno 28] No space left on device" message? The test PASSed on my machine.
Yes, the log file contains that message.
That's what I'm pointing at. Maybe you have to check your system?
:) Do you know how much free space is required on the device? Depending on that, the test could be unsuitable anyway... BTW: Is there a quick way to look up the documentation of the tests?
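Since the required free space isn't documented, one quick way to see what the device has available before running the test is Python's stdlib `shutil.disk_usage`. This is only a minimal sketch: the 2 GiB threshold is an assumption for illustration, not a documented requirement of compilebench.

```python
import shutil

# Hypothetical threshold: compilebench's real requirement is not documented.
REQUIRED_FREE_BYTES = 2 * 1024 ** 3  # assume 2 GiB for illustration

def has_enough_space(path="/", required=REQUIRED_FREE_BYTES):
    """Return True if the filesystem holding `path` has `required` bytes free."""
    usage = shutil.disk_usage(path)  # named tuple: (total, used, free)
    return usage.free >= required

if __name__ == "__main__":
    print("enough space:", has_enough_space())
```

Running this on the target device before the benchmark would at least tell you whether "[Errno 28] No space left on device" is to be expected.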
Sorry, I don't.
There's no such documentation. But just blindly running all the tests from that generic/public test repository doesn't make much sense to me. They are pretty much intended to serve as references, so you can customize them for your needs or use them as examples to create your own tests. Of course, if you see something to improve, you should send a Pull Request there, but I doubt they will simply PASS out-of-the-box on every single system out there.
Having a look into the source code is totally fine for me. (Python is usually self-documenting enough to figure out what the code does without having comments or docstrings.)
You are right. But as I run the tests on an embedded device running Ubuntu, I was not sure about the relevance/suitability of all the tests. Running all tests and analysing potentially cancelled/failing ones seemed reasonable to me. (E.g. the memory on embedded Linux devices is a lot more limited than on server machines. That means I would either have to configure the max. memory consumption of the test somehow or skip it.)
Ok, cool. For the tests you want to skip, you can decorate them with a skip decorator: http://avocado-framework.readthedocs.io/en/55.0/WritingTests.html#avocado-skip-decorators. If you do that conditionally, i.e. for embedded systems, you can send a pull request with your changes so the next person running those tests on embedded devices will have them skipped as well. Here's an example: https://github.com/avocado-framework-tests/avocado-misc-tests/blob/master/generic/ras_extended.py#L50
Closed due to move of issue into avocado-misc-tests repository. |
If the test `Compilebench.test` (located in `~/avocado-misc-tests-master/perf/compilebench.py`) is executed when running the performance tests with `avocado run ~/avocado-misc-tests-master/perf` (I use avocado v55.0 from PyPI), it throws a RuntimeError. The following tests in the test suite are executed.