tests: Add support for the WiPy in run-tests script. #1348
Conversation
My test setup with Jenkins is running on a Raspberry Pi 2, and I only see the issue there.
This is OK, it's functional. It's unfortunate, though, that the "port" option could be confused with the serial port (which is specified by the "device" option).
Good point! What if we rename it to
Force-pushed from 125320c to ce172d4
Changed to
The --pyboard param has been replaced by --target, which defaults to 'unix'. Possible values at the moment are 'unix', 'pyboard' and 'wipy'. It is now also possible to select the baud rate of the serial device when calling the script.
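Based on that description, invoking the script might look like the following. The exact spellings of the device and baud-rate flags, and the device path, are assumptions drawn from the discussion, not confirmed from the patch itself:

```shell
# Run the suite locally against the unix port (the default target):
./run-tests

# Run remotely on a WiPy over a serial connection; the device path is
# illustrative, and 57600 matches the reduced baud rate discussed later
# in this thread. Flag names are assumptions.
./run-tests --target wipy --device /dev/ttyUSB0 --baudrate 57600
```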
The more general way to solve this problem is to automatically detect the features of the target uPy (e.g. floating point or not) and then only run the relevant tests. This is already done to some extent, where it detects whether certain modules or methods exist (using the print("SKIP") mechanism). Then there would be only 2 options to run the tests: either you run a binary executable on the local machine, or you use pyboard.py to run uPy remotely. This would keep the current behaviour, where "./run-tests" runs locally and "./run-tests --pyboard" runs remotely via pyboard.py.

With proper feature testing, ports like esp8266 and pic16bit would also gain a test suite without any extra effort (or any special cases to handle these ports). But adding feature testing takes some effort; in particular, something like the existence of stack checking is hard to detect.
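As a concrete illustration of the print("SKIP") mechanism mentioned above, here is a minimal sketch. It is not the actual run-tests code; the probed module name (ujson) and the classify helper are illustrative. A test probes for an optional feature and prints SKIP if it is missing, and the runner treats that marker in the output as "skipped" rather than "failed":

```python
# What a feature-probing test file looks like (sketch): if the optional
# module is missing, the test announces SKIP and stops immediately.
TEST_SOURCE = '''
try:
    import ujson                # probe: does this build have the feature?
except ImportError:
    print("SKIP")               # tell the runner to skip, not fail
    raise SystemExit
print(ujson.dumps([1, 2]))      # real test body runs only if probe passed
'''

def classify(output):
    # Runner side (sketch): output consisting solely of the SKIP marker
    # means the target opted out of this test.
    if output.strip() == "SKIP":
        return "skip"
    return "ran"

print(classify("SKIP\n"))    # -> skip
print(classify("[1, 2]\n"))  # -> ran
```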
Yes, it's way more generic, and it seems nice to let the test framework decide by itself what to test and what not to, but I don't think this is a good idea. Imagine that someone commits a patch that by mistake disables certain features in your port (like floating point), or breaks some other functionality for which you have a "smart" way to detect whether it should be tested. The smart mechanism decides to skip the test, and at the end of the day you keep seeing green balls in your test results. Yes, you can see at the end of the log which tests were skipped, but one of the reasons to automate testing is to not have to check the results manually. So the feature is broken, but nobody knows it until eventually somebody decides to check the logs (looking for something else) and notices that a bunch of tests are being skipped, or worse, you release your software and soon after that the bug reports come in.

With the proposed patch, to run the tests locally you can still do

Maybe it is a good idea to be able to detect the target board automatically, but is it really worth the effort? How much value does it add?
Just my 2 cents, guys: I agree that this patch doesn't scale, and I also prefer to make tests auto-skipping whenever possible. But auto-skipping isn't always possible, so manual filter lists will need to be maintained. I also understand that this patch is @danicampora trying to get a quality baseline for his product while working on a myriad of other things. So I don't think it's fair to ask him to re-do it "much better" than what pyboard had, for example.
To answer this concern of yours, however: a sudden rise in the number of skipped tests is an error too. There are even tools which try to automate detecting that, e.g. testr for Python. I hate these tools, because they try to burden a programmer with the job of a test engineer, and that's indeed a separate role, requiring full time for big projects. Hopefully we don't have such a big project, and can KISS our testing approach (evolving it is good of course, but as a gradual, easy process, not as a burden).
Yes, testing is indeed a full time job.
I fully agree, that's why I wrote that "tests should only be skipped for very good reasons". As I see it, we have categories, which are defined by the folder names (e.g. basics, float, misc, etc). We can skip by categories, which is OK, if a port doesn't support a feature set. But, within categories, the number of skipped tests should be low.
What part doesn't scale? Having to maintain the list of skipped tests manually? Then again, we should try to keep the number of skipped tests very low. Maybe we could move the list of skipped tests to a separate file, which will help make the
Yep, and also maybe a way to select which set to use. But that's not important now.
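The idea floated above, keeping the skip lists in separate files with a way to pick the set per target, could be sketched like this. The file layout, names, and line format are all hypothetical, not part of the patch:

```python
# Hypothetical format: one file per target (e.g. skiplists/wipy.txt),
# one test name per line; '#' starts a comment, blank lines are ignored.
def parse_skip_list(text):
    # Strip comments and whitespace, drop empty lines, return a set.
    return {line.split("#")[0].strip() for line in text.splitlines()} - {""}

example = """
float/float1.py     # no floating point on this build
misc/recursion.py   # needs deep stack
"""
print(sorted(parse_skip_list(example)))
# -> ['float/float1.py', 'misc/recursion.py']
```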
Let's keep it simple, merge as is (I assume you guys did your homework on that --target flag), and move on to something more interesting like #876.
If by homework you mean testing that it works as expected, yes it does ;-)
I am already there :-)
The --pyboard param has been replaced by --port, which defaults to 'unix'. Possible values at the moment are 'unix', 'pyboard' and 'wipy'. It is now also possible to select the baud rate of the serial device when calling the script.
For some reason some tests like int_big_div.py fail because pyserial loses data received from the WiPy. Running the test manually via PuTTY gives correct results, as does running it via Telnet, so I rule out any issue with the WiPy itself; the problem seems to be caused by the driver, by pyserial, or by pyboard.py. Anyhow, reducing the baud rate to 57600 makes it go away, and IMHO this is OK for now.