
tests: Add support for the WiPy in run-tests script. #1348

Closed · wants to merge 1 commit

Conversation

@danicampora (Member)

The --pyboard param has been replaced by --port, which defaults to 'unix'. Possible values at the moment are 'unix', 'pyboard' and 'wipy'. It is now also possible to select the baud rate of the serial device when calling the script.

For some reason, some tests like int_big_div.py fail because pyserial loses data received from the WiPy. Running the test manually via PuTTY gives correct results, and so does running it via Telnet, so I rule out any issue with the WiPy itself; the problem therefore seems to be caused by the driver, by pyserial, or by pyboard.py. Anyhow, reducing the baud rate to 57600 makes it go away, and IMHO this is OK for now.
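
For illustration, a minimal pyserial sketch of the workaround described above; the device path and the test payload are assumptions for the example, not code from this PR:

```python
import serial  # pyserial

# Open the WiPy's serial adapter at the reduced baud rate; at higher
# speeds the driver/pyserial combination was observed to drop bytes.
conn = serial.Serial("/dev/ttyUSB0", baudrate=57600, timeout=1)
conn.write(b"print(2**64 // 3)\r\n")  # a big-int operation, as in int_big_div.py
print(conn.read(256))                 # with timeout=1 this returns whatever arrived
conn.close()
```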

@danicampora (Member Author)

My test setup with Jenkins is running on a Raspberry Pi 2, and I only see the issue there. The problem appears to be caused by the FTDI adapter drivers that ship with Linux. FTDI itself provides other drivers that are supposed to work better, but those are meant for the original Raspberry Pi; I'm not sure whether they will work on the newer model. I'll try it...

@dpgeorge (Member)

This is OK, it's functional. It's unfortunate, though, that the "port" option could be confused with the serial port (which is specified by the "device" option).

@danicampora (Member Author)

Good point! What if we rename it to micropy_port or similar?

@dpgeorge (Member)

micropy_port is too long; maybe --target? We could instead use --wipy and keep --pyboard, but that doesn't scale well to other ports.

@danicampora (Member Author)

--target sounds good :-)

@danicampora force-pushed the wipy-tests branch 2 times, most recently from 125320c to ce172d4 on June 24, 2015.
@danicampora (Member Author)

Changed to --target ;-)

The --pyboard param has been replaced by --target, which defaults to 'unix'. Possible values at the moment are 'unix', 'pyboard' and 'wipy'. It is now also possible to select the baud rate of the serial device when calling the script.
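
A rough argparse sketch of the options the commit message describes; the exact names, defaults, and structure in the actual run-tests script may differ:

```python
import argparse

cmd_parser = argparse.ArgumentParser(description="Run MicroPython tests.")
cmd_parser.add_argument("--target", default="unix",
                        choices=["unix", "pyboard", "wipy"],
                        help="which MicroPython port to test")
cmd_parser.add_argument("--device", default="/dev/ttyACM0",
                        help="serial device of the board (default is an assumption)")
cmd_parser.add_argument("--baudrate", type=int, default=115200,
                        help="baud rate of the serial device")
args = cmd_parser.parse_args()

# Example invocations:
#   ./run-tests                                  -> local unix executable
#   ./run-tests --target pyboard                 -> remote, via pyboard.py
#   ./run-tests --target wipy --baudrate 57600   -> WiPy at the safe baud rate
if args.target == "unix":
    pass  # run each test against the local micropython binary
else:
    pass  # drive the board over serial via pyboard.py
```
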
@dpgeorge (Member)

The more general way to solve this problem is to automatically detect the features of the target uPy (e.g. floating point or not) and then only run the relevant tests. This is already done to some extent, where it detects whether certain modules or methods exist (using the print("SKIP") mechanism).

There would then be only two ways to run the tests: either you run a binary executable on the local machine, or you use pyboard.py to run uPy remotely. This would keep the current behaviour, where "./run-tests" runs locally and "./run-tests --pyboard" runs remotely via pyboard.py. With proper feature testing, ports like esp8266 and pic16bit would also gain a test suite without any extra effort (or any special cases to handle those ports).

But adding feature testing takes some effort; especially something like the existence of stack checking is hard to detect.
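
For reference, the print("SKIP") convention mentioned above looks roughly like this inside an individual test (a sketch of the pattern, not a particular file from the suite):

```python
# Probe for an optional module; if the target build lacks it, print SKIP
# and exit, and run-tests records a skip instead of a failure.
try:
    import ujson
except ImportError:
    print("SKIP")
    raise SystemExit

print(ujson.dumps([1, 2, 3]))  # the actual test body runs only when supported
```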

@danicampora (Member Author)

> The more general way to solve this problem is to automatically detect the features of the target uPy (e.g. floating point or not) and then only run the relevant tests. This is already done to some extent, where it detects whether certain modules or methods exist (using the print("SKIP") mechanism).

Yes, it's way more generic, and it seems nice to let the test framework decide by itself what to test and what not to, but I don't think this is a good idea. Imagine that someone commits a patch that by mistake disables certain features in your port (like floating point), or breaks some other functionality for which you have a "smart" way to detect whether it should be tested; the smart mechanism decides to skip the test, and at the end of the day you keep seeing green balls in your test results. Yes, you can see at the end of the log which tests were skipped, but one of the reasons to automate testing is not to have to check the results manually. So the feature is broken, but nobody knows it until eventually somebody decides to check the logs (looking for something else) and notices that a bunch of tests are being skipped, or worse, you release your software and soon after that the bug reports come in.

In my experience, tests should only be skipped for very good reasons, and it should be done explicitly, with a clear comment.

With the proposed patch, to run the tests locally you can still do ./run-tests, and to run them on the pyboard you do ./run-tests --target pyboard; is that not nice enough ;-)?

Maybe it's a good idea to be able to detect the target board automatically, but is it really worth the effort? How much value does it add?

@pfalcon (Contributor) commented Jun 25, 2015

Just my 2 cents, guys: I agree that this patch doesn't scale, and I also prefer to make tests auto-skipping whenever possible. But auto-skipping isn't always possible, so manual filter lists will need to be maintained. I also understand that this patch is @danicampora trying to get a quality baseline for his product while working on a myriad of other things. So I don't think it's fair to ask him to do "much better" than, for example, what the pyboard had.

> Imagine that someone commits a patch that by mistake disables certain features in your port (like floating point), or breaks some other functionality for which you have a "smart" way to detect whether it should be tested; the smart mechanism decides to skip the test

To answer this concern of yours, however: a sudden rise in the number of skipped tests is an error too. There are even tools which try to automate that, e.g. testr for Python. I hate these tools, because they try to burden a programmer with the job of a test engineer, which is indeed a separate role, requiring full-time work on big projects. Hopefully we don't have such a big project, and can KISS our testing approach (evolving it is good of course, but as a gradual, easy process, not as a burden).

@danicampora (Member Author)

> I hate these tools, because they try to burden a programmer with the job of a test engineer, which is indeed a separate role, requiring full-time work on big projects

Yes, testing is indeed a full-time job.

> To answer this concern of yours, however: a sudden rise in the number of skipped tests is an error too

I fully agree; that's why I wrote that "tests should only be skipped for very good reasons". As I see it, we have categories defined by the folder names (e.g. basics, float, misc). We can skip by category, which is OK if a port doesn't support a feature set. But within a category, the number of skipped tests should be low.

> Just my 2 cents, guys: I agree that this patch doesn't scale

What part doesn't scale? Having to maintain the list of skipped tests manually? Then again, we should try to keep the number of skipped tests very low. Maybe we could move the list of skipped tests to a separate file, which would help keep the run-tests script cleaner (see the sketch below).
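
A hypothetical sketch of that idea; the file name, format, and helper below are illustrations, not something this PR adds:

```python
import os

def load_skip_list(target):
    """Read per-target skips from a file such as skipped.wipy: one test
    path per line, with '#' starting a comment (format is assumed)."""
    skip_tests = set()
    path = "skipped." + target
    if os.path.exists(path):
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#"):
                    skip_tests.add(line)
    return skip_tests

# run-tests could then consult this set instead of a hard-coded list:
skip_tests = load_skip_list("wipy")
```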

@pfalcon (Contributor) commented Jun 25, 2015

> Having to maintain the list of skipped tests manually?

Yep, and maybe also a way to select which set to use. But that's not important right now.

> Maybe we could move the list of skipped tests to a separate file, which would help keep the run-tests script cleaner

Let's keep it simple, merge as is (I assume you guys did your homework on that --target flag), and move on to something more interesting like #876.

@danicampora (Member Author)

> I assume you guys did your homework on that --target flag

If by homework you mean testing that it works as expected, yes it does ;-)

> and move on to something more interesting like #876.

I am already there :-)

@dpgeorge (Member)

Merged in 186b355. json float tests split out in e44c1d3.

@dpgeorge closed this on Jun 25, 2015.
@danicampora (Member Author)

> Merged in 186b355. json float tests split out in e44c1d3.

Thanks, I was going to submit another PR for exactly the same thing :-)

@danicampora deleted the wipy-tests branch on July 28, 2015.
tannewt pushed a commit to tannewt/circuitpython that referenced this pull request Nov 23, 2018