Testing and coverage - Instruments #246
Comments
Your comments line up pretty much with reality. The last time I made a huge test coverage push I was working my way through the coveralls.io reports. Here's the history: a lot of the instrument drivers were written before we had test coverage requirements. Then I did what you already suggested; the tests were written assuming the drivers work 😆 and were not done to exhaustively test all the code paths. They should, but don't. Probably for my sanity at the time, since I was already spending dozens of hours just writing tests while trying to add as much coverage as I could! I've learned a lot about testing over the years, and of course if I could do things differently I would. For the core, I've been leaning on

One thing I'd ideally like is to have a tool that can run every (relevant) property of a driver against actual hardware, record the IO, and save all the relevant info in a blob of some form. Then, unit tests could pick up those files and use them to populate a test suite covering all the generic positive cases, like we do right now. Then we'd only have to write tests by hand for edge and negative cases.
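Very roughly, that capture-and-replay flow could look something like the sketch below. Everything in it (the JSON layout, the FakeConnection helper, and the command/response values) is invented for illustration, not something IK provides today:

```python
# Purely a sketch of the idea -- the JSON layout, FakeConnection helper, and
# the command/response values are invented, not anything IK ships today.
import json

# 1) A recording tool run against real hardware could dump something like:
capture = {
    "driver": "SomeMultimeter",
    "exchanges": [
        {"sent": "MEAS:VOLT:DC?", "received": "+1.234E+00"},
    ],
}
with open("some_multimeter_capture.json", "w") as fh:
    json.dump(capture, fh, indent=2)

# 2) Generated unit tests could then replay the capture through a fake
#    connection that checks the driver sends exactly what was recorded.
class FakeConnection:
    def __init__(self, exchanges):
        self._exchanges = list(exchanges)

    def query(self, cmd):
        expected = self._exchanges.pop(0)
        assert cmd == expected["sent"], f"unexpected command: {cmd!r}"
        return expected["received"]

with open("some_multimeter_capture.json") as fh:
    replay = json.load(fh)

conn = FakeConnection(replay["exchanges"])
# A real generated test would put the actual driver on top of `conn` and read
# back its properties; here the replay path is exercised directly.
assert conn.query("MEAS:VOLT:DC?") == "+1.234E+00"
```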
Thanks for the history! Was it in fact helpful for future development to go through individual instruments, assume they work, and write tests without the hardware? If so, it might be worth putting together a simple todo list and checking instruments off as we go. For my part I'd be happy to contribute tests if it's useful for the project. I'm far less experienced with this, so it would also be a learning opportunity. The question remains (assuming the answer to the usefulness question is yes): what would the order from most helpful to least helpful be?

For the core: Using

Finally, your idea of a tool that runs cases against hardware and automatically builds a test suite for all the generic cases sounds really interesting. Do you have an idea of how the initial cases would be generated? One way, maybe a bit too naive, might be to use the docstring examples directly. For example,
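purely as an illustration (the instrument class, property, and values in this sketch are invented, not taken from an actual IK driver), such a docstring-driven test might look like this:

```python
# Hypothetical driver used only to illustrate the doctest idea; the name,
# property, and return value are made up.
class SomeMultimeter:
    @property
    def voltage(self):
        """Read back the DC voltage.

        Example:
            >>> dmm = SomeMultimeter()
            >>> dmm.voltage
            1.234
        """
        return 1.234


if __name__ == "__main__":
    import doctest

    # doctest picks up examples in property docstrings as well as methods.
    doctest.testmod()
```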
Oh yes, it's caught lots of problems, the biggest ones being the current effort to move to pint.

I'm not 100% sure a doctest would work for this, just because you would need to store in the docstring the exact data that the instrument is sending to the computer, even if it's not in the final human-readable form. I think doctests are great and might be helpful for IK to have, just not a solution to this particular problem.

My initial thoughts on how to do this might involve a new metaclass attached to

Then, another tool (or tools) could be used to consume those files and either populate them with content (by running against actual hardware) or run the tests using the captured data. We already support loading drivers based off entries in a yaml file, so that's at least one part checked off.
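As a very rough sketch of the metaclass part of that idea (the class names and registry attribute here are invented; nothing like this exists in IK today), the metaclass could simply collect every property a driver defines, so an external tool knows what to record or replay:

```python
# Rough sketch only -- RecordableMeta and _recordable_properties are invented
# names, not part of InstrumentKit.
class RecordableMeta(type):
    """Collect the names of all properties a driver exposes, so an external
    tool can iterate them: record IO against real hardware, or replay a
    previously captured file in unit tests."""

    def __new__(mcs, name, bases, namespace):
        cls = super().__new__(mcs, name, bases, namespace)
        cls._recordable_properties = [
            attr for attr, value in namespace.items()
            if isinstance(value, property)
        ]
        return cls


class SomeDriver(metaclass=RecordableMeta):
    @property
    def voltage(self):
        return 0.0

    @property
    def current(self):
        return 0.0


print(SomeDriver._recordable_properties)  # ['voltage', 'current']
```

A recording tool could then walk that list for each driver named in the yaml file and dump the resulting IO to capture files, while the test generator consumes the same files.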
Could pyvisa-sim be useful here for simulated drivers?
RE: pytest-recording, that could be useful for more complicated instruments. We actually already have a spot where you can (optionally, with some missing documentation) send all the IO at the boundary of IK to the logger (https://github.com/Galvant/InstrumentKit/blob/master/instruments/abstract_instruments/comm/abstract_comm.py#L217). I'm sure this could be adapted to capture the IO. Then with that, we can parse it to match what we already do for our unit tests with
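On the capture side, a plain logging handler attached to that boundary logger might already get most of the way there. A sketch, with the caveat that the logger name and message format below are guesses; the actual names used in abstract_comm.py would need to be checked:

```python
# Sketch of capturing the boundary IO via the logging module. The logger name
# "instruments" is an assumption -- check abstract_comm.py for the logger and
# message format IK actually uses.
import logging


class IOCaptureHandler(logging.Handler):
    """Store every record emitted by the communicator layer so the
    sent/received data can later be dumped to a capture file."""

    def __init__(self):
        super().__init__(level=logging.DEBUG)
        self.records = []

    def emit(self, record):
        self.records.append(record.getMessage())


capture = IOCaptureHandler()
comm_logger = logging.getLogger("instruments")  # assumed logger name
comm_logger.setLevel(logging.DEBUG)
comm_logger.addHandler(capture)

# ... talk to the real instrument here ...

for line in capture.records:
    print(line)  # parse these into sent/received pairs for the test suite
```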
That's good to hear and should probably justify another push to increase coverage and get more testing in. I went through the coverage for all instruments and put together the following task list. It looks pretty intense; however, a lot of instruments have very minor issues that would not influence something like the pint migration. Let me know what you think of the list. I'd be happy to spend some time on integrating more instrument tests under the assumption that it all works. These tests would be required either way, even once a recording system or similar is in place, since that would likely require the hardware to be present, correct?

Here's the list:

Instruments without tests
Utilities files without full testing
Instruments with major test issues
Instruments with minor test issues
Instruments with very minor test issues
Love it!! Thanks @trappitsch. When I get a moment I'll fix #248, which will help with this.
Sounds good, thanks! I'll start working on new classes then and update the list accordingly, i.e., add my name to what I'm currently working on and a checkmark once it's completed and merged. I assume there won't be so many volunteers grabbing cases to write tests for that we'd have to move the coordination of who's doing what out of this thread. Does that sound reasonable?

Priorities: I guess top to bottom makes the most sense in this case.
I don't think any particular order is important. It's not like any one instrument is more important than another for this. I originally just went in alphanumeric order and skipped over ones that were going to be a PITA at the time. Getting in the groove and crunching it out without demotivation is more important.
I'll be adding some coverage for the MC1 in my migration-to-pint branch, since coveralls seems to think that the coverage has decreased from my changes.
Just looked through the coverage reports again and indeed, all physical instruments seem to be fully tested now. This should hopefully help a bit with development of further milestones and changes. That being said: there is still more to be done 😃 Looks like the next part would be the abstract instruments and the generic_scpi instruments. This is closer to the core now though... what do you think @scasagrande? What would be the logical next step, and what are you looking for? What is giving you the biggest headache in your current developments?
That's a good question. Other than the remaining test coverage that you mentioned, another todo is to take a look at the docs. The badge says passing, but the link also says it hasn't been built in a few years. Whoops. Something clearly broke in the autobuild pipeline a while ago. Maybe we should build something into CI to test that the docs build?
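One low-effort option would be a pytest check along these lines (the "doc" source directory is an assumption; it would need to point at wherever the Sphinx conf.py actually lives):

```python
# Hypothetical CI check that the Sphinx docs still build cleanly; -W turns
# warnings into errors. The "doc" source directory is an assumption.
import subprocess
import sys


def test_docs_build(tmp_path):
    result = subprocess.run(
        [sys.executable, "-m", "sphinx", "-W", "-b", "html", "doc", str(tmp_path / "html")],
        capture_output=True,
        text=True,
    )
    assert result.returncode == 0, result.stdout + result.stderr
```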
Oh, I never noticed, but yes, that's a good point, the docs might deserve an update :) They still build; I built them recently offline just to make sure I got the reST syntax right for including the Blu class. I suspect it's a setting on readthedocs. Could you have a look there to see if there are some failed builds, etc.?
With the recent preparations by @scasagrande in PR #243 to replace quantities with pint, tests and what they cover have moved more to the forefront of my thinking. I have been running some coverage analysis of the whole instruments package and looked through some classes. Some conclusions:
- The visa_communicator.py file does not seem to have tests at all. Most parts are however fairly well covered; some improvements could surely be made with more test cases.
- glassmanfr.py and agilent34410a.py.
- newportesp301.py: while a test file exists, it only seems to contain one individual test; most of the functionality is however not covered.

How to improve on the testing cases is tricky, but can be done. The core base is probably the most important one that should be checked with a coverage analysis.
Improving tests for instruments: such tests can be written without the physical instrument being available, provided the instrument works as expected with the current implementation of IK. The latter, however, is important to ensure prior to writing a test, in order to not end up with a test suite that passes for failing / non-working sections. Are there any instruments that are currently known to be "not working" for some reason? The source code often has comments with todo and fixme remarks indicating additional checking and maybe problems. Tests should be written with this in mind.
Surely, having the physical device available and testing the code along with writing the test would be the ideal scenario. However, that is not likely to be feasible in most cases.
I thought I'd put this up here for discussion. Maybe we can come up with a way to improve feature implementations and ongoing enhancements and changes (#244, #243, ...), as well as future ones.