
Unit testing and platform testing #5

Open
cilki opened this issue Sep 14, 2018 · 7 comments

Comments

@cilki
Collaborator

cilki commented Sep 14, 2018

Like every library, OSHI needs a solid set of unit tests for the API. With the Driver design pattern, a mock driver can be created that returns configurable values with configurable timings. This should be sufficient for thoroughly testing the API and caching layers.
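
As a rough sketch of what that could look like (the `MemoryDriver` name and shape here are hypothetical, standing in for whatever form the Driver pattern ends up taking in OSHI):

```java
import java.util.concurrent.TimeUnit;

// Hypothetical driver interface; not anything in OSHI today.
interface MemoryDriver {
    long queryTotalMemory();
}

// Mock driver that returns a configurable value after a configurable delay,
// letting unit tests exercise the API and caching layers deterministically.
class MockMemoryDriver implements MemoryDriver {
    private final long value;
    private final long delayMillis;

    MockMemoryDriver(long value, long delayMillis) {
        this.value = value;
        this.delayMillis = delayMillis;
    }

    @Override
    public long queryTotalMemory() {
        try {
            TimeUnit.MILLISECONDS.sleep(delayMillis); // simulate driver latency
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return value;
    }
}
```

A test could then assert, for example, that a caching layer wrapping the driver calls queryTotalMemory() only once within its TTL.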

Testing the drivers (platform testing) will require some additional infrastructure. Travis CI isn't a good fit for this kind of testing because it only has a few images. The platform tests should ideally run on every architecture of every major OS family and distribution, which means at least 60+ (virtual) machines. That is likely to take a long time, which is another reason why platform testing should be separate from unit testing.
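
One way to keep the two suites separate (a sketch only, assuming JUnit 5 tags such as @Tag("platform") on the platform tests; the profile and tag names are illustrative):

```xml
<profiles>
  <profile>
    <id>platform-tests</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-surefire-plugin</artifactId>
          <configuration>
            <!-- Run only tests tagged "platform" in this profile -->
            <groups>platform</groups>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>
```

The default build would carry the inverse (`<excludedGroups>platform</excludedGroups>`), so a plain `mvn test` stays fast while the slow matrix runs only when the profile is activated.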

Docker could reduce the burden for Linux testing, but the sandbox effect may also reduce the benefit of testing in a container. The same could be said (to a lesser degree, I think) for virtual machines.

Most projects wouldn't bother with testing on so many platforms, but I argue that this is important for OSHI considering its purpose.

@dbwiddis
Member

I'm all for platform testing, but I don't know a cheap/free way to do it. "mvn test" on all my VMs works for now...

@cilki
Collaborator Author

cilki commented Sep 15, 2018

One thing we have to establish is whether it is meaningful to run platform tests on VMs or containers. I think a Driver that is tested on a bare-metal machine would also work on a VM, but I'm not so sure the reverse holds.

@dbwiddis
Member

dbwiddis commented Sep 17, 2018

VMs, I believe, attempt to emulate bare metal as closely as they can. I can't speak for containers. But there are corner cases that don't get tested; oshi/oshi#620, for example, would never have been found in any "spin up" testing environment. I don't think this is as critical, though... we don't change things often, so once a driver is written and tested thoroughly enough, it should be fine.

@dbwiddis transferred this issue from oshi/oshi Jan 4, 2019
@YoshiEnVerde
Collaborator

Without money involved, we might have to depend on contributors for platform testing.

I wouldn't want to burden anybody with having to test all drivers on an ever-growing list of supported architectures every time we fix or update them.

We'll always have most Windows and *nix architectures covered (both bare metal and virtualized), but we'll struggle with the rarer/enterprise architectures, like Solaris.

@YoshiEnVerde
Collaborator

If we can build an easy/cheap way to run some kind of test kit against the drivers for a specific architecture, we could ask people to attach a test report to any issue they raise.

I'm thinking something that might not take more than a single jar download and 10~15 minutes of execution to produce a report.

Maybe a simple piece of code that just instantiates OSHI, identifies the architecture, then runs through every method available for that arch?
It would then build a simple checklist of every call, with either a simple "passed" or a basic description of the failure.
It wouldn't cover every test case, of course, but we'd at least have a general positive test of each driver in use...
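
A bare-bones sketch of that idea (the class name is invented; a real kit would recurse into sub-components and format a proper report):

```java
import java.lang.reflect.Method;
import oshi.SystemInfo;
import oshi.hardware.HardwareAbstractionLayer;

public class DriverReport {
    public static void main(String[] args) {
        SystemInfo si = new SystemInfo();
        System.out.println("OS: " + si.getOperatingSystem());
        HardwareAbstractionLayer hal = si.getHardware();
        // Walk every public no-arg getter on the hardware layer and
        // record a pass/fail line for each call.
        for (Method m : HardwareAbstractionLayer.class.getMethods()) {
            if (m.getParameterCount() != 0 || !m.getName().startsWith("get")) {
                continue;
            }
            try {
                Object result = m.invoke(hal);
                System.out.println("PASS " + m.getName() + " -> " + result);
            } catch (Exception e) {
                System.out.println("FAIL " + m.getName() + ": " + e);
            }
        }
    }
}
```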

@cilki
Copy link
Collaborator Author

cilki commented Jan 4, 2019

I like the idea, but we want to know about more than just failures. A driver may also return an incorrect result. To detect that, either the user enters their system information manually from a known source (hopefully not from OSHI itself), or the reporting application calculates it and compares on the fly.

The first option sounds much better because of the maintenance burden the second would create.

Seems like there should be something already out there that does this. It needs to be serverless and must produce a report that can be included in a gist or issue.
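
Failing an existing tool, a minimal version of the first option might look like this (the expected.properties file and its key are invented for illustration):

```java
import java.io.FileReader;
import java.util.Properties;
import oshi.SystemInfo;

public class ExpectedValuesCheck {
    public static void main(String[] args) throws Exception {
        // Known-good values the user entered by hand from a trusted source.
        Properties expected = new Properties();
        try (FileReader reader = new FileReader("expected.properties")) {
            expected.load(reader);
        }
        SystemInfo si = new SystemInfo();
        long actualMem = si.getHardware().getMemory().getTotal();
        long expectedMem = Long.parseLong(expected.getProperty("memory.total.bytes"));
        System.out.println("memory.total.bytes: "
                + (actualMem == expectedMem ? "MATCH" : "MISMATCH (got " + actualMem + ")"));
    }
}
```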

@dbwiddis
Copy link
Member

dbwiddis commented Jan 4, 2019

A common question I ask of users reporting a WMI-based bug is to give me the output of the wmic command-line equivalent. I'd like that for the "WMI" drivers, at least... in general, a lot of the "native" information we fetch has command-line equivalents that are useful for debugging.
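
For example (the class name is invented; the comments show the kind of command-line equivalent being asked for):

```java
import oshi.SystemInfo;

public class WmiSpotCheck {
    public static void main(String[] args) {
        SystemInfo si = new SystemInfo();
        // On Windows, processor details like these come from WMI/registry-backed
        // code paths; compare by hand against the command-line equivalent, e.g.
        //   wmic cpu get Name
        System.out.println(si.getHardware().getProcessor());
    }
}
```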
