Add tests #33
Sorry for simply pushing to master. I meant to publish the tests as a branch first. Anyway, 3185d2f adds some tests.
I only implemented tests for SoundFile so far. I'll still work on testing all the global functions.
Are the tests supposed to be executed with the following?
I have no experience with testing in Python, but I keep reading about nose and py.test ... what about using one of those? I also read about tox, which seems to be useful for testing on different versions of the interpreter ...
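To make the py.test suggestion concrete, here is a minimal sketch of what a py.test-compatible test module looks like; the helper and its name are hypothetical stand-ins, not code from the actual PySoundFile test suite. py.test auto-discovers files named `test_*.py` and functions named `test_*`, so running `py.test` (or `python -m pytest`) in the project root executes everything, and tox would then invoke that same command once per interpreter version.

```python
# Hypothetical example module, e.g. test_example.py (illustrative only,
# not from the real PySoundFile test suite).

def frames_to_samples(frames, channels):
    # Hypothetical helper standing in for library code under test.
    return frames * channels

def test_frames_to_samples():
    # py.test collects this automatically because of the test_ prefix.
    assert frames_to_samples(100, 2) == 200
```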
I use |
Cool, thanks! I also tried it with |
I think we should structure the tests differently: We could use This would be a shift from unit tests to integration tests, which I think is a better fit for PySoundFile. We can of course make some individual unit tests where necessary, but I don't think we need them for all functions/methods.
I don't think it matters much if we test Feel free to write additional tests for the functions though.
I think it does matter a lot, because we simply can't infer anything about the functions by testing the methods, because they are not even called.
I don't think that would be the right thing to do, because it would repeat code, leading to an unnecessary maintenance burden.
Nope. Unit tests should cover the basic units. The smallest testable units we have are the methods. The bulk of our tests should test the methods. Once we know that the methods are correct, the functions will be correct implicitly. Sorry to be so blunt about this, but the point of unit tests is to test the basic building blocks.
Yes. I think it makes hugely more sense to completely test the functions and then only test the remaining features on the methods. In the end we should have tested everything, of course.
I don't get it. Why? How?
I have no problem with the bluntness.
Why? You haven't given a reason yet. The point of unit tests is to test the basic building blocks, and then build on that. If you know that your basic building blocks work as intended, it is easy to reason about errors in higher-level abstractions. This doesn't work the other way. A quick round of googling:
They all say that you should unit test first. After that, add all the integration tests you like. Also, they all harp on how useful unit tests are during the design process. I wholeheartedly agree, from my own experience in a few projects. Integration tests don't have that property. One article also says that unit tests should not involve the file system. I'm willing to talk about this; maybe we can find a way to refactor the existing unit tests so they don't involve the file system. But as for testing the functions and not the methods, you are wrong. Sorry.
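One common way to keep tests off the file system is to feed the code under test an in-memory `io.BytesIO` buffer instead of a real path. The sketch below is self-contained: `parse_header` is a hypothetical stand-in for library code that reads from an open file object (PySoundFile itself accepts file-like objects, as mentioned in this thread, but the real calls would go through libsndfile).

```python
import io

# Sketch: a test that never touches the file system.  `parse_header`
# is a hypothetical stand-in for code that normally reads from an
# open file object; it is NOT the real PySoundFile implementation.

def parse_header(fileobj):
    # Hypothetical: read a 4-byte magic tag from the start of the stream.
    return fileobj.read(4)

def test_parse_header_without_filesystem():
    # Fake WAV-ish bytes held entirely in memory.
    buf = io.BytesIO(b'RIFF' + b'\x00' * 8)
    assert parse_header(buf) == b'RIFF'
```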
Matching pull request in #40.
Sorry, I thought I did: repetition. If we write a number of tests for the methods and then write the same tests for the functions, that's twice the work for no gain. I don't want to discuss further whether our tests are or should be unit tests or rather integration tests, because those are just names and nobody knows what they mean anyway. At least everyone is contradicting each other (and sometimes themselves) in their definitions, which are mostly very vague anyway. I'd rather give a concrete example: I think one of the many things we should test is reading a given number of frames from a given file or file-like object and checking if the resulting NumPy array is exactly as we'd expect.

Case 1: How would we do this using We cannot read a file without opening it, so we'll have to write a test fixture which opens the file for us. Then we write the tests which use this fixture. When a test fails, it will either fail in the fixture or in our test using Once this test passes, we know that the fixture and

Case 2: How would we do this using We wouldn't need a fixture for opening the file, because that's done internally, so we'll end up with less test code. When a test fails, it will either fail in Once this test passes, we know that

Summary: I must admit there is the theoretical possibility that there is a bug in Long story short, that's why I'm suggesting that we should test the overlapping functionality of the
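The two cases above can be sketched as py.test-style tests. To keep the sketch self-contained and runnable, `FakeSoundFile` and `fake_read` below are tiny stand-ins for the real `SoundFile` class and module-level read function; they mimic only the open-then-read shape discussed here, not the real API.

```python
# Self-contained sketch of Case 1 vs Case 2.  FakeSoundFile and
# fake_read are hypothetical stand-ins, NOT the real PySoundFile API.

class FakeSoundFile:
    def __init__(self, name):
        self.name = name           # the real class would open the file here

    def read(self, frames=-1):
        return [0.0, 0.0]          # the real method returns a NumPy array

def fake_read(name, frames=-1):
    # Function-level wrapper: open + read in one call.
    return FakeSoundFile(name).read(frames)

# Case 1: the method-level test needs fixture work to open the file first.
def test_method_read():
    f = FakeSoundFile('test.wav')  # fixture step, repeated per test
    assert f.read() == [0.0, 0.0]

# Case 2: the function-level test needs no fixture; opening is internal.
def test_function_read():
    assert fake_read('test.wav') == [0.0, 0.0]
```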
Maybe there is a misunderstanding then. I would not propose to test everything for both We indeed cannot read a file without opening it. Thus, we will have to instantiate a Instantiating the object explicitly has the main advantage of being able to test

Here's an example: Say we only test the functions. This will mean that we have a very small number of test classes, with a very big number of test methods. Every error will be attributed to the functions.

Say we only test the methods instead. This will mean that we have a bigger number of test classes, with a smaller number of test methods each. Errors will be attributed to the methods that actually caused the errors, not higher-level wrappers.

You really are genuinely wrong about this. I have done a few projects with tests. High-level tests are nice. They are a form of documentation. But the important thing is to test the smallest possible parts. This forces you to think about the minute details of your program's inner workings. Oftentimes, a program works fine in broad strokes, but there are edge cases and unintended side effects that you don't notice in day-to-day work. This is what testing is all about: you try to discover the edge cases of your innermost secrets, so they won't blow up in your face when some other part of the program uses them later.

Of course, none of this has anything to do with #44, which is entirely valid at any rate.
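The method-level organization described above (more, smaller test classes, so failures point at the unit that actually broke) might look like this `unittest` sketch; all class and method names are illustrative, and `FakeSoundFile` is again a hypothetical stand-in rather than the real class.

```python
import unittest

# Sketch of "a bigger number of test classes, with a smaller number of
# test methods each".  FakeSoundFile is a hypothetical stand-in.

class FakeSoundFile:
    def __init__(self, name):
        self.name = name

    def read(self):
        return [0.0]

class TestSoundFileInit(unittest.TestCase):
    # Failures here are attributed to construction/opening only.
    def test_stores_name(self):
        self.assertEqual(FakeSoundFile('test.wav').name, 'test.wav')

class TestSoundFileRead(unittest.TestCase):
    # Failures here are attributed to reading only.
    def test_read_returns_data(self):
        self.assertEqual(FakeSoundFile('test.wav').read(), [0.0])
```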
Probably. I hope we can clear that up. Just to be absolutely clear: I'm talking only about the overlapping functionality between methods and functions.
Maybe that's the difference in our views. I'd propose to test both.
That sounds a bit sloppy, but I guess it depends on how this is actually implemented.
Well here's another difference.
True. But the test framework doesn't know that.
Here we also disagree.
But it's just not possible to test them in complete isolation. In both cases the exact same steps have to be taken; it's just that in my suggested scenario we have to write much less fixture code (with its maintenance cost and potential bugs). We can try to test the underscore methods, but I suspect this could be quite impractical because even more additional setup code would be necessary. I suspect that it might be more appropriate to design the test cases for the public methods/functions in a way that lets us be sure that all code paths we care about are actually executed.
Is this part of the argument?
Sure, but we can read the backtrace.
True. But the error might actually originate from somewhere else, e.g. the test fixture. So we still have to read exactly the same backtrace.
Probably. I've been wrong before ...
I agree. But as I'm trying to say, in our case the smallest possible part is, e.g., "open + read". We can test "open" first, but that doesn't change anything in our discussion.
I totally agree with your general statements. But the reality in our case is that

```python
f = SoundFile('test.wav')
data = f.read()
```

is on the same level of abstraction as

```python
data = sf.read('test.wav')
```

There is nothing broad-strokier in one or the other. There may be unintended side effects in the method, which we would be made aware of in both cases. But probably my mistake is somewhere here ...
This is probably our main point of contention here. Everything else follows from this. If we try to test only the basic building blocks and trust that the rest is correct implicitly, then testing at a method level makes sense, as the methods are the building blocks everything is built from. If we test everything anyway (including duplicated functionality), then there is no real difference where we start, and we might as well start at the function level if that requires less code.

Let's back up one step though. We should try to test (more or less) all code paths. I would argue that it is rather easy to see all the code paths in, say,

By similar reasoning, I would not test the particulars of

This is a way of testing every possible code path with as few tests as possible. I don't see any way of achieving the same thing by testing only the high-level functions. Also, it is much harder to reliably find all the code paths in
I agree that our goal should be to check as many code paths as possible (ideally all). I think we should decide on a case-by-case basis whether it's worth making a separate test for an underscore-prefixed helper function or not.
Now that we are changing so many things, we should make sure that we don't break stuff.