#summary The original post from the developer mailing list, written up as a wiki page for the whole community
This post tries to summarize how my thoughts about TestedBy, and more specifically about a new approach to unit tests, have changed (evolved) during the last year and the last months of active development on this project. Why isn't it a blog post, as a follow-up to the original one? A blog post would have more visibility, and maybe attract more interest and comments. The answer is that it isn't a blog post YET, because I'd like to share my thoughts with the community before making them our "official" position in the blogosphere. Ok, let's go.
I hope you remember well my "a new approach to unit tests" blog post, which was the genesis of this project. No? Read it before going on: http://www.javalinux.it/wordpress/?p=116
As you know, TestedBy has changed a bit in these months (the last 2 or 3 with active development), not only in code but also in some of the ideas behind it.
I've tried to keep track of my ideas in a mind map; you'll find it attached to this message.
Generally speaking, what I depicted in my old blog post was a system keeping annotations in classes under test that point to test classes (or in some cases to specific test methods). The advantages of this approach were mainly two:
- Design by Contract (viewing tests as contract definitions). The sugar here was the opportunity to put the TestedBy annotation also on superclasses (even abstract ones, or even interfaces) and inherit the test annotations.
- Running only the tests stressing a specific class. IOW, run the tests stressing a class of interest (e.g. the one you just compiled), giving you confidence you aren't breaking the test suite without running the entire suite itself.
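A minimal sketch of what such an annotation could look like. Note that the annotation name, retention and string-based syntax here are my illustration of the idea, not the actual TestedBy API:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical annotation: tests are referenced as plain strings so the
// production code has no compile-time dependency on the test code.
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD})
@interface TestedBy {
    String[] value();
}

// Class under test pointing at the test class that stresses it.
@TestedBy("StackTest")
class BoundedStack {
    // ...
}
```

A runner could then read these annotations reflectively and launch only the listed tests.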
The main concern about this approach was code clutter: you end up putting a lot of annotations in production code (of course you can have many tests stressing a single class, or even a single method).
So (as you know, after discussing it with some of you) I've shifted the focus a bit, from an annotation-centric approach to a more general metadata approach. The idea is to collect a set of metadata representing the links between classes under test and test classes (and eventually finer-grained links between methods) and use them for many purposes. The pluses of this approach are mainly two:
- Metadata can be collected from different sources: annotations (with some syntactic sugar introduced), instrumentation of tests during test execution, maybe a dedicated user interface.
- Metadata can be serialized and/or used in a second step for various goals: running the "right" tests will of course remain a central feature, but also graphical representation, code navigation in the IDE, and more dynamic uses (see the mind map and below for a detailed description of the ideas depicted there).
Let me try to walk through my mind map and define some important concepts. First of all, how can metadata be collected? There are two main approaches:
- Metadata are defined by the user.
  - The user can define the class under test and its tests using metadata in his own code (the original idea). It's already implemented, and I've tried to add some syntactic sugar (e.g. if the test class is in the same package as the class under test, you just need the class name). As said in the original post, annotations are pure strings, so the code under test has no dependency on the test code, at least at compile time. It could also be useful for defining the contract of the class under test, considering the opportunity to define tests stressing classes and interfaces in a hierarchical manner (i.e. a test defined on an interface will stress all classes implementing it).
  - The user can define metadata with a dedicated editor, maybe part of our future Eclipse plugin. No code exists for this yet, it's just an idea. Anyone interested in contributing?
- Metadata are collected by instrumentation. This is one of the big news items and a difference from the original blog post idea: metadata are automatically collected while the tests run, with an approach very similar to a test coverage tool. We have already implemented this big feature (Alessio, would you like to explain it better to the community?). A feature that I'd like to see, and that is not yet implemented, is the hierarchical approach used for annotations (here we have to go in the opposite direction: if a test stresses all classes in a hierarchy, it is in fact defining a test, or a contract if you prefer, on the uppermost class/interface of that hierarchy).
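To give an idea of how instrumentation-based collection might work (a simplified illustration, not the actual TestedBy implementation, and all names are mine), a runner could record which production classes each test touches while it executes:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical registry mapping each test class name to the set of
// production classes it exercised during a run. A real implementation
// would populate it from bytecode instrumentation, like a coverage tool.
class TestLinkRegistry {
    private final Map<String, Set<String>> links = new HashMap<>();

    // Called (by instrumented code) whenever a production class is hit.
    void record(String testClass, String classUnderTest) {
        links.computeIfAbsent(testClass, k -> new HashSet<>()).add(classUnderTest);
    }

    // Query used by the runner: which tests stress this class?
    Set<String> testsStressing(String classUnderTest) {
        Set<String> result = new HashSet<>();
        for (Map.Entry<String, Set<String>> e : links.entrySet()) {
            if (e.getValue().contains(classUnderTest)) {
                result.add(e.getKey());
            }
        }
        return result;
    }
}
```

With such a registry in place, "run only the right tests" becomes a lookup by class name instead of running the whole suite.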
The second important concept we can identify in my mind map, related to test metadata, is the "hierarchy of tests". What do I mean by that? The opportunity to keep track of tests stressing a hierarchy of classes, or IOW the opportunity to define a contract on the uppermost type (class or interface) in a hierarchy of types. It's quite an innovative concept applied to tests, and in fact it was my first and central idea when I wrote the original blog post. What I love about this idea is the opportunity to attach a contract (or, if you prefer, to express behaviour) to classes and to methods of Java interfaces. It's a matter of fact that the creator of a Java interface has poor means to define the expected behaviour of its methods: he can only formally define the type boundaries of input and output, and typically expresses the expected behaviour in Javadocs, without any formal verification. A lot of examples of this can be found in the JDK API: just think of the Comparable or Collection interfaces. Why not express the same thing in a more formal way that can be used to verify the correctness of the implementations? I think some complex rules currently spelled out in Javadocs could be defined formally (as code) and, much more importantly, verified in some way on each implementation. Isn't it so? Ok, that's why a test hierarchy could be cool, but what does it mean for metadata collection and use? Returning to my mind map, both methods of collecting metadata (by the user or by instrumentation) should keep track of the possible hierarchy, of course from different points of view: user-defined metadata will probably be defined on the uppermost type where possible, fixing the contract for that type (with the goal described before), while instrumented metadata are of course collected bottom-up and moved up the type hierarchy when TestedBy identifies a test stressing all the leaves of a class hierarchy.
In the second case we cannot use it as a contract, but I think it could be very useful to give this kind of evidence to our users, letting them identify patterns in their code and refactor it by extracting a higher-level contract, exactly as they extract an interface when common type definitions are identified. One more time, helping them with our Eclipse plugin would be cool too. The last thing to remember about the "hierarchy of tests" is that it can work if and only if we can keep track of the concrete class under test, and so provide APIs to define tests in a generic manner (this is also useful for the advanced concept of "Generic Tests Injection", on which I'm working these days and which I'll introduce below). Let me explain with an example (note how the IUnderTest instances are created in TestClass):
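The original code sample did not survive in this copy of the post, so the following is only my sketch of the idea it described: a contract test written against an interface, obtaining concrete instances through a factory method so the same test can stress every implementation. All names here (including IUnderTest) are illustrative:

```java
// Hypothetical interface whose contract we want to verify generically.
interface IUnderTest {
    int size();
}

// A contract test written against the interface, not a concrete class.
// Subclasses (or the runner) supply the concrete instance to stress.
abstract class IUnderTestContract {
    // Factory method: the one place each implementation plugs itself in.
    protected abstract IUnderTest createUnderTest();

    // A contract rule every implementation must honour.
    boolean sizeIsNeverNegative() {
        return createUnderTest().size() >= 0;
    }
}

// One concrete implementation...
class EmptyBag implements IUnderTest {
    public int size() { return 0; }
}

// ...and its binding to the contract test.
class EmptyBagContractTest extends IUnderTestContract {
    protected IUnderTest createUnderTest() { return new EmptyBag(); }
}
```

The point is that the contract lives on the uppermost type, and each implementation only has to say how to build itself.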
In this example there is also an introduction of "Generic Tests Injection" with NullVerifier.java. Ignore it for the moment and have a look at it again after my explanation of the idea below.
Other concepts related to test metadata in my mind map are: "Navigate through", "Graphical representation", "Keep track of test results". I don't think they need comments, because their goals are quite clear, aren't they? Some words can be spent on the Eclipse plugin. It would merit a dedicated mind map and discussion, because my ideas on this point aren't so well defined. Anyway, let me try to describe the most useful functions I can identify for it:
- It could be used to navigate through test metadata. I'm imagining something like a right click in the code and a menu item saying "tests stressing this code", opening a popup with all the tests stressing it. Of course a click on a specific test opens an editor on it.
- A graphical representation of test metadata (IOW the links between classes under test and test classes). A representation of the test hierarchy would be cool too.
- Of course it should keep track of failed tests and give evidence of them with a warning, or much better with dedicated icons like the FindBugs ones.
- A graphical editor of test metadata: a tool giving users the opportunity to define their own metadata without annotations.
- It should launch the TestedBy runner when the user saves a class. In this way a modified class will be compiled and tested at every modification. COOOOL!
I'd like to spend some words also on the concept of "dynamically verified mocks". If you know Mockito (of course it shouldn't be the only mock framework supported, but it will be one of the supported frameworks for sure, and maybe the first one) you know that you can define a mock and its behaviour in this manner:
TypeOne mockOne = mock(TypeOne.class);
when(mockOne.methodOne()).thenReturn(null);
But the question is: am I mocking a valid behaviour? IOW, TestedBy metadata could be used to dynamically verify that a mocked object doesn't break the contract. How? By running the tests defined on the mocked class on the mock itself. Ok, to do this we have to do a lot of things, like intercepting the Mockito object, extracting it and running the right tests on it before even considering running the real test for which the mock has been defined. I'm not saying it's easy... I'm saying it's cool :P
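To make the idea concrete, here is a sketch only: a dynamic proxy stands in for a real Mockito mock stubbed with `thenReturn(null)`, and a hard-coded check stands in for the contract tests that TestedBy metadata would look up. All names and the contract itself are hypothetical:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Hypothetical mocked type with a (hypothetical) contract:
// methodOne must never return null.
interface TypeOne {
    String methodOne();
}

class MockContractChecker {
    // Run the contract of TypeOne against any instance, including a mock.
    // A real implementation would find these tests via TestedBy metadata
    // instead of hard-coding them.
    static boolean respectsContract(TypeOne candidate) {
        return candidate.methodOne() != null;
    }

    // Stand-in for a mock whose methodOne is stubbed to return null.
    static TypeOne nullReturningMock() {
        InvocationHandler handler = (proxy, method, args) -> null;
        return (TypeOne) Proxy.newProxyInstance(
                TypeOne.class.getClassLoader(),
                new Class<?>[]{TypeOne.class},
                handler);
    }
}
```

Such a check would flag the null-returning stub as an invalid mock before the real test ever runs with it.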
No comments about the test runner and the Maven/Ant plugins: the mind map is, in my opinion, sufficient to explain all my thoughts about them. Of course feel free to ask questions.
The last concept I've put in my mind map is "Generic Tests Injection". What do I have in mind? Since we have to keep track of the currently executed test class and class under test (and we can also keep track of the method under test), why not define "generic tests" expressing common checks like shouldReturnNull, shouldNotAcceptNullParameters and so on? It's an idea expressed a long time ago by John here on the mailing list, which I'm trying to review and expand. What we need is some generic tests that can generate (or in some manner mock) inputs for any method and run it. TestedBy metadata (user-defined by annotation or user interface, in this case of course) will "assign" these tests to some class/method under test. The TestedBy test runner will be responsible for instantiating the class under test and running the right generic tests on the methods linked to them. It should be some kind of extension of what the runner already has to do to run tests against a class hierarchy. Of course a lot of thought and testing is still needed, but I can see a number of pluses in this new feature for TestedBy:
- We can try to build a real community contributing open generic tests. Of course generic tests should be easy to write (it's our job to define great APIs), and it would be cool if they supported different languages. In particular it would be great to support at least one dynamic language (let's say Groovy), one functional language (let's say Scala) and one rule language (let's say Drools)... not only because each of them has its own peculiarities, but also because buzzwords help in building communities :P
- It could evolve to support a full BDD approach.
- Defining generic contracts is a great plus and a differentiator from other tools.
- It could support some other cool tools for mocking or generic test status (e.g. Byteman from JBoss).
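A generic test of the kind described above could look roughly like this sketch (the class name, the verify entry point and the whole API shape are my assumptions, not the real TestedBy design): a reflective check, reusable against any class under test, that one-argument public methods reject null:

```java
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

// Hypothetical generic test: for every public one-argument method of the
// class under test, passing null should raise an exception rather than
// being silently accepted.
class ShouldNotAcceptNullParameters {
    static boolean verify(Object underTest) {
        for (Method m : underTest.getClass().getDeclaredMethods()) {
            if (m.getParameterCount() == 1 && !m.getParameterTypes()[0].isPrimitive()) {
                try {
                    m.invoke(underTest, new Object[]{null});
                    return false; // null was accepted: contract broken
                } catch (InvocationTargetException expected) {
                    // the method rejected null: fine, keep checking
                } catch (IllegalAccessException e) {
                    // inaccessible method: skip it
                }
            }
        }
        return true;
    }
}

// A class under test that honours the contract.
class StrictGreeter {
    public String greet(String name) {
        if (name == null) throw new IllegalArgumentException("name");
        return "hello " + name;
    }
}
```

The runner would only need the TestedBy metadata to know which classes this generic test is "assigned" to.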
Any comments? Note we need help in all areas:
- First of all, completing the current implementation and going for a first release. I'll make a post tonight with what are IMHO the minimum requirements for a release.
- Ideas, discussions and concerns in all areas, mainly about "Generic Tests Injection".
- Help, help, and one more time help with the Eclipse plugin.
I look forward to any comments.