A very tiny testing framework inspired by minitest/minitest.cr.
- This framework is opinionated.
- It uses power asserts by default. There are no `assert_xyz` helpers, just power asserts (with few exceptions).
- It uses the spec syntax for test-case structure (`describe`, `test`, `before`, `after`). Reasons: no test-case name clashes when using `describe`, and no forgetting to call `super` in setup/teardown methods.
- No nesting of `describe` blocks. IMO nesting of those blocks is an anti-pattern.
- No let-definitions, only `before`/`after` hooks. Mostly use local variables.
- Tests have to be started explicitly with `Microtest.run!`; there is no at-exit hook.
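Since there are no let-definitions, shared setup and teardown go into `before`/`after` hooks. A hypothetical sketch of the global-transactional-fixtures use case (the `DB` helper module and its methods are assumed for illustration, not part of Microtest):

```crystal
describe Database do
  # hypothetical sketch: DB is an assumed helper module.
  # Each test runs inside a transaction that is rolled back afterwards.
  before do
    DB.begin_transaction
  end

  after do
    DB.rollback_transaction
  end

  test "insert is visible inside the transaction" do
    DB.insert("users", name: "alice")
    assert DB.count("users") == 1
  end
end
```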
Add this to your application's `shard.yml`:

```yaml
development_dependencies:
  microtest:
    github: ragmaanir/microtest
    version: ~> 1.2.1
```
And add this to your
require "../src/microtest" include Microtest::DSL Microtest.run!
```crystal
describe MyLib::WaterPump do
  test "that it pumps water" do
    p = MyLib::WaterPump.new("main")
    p.enable
    p.start
    assert(p.pumps_water?)
  end

  test "that it pumps with a certain speed" do
    p = MyLib::WaterPump.new("main", speed: 100)
    p.enable
    p.start
    assert(p.pump_speed > 50)
  end

  test "this one is pending since it got no body"

  test "only run this focused test", :focus do
  end

  test! "and this one too since it is focused also" do
  end
end
```
Run the tests with:

```shell
crystal spec
```

You can provide the seed to run the tests in the same order:

```shell
SEED=123 crystal spec
```
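The seed makes the otherwise random test order reproducible. Conceptually it works like seeded shuffling (this is a sketch, not Microtest's actual implementation):

```crystal
# Conceptual sketch, not Microtest's actual code:
# shuffling with a Random seeded from ENV["SEED"] makes
# the test order deterministic for a given seed value.
seed = (ENV["SEED"]? || Random.new_seed.to_s).to_u64
tests = ["test_a", "test_b", "test_c"]
shuffled = tests.shuffle(Random.new(seed))
```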
## Power Assert Output
```crystal
describe AssertionFailure do
  test "assertion failure" do
    a = 5
    b = "aaaaaa"
    assert "a" * a == b # "a" * 5 is "aaaaa", so this fails
  end
end
```
## Microtest Test Output (default reporter)
Select the used reporters:

```crystal
Microtest.run!([
  Microtest::DescriptionReporter.new,
  Microtest::ErrorListReporter.new,
  Microtest::SlowTestsReporter.new,
  Microtest::SummaryReporter.new,
] of Microtest::Reporter)
```
```crystal
describe First do
  test "success" do
  end

  test "skip this"
end

describe Second do
  def raise_an_error
    raise "Oh, this is wrong"
  end

  test "first failure" do
    a = 5
    b = 7
    assert a == b * 2
  end

  test "error" do
    raise_an_error
  end
end
```
When focus is active:

```crystal
describe Focus do
  test "not focused" do
  end

  test! "focused" do
  end

  test "focused too", :focus do
  end
end
```
I am using guardian to run the tests on each change. The guardian task also uses the computer voice to report build/test failure/success. Run `bin/build` to run the tests, generate the `README.md` from `README.md.template`, and generate the images of the test outputs (using an alpine docker image).
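A guardian configuration for this setup could look roughly like the following (a hypothetical sketch; check guardian's own documentation for the exact format):

```yaml
# Hypothetical guardian.yml sketch: re-run the specs whenever
# source or spec files change. The exact schema may differ.
files: ./src/**/*.cr
run: crystal spec
---
files: ./spec/**/*.cr
run: crystal spec
```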
- Hooks (`before`, `after`, `around`), including global ones (e.g. for global transactional fixtures)
- Customizable reporters
- Capture timing info
- Randomization + configurable seed
- Reporter: list N slowest tests
- Write real tests for Microtest (uses the JSON report to check for correct test output). Tests are green now.
- JSON reporter
- Continuous Integration with Travis
- Generate README including examples from specs and terminal screenshots
- Print whether focus is active
- Ctrl+C to halt tests
- Fail fast
- Number of assertions
- Alternatives to nesting? (Use separate describe blocks)
- Group tests and specify hooks and helper methods for the group only
- save results to file and compare current results to last results, including timings
- Display correct line numbers. This is difficult since macros are used everywhere.
- Some assertion failures cause segfaults.