Testing

Testing is the process of writing and running automated verifications that your software performs as intended, in whole or in part. At its heart, this is an automation of a process you've performed countless times already: write a bit of code, run it, and see if it works. The difference is in the automation. Rather than performing these manual steps and relying on humans to do everything perfectly every time, let the computer handle the repetition.

Perl 5 provides great tools to help you write good and useful automated tests.

Test::More

Perl testing begins with the core module Test::More and its ok() function. ok() takes two parameters, a boolean value and a string describing the purpose of the test:
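
For example, a sketch of four assertions (the values and descriptions here are illustrative):

    ok(   1, 'the number one should be true'         );
    ok(   0, '... and the number zero should not'    );
    ok(  '', 'the empty string should be false'      );
    ok( '!', '... and a non-empty string should not' );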

Ultimately, any condition you can test for in your program should become a binary value. Does the code work as I intended? A complex program may have thousands of these individual conditions. In general, the smaller the granularity the better. The purpose of writing individual assertions is to isolate individual features to understand what doesn't work as you intended and what ceases to work after you make changes in the future.

This snippet isn't a complete test script, however. Test::More and related modules require the use of a test plan, which represents the number of individual tests you plan to run:
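
A minimal sketch, assuming the four assertions above live in the same file:

    use Test::More tests => 4;

    # ... followed by the four ok() assertions shown earlier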

The tests argument to Test::More sets the test plan for the program. This gives the test an additional assertion. If fewer than four tests ran, something went wrong. If more than four tests ran, something went wrong. That assertion is unlikely to be useful in this simple scenario, but it can catch bugs in code that seems too simple to have errors. (As a rule, any code you brag about being too simple to contain errors will contain errors at the least opportune moment.)

Running Tests

The resulting program is now a full-fledged Perl 5 program which produces the output:
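
Run directly with perl, it prints TAP roughly like this (the file name truth_values.t and the line numbers are hypothetical, and the exact diagnostic text varies with the Test::More version):

    1..4
    ok 1 - the number one should be true
    not ok 2 - ... and the number zero should not
    #   Failed test '... and the number zero should not'
    #   at truth_values.t line 4.
    not ok 3 - the empty string should be false
    #   Failed test 'the empty string should be false'
    #   at truth_values.t line 5.
    ok 4 - ... and a non-empty string should not
    # Looks like you failed 2 tests of 4.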

This format adheres to a standard of test output called TAP, the Test Anything Protocol (http://testanything.org/). Note that the failed tests provide diagnostic messages about what failed and where. This is a tremendous aid to debugging.

The output of a test file containing multiple assertions (especially multiple failed assertions) can be verbose. In most cases, you want to know either that everything passed or that x, y, and z failed. The core module Test::Harness interprets TAP and displays only the most pertinent information. It also provides a program called prove which takes the hard work out of the process:
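
Something like this, again with the hypothetical truth_values.t (prove's summary format varies between versions):

    $ prove truth_values.t
    truth_values.t .. 1/4
    #   Failed test '... and the number zero should not'
    #   at truth_values.t line 4.
    #   Failed test 'the empty string should be false'
    #   at truth_values.t line 5.
    # Looks like you failed 2 tests of 4.
    truth_values.t .. Dubious, test returned 2
    Failed 2/4 subtests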

That's a lot of output to display what is already obvious: the second and third tests fail because zero and the empty string evaluate to false. It's easy to fix that failure by inverting the sense of the condition with the use of boolean coercion (boolean_coercion):
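
That is, negate each false value with ! so the assertion receives a true boolean (a sketch of the two corrected lines):

    ok( !  0, '... and the number zero should not' );
    ok( ! '', 'the empty string should be false'   );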

With those two changes, prove now displays:
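
Roughly (the timing and summary details vary):

    $ prove truth_values.t
    truth_values.t .. ok
    All tests successful.
    Result: PASS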

Better Comparisons

Even though the heart of all automated testing is the boolean condition "is this true or false?", reducing everything to that boolean condition is tedious and offers few diagnostic possibilities. Test::More provides several other convenient functions to ensure that your code behaves as you intend.

The is() function compares two values. If they match, the test passes. Otherwise, the test fails and provides a relevant diagnostic message:
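
A sketch, with one passing and one deliberately failing comparison (the descriptions are illustrative):

    is( 4, 2 + 2,       'addition should hold steady across the universe'  );
    is( 'pancake', 100, 'pancakes should have a delicious numeric value'   );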

As you might expect, the first test passes and the second fails:
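
Assuming the failing comparison sits on line 8 of a hypothetical pancakes.t, the diagnostic looks something like:

    #   Failed test 'pancakes should have a delicious numeric value'
    #   at pancakes.t line 8.
    #          got: 'pancake'
    #     expected: '100'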

Where ok() only provides the line number of the failing test, is() displays the mismatched values.

is() applies implicit scalar context to its values. This means, for example, that you can check the number of elements in an array without explicitly evaluating the array in scalar context:
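
For example (the names are illustrative):

    my @cousins = qw( Rick Alex Kaitlyn Eric Corey );

    is( @cousins, 5, 'I should have only five cousins' );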

... though some people prefer to write scalar @cousins for the sake of clarity.

Test::More provides a corresponding isnt() function which passes if the provided values are not equal. Otherwise, it behaves the same way as is() with respect to scalar context and comparison types.
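
For example (illustrative):

    isnt( 'pancake', 100, 'pancakes should not be numerically delicious' );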

Both is() and isnt() perform string comparisons with the Perl 5 operators eq and ne. This almost always does the right thing, but for complex values such as objects with overloading (overloading) or dual vars (dualvars), you may prefer explicit comparison testing. The cmp_ok() function allows you to specify your own comparison operator:
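
A sketch; $cur_balance is an assumed variable holding an account balance:

    cmp_ok( 100, '<=', $cur_balance, 'I should have at least $100' );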

Classes and objects provide their own interesting ways to interact with tests. Test that a class or object extends another class (inheritance) with isa_ok():
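
A sketch, reusing the hypothetical RobotMonkey class that appears later in this chapter and assuming it inherits from both Robot and Monkey:

    my $chimpzilla = RobotMonkey->new;

    isa_ok( $chimpzilla, 'Robot'  );
    isa_ok( $chimpzilla, 'Monkey' );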

isa_ok() provides its own diagnostic message on failure.

can_ok() verifies that a class or object can perform the requested method (or methods):
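
Continuing the sketch (the method names are assumptions):

    can_ok( $chimpzilla, 'eat_banana' );
    can_ok( $chimpzilla, 'transform', 'destroy_tokyo' );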

The is_deeply() function compares two references to ensure that their contents are equal:
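
For example, using the CPAN module Clone to copy a nested structure:

    use Clone;

    my $numbers   = [ 4, 8, 15, 16, 23, 42 ];
    my $clonenums = Clone::clone( $numbers );

    is_deeply( $numbers, $clonenums,
        'Clone::clone() should produce identical structures' );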

If the comparison fails, Test::More will do its best to provide a reasonable diagnostic indicating the position of the first inequality between the structures. See the CPAN modules Test::Differences and Test::Deep for more configurable tests.

Test::More has several more test functions, but these are the most useful.

Organizing Tests

The standard CPAN approach for organizing tests is to create a t/ directory containing one or more programs ending with the .t suffix. All of the CPAN distribution management tools (and the CPAN infrastructure itself) understand this system. By default, when you build a distribution with Module::Build or ExtUtils::MakeMaker, the testing step runs all of the t/*.t files, summarizes their output, and succeeds or fails on the results of the test suite as a whole.
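
Either build system runs the suite as part of its test action, along these conventional lines:

    $ perl Build.PL    && ./Build test    # Module::Build
    $ perl Makefile.PL && make test       # ExtUtils::MakeMaker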

There are no concrete guidelines on how to manage the contents of individual .t files, though two strategies are popular:

  • Each .t file should correspond to a .pm file

  • Each .t file should correspond to a feature

The important considerations are maintainability of the test files, as larger files are more difficult to maintain than smaller files, and the granularity of the test suite. A hybrid approach is the most flexible; one test can verify that all of your modules compile, while other tests verify that each module behaves as intended.
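
For instance, a single compile-check file might look like this (the file and module names are hypothetical):

    # t/00-load.t: verify that every module compiles
    use Test::More tests => 2;

    use_ok( 'RobotMonkey'             );
    use_ok( 'RobotMonkey::FireBreath' );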

It's often useful to run tests only for a specific feature under development. If you're adding the ability to breathe fire to your RobotMonkey, you may want only to run the t/breathe_fire.t test file. When you have the feature working to your satisfaction, run the entire test suite to verify that local changes have no unintended global effects.
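
With prove, that looks something like:

    $ prove t/breathe_fire.t    # one feature's tests
    $ prove t/                  # the entire suite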

Coverage Testing and Profiling

Two other tools belong in your kit. The CPAN module Devel::Cover measures test coverage: it reports which statements, branches, and conditions your test suite actually exercises, which tells you whether a change to a given piece of code could be caught by an existing test. Devel::NYTProf profiles your code, showing where a run spends its time, so you can verify its performance characteristics as well as its correctness. Both are essential tools.
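
A sketch of typical invocations, assuming both modules are installed (cover and nytprofhtml are the command-line tools they ship with; myprog.pl is a hypothetical program):

    $ cover -test                 # run the test suite under Devel::Cover,
                                  # then report coverage
    $ perl -d:NYTProf myprog.pl   # profile a run of the program
    $ nytprofhtml                 # turn the resulting nytprof.out into HTML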

Other Testing Modules

Test::More relies on a testing backend known as Test::Builder. The latter module manages the test plan and coordinates the test output into TAP. This design allows multiple test modules to share the same Test::Builder backend. Consequently, the CPAN has hundreds of test modules available--and they can all work together in the same program.

  • Test::Exception provides functions to ensure that your code throws (and does not throw) exceptions appropriately.

  • Test::MockObject and Test::MockModule allow you to test difficult interfaces by mocking (emulating but producing different results).

  • Test::WWW::Mechanize allows you to test live web applications.

  • Test::Database provides functions to test the use and abuse of databases.

  • Test::Class offers an alternate mechanism for organizing test suites. It allows you to create classes in which specific methods group tests. You can inherit from test classes just as your code classes inherit from each other. This is an excellent way to reduce duplication in test suites. See the Test::Class series written by Curtis Poe at http://www.modernperlbooks.com/mt/2009/03/organizing-test-suites-with-testclass.html.

The Perl QA project (http://qa.perl.org/) is a primary source of test modules as well as wisdom and practical experience making testing in Perl easy and effective.
