
Contributor Guidelines

hackbinary edited this page Feb 11, 2011 · 8 revisions

Basic Guidelines

  • Please read through the Parrot GitHub workflow
  • Read through the Parrot workflow
  • Before committing to cardinal/master, any new features must have associated tests proving that they work (and providing regression coverage later).
  • Similarly, you must run “rake test:all” and get a “CLEAN FOR COMMIT” message at the end. If you don’t get that, there was a problem with the tests, and you should see an explanation of it.
  • You can use todo/skip to mark tests that fail for whatever reason, after either finding or creating an issue for it on GitHub. The issue number must be passed to those functions. (See “Tests and Issues” below.)

Build System

Cardinal uses Rake for its build/test system.

There still exists an old build system that’s slightly out of date now. I intend to remove it soon, after I add a “clean” target to the Rakefile.

Running just “rake” will build cardinal and test.pir. If you don’t want to rebuild test.pir just yet, you can specify “rake cardinal”.

All of the tests are in their own namespace(s) and there is always at least one target per test. e.g. t/array/uniq.t is test:array:uniq.
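As a rough sketch of that mapping, the helper below (the name target_for is invented here for illustration and is not part of the actual Rakefile) derives the rake target name from a test file path:

```ruby
# Hypothetical helper (not in the real Rakefile) showing how a test file
# path like t/array/uniq.t maps onto its rake target name.
def target_for(path)
  "test:" + path.sub(%r{^t/}, "").sub(/\.t$/, "").tr("/", ":")
end

puts target_for("t/array/uniq.t")  # => test:array:uniq
```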

Each subnamespace has a target :all (test:array:all) that will run all of the tests in that namespace. At the top level (test:all), this will run all the top level tests (which are grouped under test:basic), and the :all targets for the subnamespaces. It will also, at the end, print a short blurb of statistics about how long the test suite took to run (usually 4-5 minutes) and how many tests failed or passed. If there was a problem, you’ll see it here, usually with a reference to the file the problem was found in.

If there were no problems with the test suite, you’ll see “CLEAN FOR COMMIT” as the last line. Otherwise, you’ll see (a) message(s) indicating what went wrong. You should investigate and fix the problem, then run test:all again.

Tests and Issues

Test files are added to the Rakefile in their appropriate namespace, using the “test” function. Also make sure to add each new test to that namespace’s :all target (or test:basic if at the top level). Usually you’ll just use the test:all target to check whether the tree is clean. The following things can prevent a source tree being declared clean for commit:

  • More failures than were expected. This usually means there was a regression, although sometimes this seems to happen when something upstream changes.
  • Any test files failed to give any output. Usually this means that there was a parsefail in that file.
  • Any test files gave output that was not expected. This will be reported as an “unknown or confusing result”. It can happen when a test file uses puts incorrectly.
  • Any test files gave fewer test results than were expected. Usually this means that, while there wasn’t a parsefail, some implementation of something is broken.
  • Any test files gave more test results than were expected. This used to be more common, but when it happens now, it usually means that you forgot to update plan.
  • Any test files have expected failures that don’t have an associated Issue given. You’ll need to either find or create an issue on github, and pass that in a string as the second argument to todo/skip/skip_rest.

Writing Tests

There’s a small test library we use, so at the top of a test file, you should have:

require "Test"
include Test

At the top of a test file, plan(integer) is used to declare how many tests are expected to be conducted in the current file. Remember to increase this number when you add new tests.
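For illustration, here is a tiny plain-Ruby stand-in for the plan/pass/flunk behaviour described above. It assumes TAP-style output; it is not Cardinal’s actual Test library:

```ruby
# Illustrative stand-in for Cardinal's Test library (assumed TAP-style output).
$planned = 0
$count = 0

def plan(n)
  $planned = n
  puts "1..#{n}"  # declare how many tests this file will run
end

def pass(description)
  $count += 1
  puts "ok #{$count} - #{description}"
end

def flunk(description)
  $count += 1
  puts "not ok #{$count} - #{description}"
end

plan(2)
pass("something that works")
flunk("something that is broken")
```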

After that, you can use any of the following functions to conduct tests or report the result of a test:

  • pass(description) is used to record a passing test, with a description of what was tested.
  • flunk(description) is used to record a failing test, with a description of what was tested.
  • proclaim(cond, description) is similar to the two above, except that you pass a boolean cond indicating pass or fail.
  • ok(cond, description) is a convenience method that just calls proclaim.
  • nok(cond, description) is like ok, except that it first negates cond. So if cond is true, the test fails.
  • is(got, expected, description) tests for equality between got and expected, and passes if they are equal.
  • isnt(got, expected, description) reports a pass if got and expected are not equal to each other.
  • isgt(got, expected, description) reports a pass if got is greater than expected.
  • isge(got, expected, description) reports a pass if got is greater than or equal to expected.
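As the descriptions suggest, each comparison helper reduces to a call to proclaim. A plain-Ruby sketch of that relationship (illustrative only; Cardinal’s real implementations may differ):

```ruby
# Illustrative sketch: every helper funnels into proclaim(cond, description).
$num = 0
def proclaim(cond, description)
  $num += 1
  puts "#{cond ? 'ok' : 'not ok'} #{$num} - #{description}"
  cond
end

def ok(cond, description);     proclaim(cond, description);      end
def nok(cond, description);    proclaim(!cond, description);     end
def is(got, expected, desc);   proclaim(got == expected, desc);  end
def isnt(got, expected, desc); proclaim(got != expected, desc);  end
def isgt(got, expected, desc); proclaim(got > expected, desc);   end
def isge(got, expected, desc); proclaim(got >= expected, desc);  end

is(2 + 2, 4, "arithmetic works")    # passes
nok([].include?(1), "1 not in []")  # passes
isgt(10, 3, "10 > 3")               # passes
```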

If there are failing tests that aren’t going to be fixed, you can use these to indicate that they’re expected:

  • todo(description, issue, count=1) is used for tests that simply fail. Calling it before the fail report will add a TODO to the report, with a note to “See issue #{issue}.” If there are many in a row that fail, you can pass a different count to let the TODO stretch through all of them.
  • skip(description, issue, count=1) is used when there’s a parsefail or a crash causes the test to exit prematurely. You use it like todo, except that you then comment out the code that causes the parsefail/crash as well as the test, because skip records a failure automatically.
  • skip_rest(description, issue) is like skip, but it has an automatic count equal to the number of tests left. It’s usually used for when an entire file crashes.
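To make the count parameter concrete, here is a plain-Ruby sketch of how todo might annotate the next count failure reports (illustrative only, not Cardinal’s implementation; the issue number "42" and test descriptions are invented):

```ruby
# Sketch: todo marks the next `count` failures as expected, citing an issue.
$todo = 0
$todo_note = nil

def todo(description, issue, count = 1)
  $todo = count
  $todo_note = "# TODO #{description}. See issue ##{issue}."
end

def flunk(description)
  suffix = $todo > 0 ? " #{$todo_note}" : ""
  $todo -= 1 if $todo > 0
  puts "not ok - #{description}#{suffix}"
end

todo("Float ranges unsupported", "42", 2)
flunk("Range#step with 0.5")    # annotated as TODO
flunk("Range#to_a with floats") # annotated as TODO
flunk("unrelated failure")      # a real, unexpected failure
```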

If you don’t give an issue number to those, test:all will not give a CLEAN FOR COMMIT message. Make sure that any todo/skip has a filed issue, and reference its number. Also make sure to note in that issue which tests are related and use the “test-fail” label.


Issues should be filed on GitHub for any test failure, bug, or soon-to-be-implemented feature. The following are the labels and how they should be used:

  • pir is attached to any issue that needs to be fixed somewhere in one of the PIR files.
  • parser is attached to any issue that needs to be fixed somewhere in the parser.
  • test-fail is attached to any issue that is related to a test failure. The description or a comment should note which tests are failing as a result of the bug. Make sure that the todo/skip in the test file references the issue number.
  • needs-info is attached to any issue that still needs more information before it can be fixed. Either research to determine the cause of the problem, or more information from the reporter.
  • action is attached to any issue for which action is possible/expected soon. Usually this is attached after needs-info is removed, but it’s possible for an issue to have both labels if only part of the problem still needs more info.
  • waiting-on is attached to an issue that’s waiting for another issue to be resolved first, often because it’s expected that the resolution of that issue will help with the issue that is waiting-on. Make sure to note which issue it’s waiting on, and also note this issue in the one it’s waiting on, with the waited-on label.
  • waited-on is attached to an issue that another issue is waiting-on.
  • has-branch is used to indicate that an issue has an associated branch pushed up to github. Such branches take the form issN, where N is the issue number.

Generally, any bug should have at least one of needs-info, waiting-on, or action.
