- How do I make sure I have the latest version of the Jacquard library?
- What configuration options are there?
- How do I use Checkstyle?
- What's PMD? How do I use it?
- How do I set test result visibility?
- How is code coverage measured?
- What is cross-testing?
- Why was the name "Jacquard" chosen?
- Where can I view the Javadoc?
- Where can I get support?
If you began using Jacquard before version 1.0.0 was released (March 9, 2024), your build.gradle file will reference the snapshot version of Jacquard. Make the changes shown in the commit "Use Jacquard 1.0.0 instead of snapshot".
You can always see the latest release number at Jacquard releases or on Sonatype. As of March 22, 2024, it is 1.0.1.
There are currently three configurable values:
- timeout (default: 10_000L), how many milliseconds to run a test before termination; a value of 0 means never to time out
- javaLevel (default: 17), the Java language level used for syntax-based graders
- visibility (default: Visibility.VISIBLE), the visibility of test results (except for JUnitTester results, which are specified differently)
To use the default values, call Autograder.init()
at the start of your program. Here's how to explicitly set other values:
Autograder.Builder builder = Autograder.Builder.getInstance();
// By default, tests time out in 10,000 ms if they don't complete.
builder.timeout(5000); // set timeout to 5 s
// By default, Java level 17 is used.
builder.javaLevel(11); // use Java level 11
// By default, all test results are visible.
builder.visibility(Visibility.HIDDEN); // hide test results
builder.build();
This can be written more concisely:
Autograder.Builder.getInstance()
.timeout(5000)
.javaLevel(11)
.visibility(Visibility.HIDDEN)
.build();
See also the Autograder configuration chapter (0:15-2:06) from Example 2.1: Going through a more complicated AutograderMain.
For general usage information, see the Checkstyle website, especially Checkstyle configuration.
Here is how to create a CheckstyleGrader
in Jacquard:
CheckstyleGrader checkstyleGrader = new CheckstyleGrader(
"config/checkstyle-rules.xml", // path to configuration file
1.0, // penalty per violation
5.0); // maximum penalty/points
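To apply the grader, pass it a target and collect the results. This sketch is an assumption: it presumes CheckstyleGrader offers the same grade(Target) call used in the PmdGrader example later on this page, and the class passed to Target.fromClass() is only illustrative.
// Build a target (substitute a class or directory from your own project).
Target target = Target.fromClass(FavoritesIterator.class);
List<Result> results = new ArrayList<>();
// grade() produces results that can be merged with those from other graders.
results.addAll(checkstyleGrader.grade(target));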
See also the CheckstyleGrader
javadoc.
We recommend putting your configuration file in your project's config/
directory so it is copied to Gradescope. We also recommend sharing it with
students so they can run Checkstyle in their
IDE (IntelliJ plugin,
Eclipse plugin)
before uploading. The IntelliJ plugin supports using a local configuration
file or accessing one via URL, so students don't need to download it
(but will need to configure the plugin to point to it).
For more detail, see Jacquard Example 0.
PMD (which is not an acronym) is a source code analyzer
capable of more complex checks than Checkstyle, such as whether the @Override
annotation is always used where permitted.
PMD rules are organized into rulesets, which, as the name suggests, are sets of rules.
You can make your own rulesets or use Java rulesets built into PMD, such as category/java/bestpractices.xml.
Jacquard's PMDGrader has two static factory methods:
- createFromRuleSetPaths(), which lets you specify one or more rulesets to be used in their entirety [used in Jacquard Example 0; a sketch appears below]
- createFromRules(), which lets you specify one ruleset and one or more rules from that ruleset [used in Jacquard Example 2]
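Here is a minimal sketch of createFromRuleSetPaths(). The argument order (penalty per violation, maximum penalty, then one or more ruleset paths) is an assumption modeled on the createFromRules() call shown later on this page, so confirm it against the PMDGrader javadoc:
// Assumed arguments: penalty per violation, maximum penalty, ruleset path(s).
PmdGrader pmdGrader = PmdGrader.createFromRuleSetPaths(
    1.0,
    5.0,
    "category/java/bestpractices.xml");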
There are PMD plugins for IntelliJ and Eclipse.
Gradescope specifies four levels of visibility in Autograder Specifications:
- hidden: test case will never be shown to students
- after_due_date: test case will be shown after the assignment's due date has passed. If late submission is allowed, then the test will be shown only after the late due date.
- after_published: test case will be shown only when the assignment is explicitly published from the "Review Grades" page
- visible (default): test case will always be shown
There is a one-to-one correspondence between these visibility levels and the enumerated type Visibility.
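Concretely, the correspondence is (each of these constants appears in the examples on this page):
- hidden → Visibility.HIDDEN
- after_due_date → Visibility.AFTER_DUE_DATE
- after_published → Visibility.AFTER_PUBLISHED
- visible → Visibility.VISIBLE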
Unless otherwise specified, all test results are immediately visible
to students.
Unit tests run through JUnitTester
(as
opposed to the cross-tester) must be annotated with @GradedTest
. The
attribute visibility
has the default value Visibility.VISIBLE
but
can be set to any other visibility. This code is from Jacquard Example 0:
@Test
@GradedTest(name = "works for empty list", points = 5.0, visibility = Visibility.AFTER_PUBLISHED)
public void iteratorOverEmptyList() {
FavoritesIterator<String> iterator = new FavoritesIterator<>(favoriteHotSauces0);
// No items should be returned.
assertFalse(iterator.hasNext());
assertThrows(NoSuchElementException.class, () -> iterator.next());
}
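To run such annotated tests and collect their results, the test class is passed to a JUnitTester. The sketch below is an assumption based on the patterns elsewhere on this page (a constructor that takes the test class and a run() method that returns results, like the cross-tester's run() below); the test-class name is illustrative, so check the JUnitTester javadoc for the exact API.
// Assumed API: construct with the test class, call run() to get a list of results.
JUnitTester junitTester = new JUnitTester(FavoritesIteratorTest.class); // illustrative class name
List<Result> junitResults = junitTester.run();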
The visibility level can be set for all other types of autograder results through the initial configuration.
The visibility level of a generated Result can be mutated by calling the changeVisibility(Visibility visibility) instance method or the static method Result.changeVisibility(List<Result> results, Visibility visibility), as shown:
// Use the default configuration, which includes full visibility.
Autograder.init();
final Target target = Target.fromClass(FavoritesIterator.class);
List<Result> results = new ArrayList<>();
// PMD results should be visible only after the due date.
PmdGrader pmdGrader = PmdGrader.createFromRules(
1.0,
5.0,
"category/java/bestpractices.xml",
"MissingOverride");
List<Result> pmdResults = pmdGrader.grade(target);
// Change visibility before adding to results.
Result.changeVisibility(pmdResults, Visibility.AFTER_DUE_DATE);
results.addAll(pmdResults);
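The instance method works the same way on a single Result; continuing from the block above:
// Hide an individual result with the instance method.
pmdResults.get(0).changeVisibility(Visibility.HIDDEN);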
Code coverage is measured using JaCoCo. We recommend having students run JaCoCo inside IntelliJ or Eclipse, because the plugins show which lines of code are exercised by the tests.
When creating a CodeCoverageTester, a Scorer must be provided to convert the line and branch coverage percentages into points. These concrete scorers are provided:
- LinearScorer, which uses a linear function of the line and branch coverage percentages
- LinearBranchScorer, which uses a linear function of the branch coverage percentage (ignoring line coverage)
- LinearLineScorer, which uses a linear function of the line coverage percentage (ignoring branch coverage)

If you want to write your own scorer, we suggest viewing LinearScorer.java.
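As a rough illustration of what a linear scorer computes (the equal weighting here is an assumption; see LinearScorer.java for the actual formula), a 10-point scorer that weights line and branch coverage equally would award 10 × (0.80 + 0.60) / 2 = 7 points to a submission with 80% line coverage and 60% branch coverage.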
Cross-testing is my term for running multiple sets of tests against multiple implementations. Most autograders only run instructor tests against student code. Jacquard also supports running student tests against multiple versions of instructor code.
Cross-testing using submitted test code is specified by a CSV file, such as
Example 2's student-tests.csv
:
|        | student | correct | buggy |
|--------|---------|---------|-------|
| size   | 10      | 5       | -5    |
| concat | 20      | 10      | -10   |
The header and first row mean:
- If the tests do not report any errors on the implementation of the size() method in the student package, 10 points are earned.
- If the tests do not report any errors on the implementation of the size() method in the correct package, 5 points are earned.
- If the tests do report errors on the implementation of the size() method in the buggy package, 5 points are earned.
The negative signs in the "buggy" column indicate that the tests are inverted (i.e., points are earned if they fail).
Test names must start with the name of the method under test, such as sizeWorksForEmptyList()
for tests of size()
.
This excerpt from Example 2's main()
method
shows how the cross-tester is programmatically created and run:
// Create CrossTester to run student tests on:
// * student code (20 points)
// * hidden correct implementation (15 points)
// * hidden buggy implementation (15 points)
// Grading detail is in student-tests.csv.
CrossTester crossTester = new CrossTester(
student.ILOSTest.class, // the test to run
"student-tests.csv" // the name of the CSV file
);
results.addAll(crossTester.run());
See also the Example 2 documentation for needed changes to config.ini
and the Example 2 cross-tester video.
The CSV files used for cross-testing made me think of looms, such as the looms created by Joseph Marie Jacquard, which were controlled by punched cards and so play an important role in computing history. Also, the starting letters correspond to Java or Java Autograder. Claude.ai suggested this backronym:
- Java
- Assignment
- Checking with
- Quality
- Unit-testing,
- Analysis,
- Reporting, and
- Diagnostics
The Javadoc is available at https://jacquard.ellenspertus.com/. There are also linked badges at the bottom of Markdown pages, such as this one.
There are two low-volume Google groups, jacquard-announce and jacquard-discuss.
You can also create issues (feature requests and bug reports).