Summary
`wheels test` produces output so sparse it's useless. Tutorial chapter 7 ("Run the tests") asks the user to write three specs and run them; the CLI says `0 passed` and provides nothing else: no list of discovered specs, no failure messages, no skip count, no errors. There's no way to tell whether the runner discovered zero tests, ran them and they all passed silently, or hit an error during collection.
Repro
```
$ wheels new blog && cd blog
# write tests/specs/models/PostSpec.cfc, controllers/PostsControllerSpec.cfc, browser/SignupFlowSpec.cfc per chapter 7
$ wheels doctor
...
✓ 3 test file(s) found
$ wheels test
Running core tests (sqlite)...
0 passed
$ wheels test --verbose
Running core tests (sqlite)...
0 passed
$ wheels test --filter=PostSpec
Running core tests (sqlite)...
0 passed
$ wheels test --reporter=verbose
Running core tests (sqlite)...
0 passed
```
`wheels doctor` confirms the runner sees the spec files, but `wheels test` does nothing visible with them.
Expected (per the chapter-7 docs)
```
Running tests in tests/specs...
PostSpec: 4 passed
PostsControllerSpec: 1 passed
SignupFlowSpec: 1 passed
Total: 6 passed, 0 failed, 0 errors
```
Suggested fix
- Always print `collected / passed / failed / skipped / errored` counts even when all are zero — the difference between "discovered 0 specs" and "discovered 6 specs, ran 0" is the entire signal.
- Add `wheels test --list` (or default behavior in verbose) that prints which spec files the runner discovered and which ones it would execute.
- Surface the underlying test-runner errors (population, datasource, collection failure) instead of swallowing them.
- Ship a `tests/populate.cfm` skeleton with `wheels new`, or generate one on `wheels generate model`. The chapter-7 troubleshooting block hints at populate.cfm but the current silent output never gets the user there.
- Optionally: `wheels generate test ModelName` to create both a spec stub and the matching populate entry.
Severity
Blocker for the chapter-7 use case ("write tests, run them, watch them pass"). The chapter is functionally unverifiable at this UX level.
Source
Fresh-VM onboarding journal, 2026-04-25, finding #7.