What's the idea?
The test suite currently mixes old contracts, new allocator behavior, and planned-but-not-yet-implemented telemetry. Much of it is still useful, but several tests are now stale and will fail against the current server.js. The biggest mismatches are around heartbeat_at, worker progress telemetry, the /api/v1/stats shape, and a few legacy assumptions.
Here is the review.
Overall assessment
The suite is still a strong base. It already covers:
- work assignment
- submit semantics
- reclaim behavior
- admin endpoints
- allocator bootstrap behavior
- virtual chunk allocation invariants
- late FOUND handling
- auth
That is the right backbone.
But right now the tests are in three generations:
1. Current valid tests
These still match your backend fairly well.
2. Tests for old or removed concepts
These refer to fields or tables that are no longer part of the live behavior.
3. Tests for telemetry/UI-support fields that were discussed later, but are not yet implemented in the backend code you pasted
These should either be skipped for now or rewritten only after the backend fields exist.
helpers.js review
helpers.js is mostly fine, but there are two issues.
A. alloc_order_vchunks table is obsolete
You create:
CREATE TABLE alloc_order_vchunks (...)
But your current backend no longer uses a materialized allocation-order table.
It uses the affine permutation directly.
So this table is dead test scaffolding and should be removed.
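For reference, the uniqueness property those permutation tests rely on can be sketched in plain JS. The multiplier and offset below are placeholders, not the backend's seed-derived values; the only assumption is the standard affine form `(a*i + b) mod n` with `gcd(a, n) === 1`, which visits every index exactly once.

```javascript
// Sketch of an affine-permutation walk over n virtual chunk indices.
// a and b are illustrative constants; the real backend derives them
// from the puzzle's alloc_seed.
function affinePermutation(n, a, b) {
  const out = [];
  for (let i = 0n; i < n; i++) {
    out.push((a * i + b) % n); // each index 0..n-1 appears exactly once
  }
  return out;
}

// With n = 10, a = 7 (coprime with 10), b = 3:
const order = affinePermutation(10n, 7n, 3n);
const unique = new Set(order.map(String));
```

This is why no materialized allocation-order table is needed: the walk is deterministic and collision-free by construction.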
B. heartbeat_at exists in test schema, but not in current backend migrations/logic
The test schema gives chunks a heartbeat_at column, but the current backend code you pasted:
- does not create heartbeat_at in production bootstrap
- does not add it with ALTER TABLE
- does not write it in /api/v1/work
- does not update it in /api/v1/heartbeat
- does not expose it in /api/v1/stats
So your in-memory schema is ahead of the actual backend.
That is dangerous because tests can pass against fake schema/features that prod does not really have.
Recommendation for helpers.js
Do this:
- remove alloc_order_vchunks
- remove heartbeat_at unless you implement it in backend now
- keep test schema as close as possible to real production bootstrap schema
Tests that are still good
These are conceptually good and should remain, with at most tiny adjustments.
/api/v1/work
These are good:
- missing name → 400
- no active puzzle → 503
- returns job id and range
- inserts assigned chunk
- reclaimed chunk preferred over fresh
- test chunk offered first / not twice
/api/v1/submit
These are good:
- mark completed on done
- mark FOUND
- wrong worker cannot submit another worker’s chunk
- invalid keys_scanned
- reject done without keys_scanned
- accept exact size
- accept overshoot
- reclaim on underscan
Admin/auth
These are good:
- set-puzzle validation
- activate-puzzle
- admin token middleware
Allocator correctness
These are also good:
- affine permutation uniqueness/determinism
- virtual chunk seeding behavior
- bootstrap midpoint / beginning / end
- no overlap
- exhaustion
- reclaimed before fresh
- vchunk_start / vchunk_end persisted
- late FOUND semantics
- test chunks excluded from stats
These are valuable regression tests and worth keeping.
Tests that are stale or currently incompatible
A. heartbeat_at tests are stale
This test is currently wrong against your backend:
test('updates heartbeat_at on a valid job', ...)
Why wrong:
- /api/v1/heartbeat in current backend updates assigned_at, not heartbeat_at
- heartbeat_at is not maintained
So this test does not match the current implementation.
What to do
Choose one:
Option 1: keep current backend as-is
Then rewrite the test to reflect actual behavior:
- heartbeat updates assigned_at
- no heartbeat_at assertions
Option 2: implement proper telemetry
Then keep the test idea, but update backend to:
- add heartbeat_at column
- set heartbeat_at = CURRENT_TIMESTAMP on assignment
- update only heartbeat_at in /heartbeat
- keep assigned_at as original job start time
This is the better long-term model.
B. /api/v1/stats shape tests are ahead of backend
These tests currently do not match the pasted backend:
stats response includes target_minutes, timeout_minutes, active_minutes
worker with assigned chunk exposes current_job fields and heartbeat_at
worker without assigned chunk has null current_job fields
Your current /api/v1/stats response does not include:
- target_minutes
- timeout_minutes
- active_minutes
- current_job_keys
- current_job_start_hex
- current_job_end_hex
- assigned_at
- heartbeat_at
So these tests are for a newer telemetry contract than the backend currently returns.
Recommendation
Mark these as:
- pending, or
- removed until the backend telemetry is implemented
C. current_vchunk_run string-format tests are becoming brittle
Examples:
expect(worker.current_vchunk_run).toBe('5..5');
expect(w1.current_vchunk_run).toMatch(/^\d+\.\.\d+$/);
These are fragile because your frontend is moving toward:
- separate numeric fields
- compact display formatting
- progress metadata
The backend should not be forced forever to expose only a preformatted string.
Better contract
Prefer tests for raw fields:
- current_vchunk_run_start
- current_vchunk_run_end
And only optionally keep current_vchunk_run as a backward-compatible convenience field.
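To illustrate the split: once the raw fields exist, the display string becomes a derived value that the dashboard (or a thin backend shim) can compute. This sketch assumes `vchunk_end` is exclusive, so the span [5, 6) renders as '5..5', matching the existing test expectations.

```javascript
// Derive the human-readable run string from raw numeric fields.
// Assumes an exclusive end index, as used by the vchunk allocator.
function formatVchunkRun(start, end) {
  if (start === null || end === null) return null; // no job assigned
  return `${start}..${end - 1}`;
}
```

Keeping only this formatting logic out of the contract is what makes the raw-field tests stable.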
D. Legacy allocator tests sometimes call default virtual allocator by accident
Example:
test('3. exhaustion: 503 when all sectors are done', ...)
This one seeds with:
seedPuzzle(db, { start_hex: ..., end_hex: ... });
But seedPuzzle defaults to ALLOC_STRATEGY_VCHUNKS, not legacy.
The test name and expectation talk about sectors, but the puzzle is seeded as virtual chunks.
That makes the test semantically confusing, even if it accidentally passes.
Fix
For tests about sector allocator, always do:
seedPuzzle(db, { ..., strategy: ALLOC_STRATEGY_LEGACY });
E. Some tests insert heartbeat_at manually into chunks
For example:
INSERT INTO chunks (... assigned_at, heartbeat_at) VALUES ...
But production backend does not use heartbeat_at.
So again, test scaffolding is ahead of real code.
Concrete list: tests to keep, rewrite, or remove
Keep as-is or nearly as-is
Keep:
- most /api/v1/work
- most /api/v1/submit
- admin auth
- admin activation
- affine permutation tests
- virtual chunk seeding tests
- bootstrap allocation tests
- overlap/exhaustion/reclaim priority tests
- late FOUND tests
- exclusion of test chunks from stats
Rewrite
Rewrite:
- heartbeat tests
- stats telemetry tests
- current_vchunk string-format tests
- any legacy-sector test that forgot to set strategy: ALLOC_STRATEGY_LEGACY
Remove
Remove:
- alloc_order_vchunks schema/table from helpers
- any assertions that depend on a materialized alloc-order table
- heartbeat_at only if you decide not to implement it
What is missing in the test suite now
Given your current project state, I would add these.
A. Progress telemetry tests
Once backend telemetry exists, add tests for:
- current_job_keys
- current_job_elapsed_seconds
- current_job_progress_percent
- null values when no job assigned
- stale worker still has telemetry but active=false
B. Worker row consistency tests
You now use worker dashboard data heavily. Add tests that ensure:
- assigned worker returns current chunk id
- current vchunk start/end are numeric
- run size equals vchunk_end - vchunk_start
- job hex range matches vchunk range for virtual allocator
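These invariants can be checked with a pure function before wiring them into supertest. The row shape here is hypothetical, and `virtualChunkSizeKeys` is assumed to be available alongside the worker fields; treat both as assumptions about the eventual stats contract.

```javascript
// Invariant check for a worker row from /api/v1/stats (field names are
// the ones proposed in this review, not yet guaranteed by the backend).
function workerRowInvariantsHold(w, virtualChunkSizeKeys) {
  if (w.current_vchunk_run_start === null) {
    // no job assigned: the job hex fields should be null together
    return w.current_job_start_hex === null && w.current_job_end_hex === null;
  }
  const runSize = w.current_vchunk_run_end - w.current_vchunk_run_start;
  const jobKeys =
    BigInt('0x' + w.current_job_end_hex) - BigInt('0x' + w.current_job_start_hex);
  // run size must be positive, and the hex range must equal
  // run size * virtual chunk size for the virtual allocator
  return runSize > 0 && jobKeys === BigInt(runSize) * BigInt(virtualChunkSizeKeys);
}
```

A Jest test would just assert this returns true for every worker in the stats payload.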
C. Migration-safe stats tests
Very important now that you migrated old prod data:
- completed vchunk sums are counted correctly
- started/completed virtual chunk counters use SUM(vchunk_end - vchunk_start)
- assigned chunk not counted as completed
- FOUND counted as completed
- no overlap double counting in total key coverage
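The counting rule itself is simple enough to model in plain JS before asserting it against the SQL. The status values 'completed' and 'FOUND' mirror the ones used elsewhere in this review; whether the chunk row stores FOUND in exactly that casing is an assumption to verify against the real schema.

```javascript
// Model of the completed-coverage rule: SUM(vchunk_end - vchunk_start)
// over finished rows only; assigned (in-flight) rows must not count.
function completedVchunks(chunks) {
  return chunks
    .filter(c => c.status === 'completed' || c.status === 'FOUND')
    .reduce((sum, c) => sum + (c.vchunk_end - c.vchunk_start), 0);
}

const sample = [
  { status: 'completed', vchunk_start: 0, vchunk_end: 3 }, // counts as 3
  { status: 'assigned',  vchunk_start: 3, vchunk_end: 4 }, // excluded
  { status: 'FOUND',     vchunk_start: 7, vchunk_end: 8 }, // counts as 1
];
```

The migration-safe test then asserts the SQL aggregate matches this model for seeded fixture rows.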
D. Bootstrap stage progression tests
You already test 0/1/2/3 nicely. Add one more:
- after stage 3, allocator uses affine probing and advances alloc_cursor
E. Reclaim timeout model
You have reclaim behavior, but not the periodic reclaim timer itself.
At minimum test the SQL logic separately.
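One way to test that logic separately is to extract the reclaim predicate into a pure function that mirrors the timer's WHERE clause. TIMEOUT_MINUTES and the timestamp normalization below are assumptions about the real implementation, not its actual code.

```javascript
// Minimal model of the reclaim-timeout predicate: a chunk is reclaimable
// when it is still assigned and its last heartbeat is older than the
// timeout. SQLite stores 'YYYY-MM-DD HH:MM:SS' (UTC), so normalize it
// into a form Date.parse accepts portably.
function isReclaimable(chunk, nowMs, timeoutMinutes) {
  if (chunk.status !== 'assigned') return false;
  const last = Date.parse(String(chunk.heartbeat_at).replace(' ', 'T') + 'Z');
  if (!Number.isFinite(last)) return false;
  return nowMs - last > timeoutMinutes * 60_000;
}
```

A unit test can then sweep edge cases (just inside the timeout, just past it, non-assigned statuses) without touching the timer loop.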
Best structural cleanup for the suite
I would reorganize server.test.js into these blocks:
- work.assignment.test.js
- submit.test.js
- stats.test.js
- admin.test.js
- allocator.legacy.test.js
- allocator.vchunks.test.js
- late-found.test.js
That makes it much easier to see which failures belong to:
- API contract
- allocator
- telemetry/dashboard
- admin flows
Right now the file is already large enough that splitting would help.
My main verdict
Your tests are not bad. The problem is not low quality.
The problem is that the suite has drifted slightly ahead of the backend in some telemetry areas.
The biggest mismatches are:
- heartbeat_at
- stats timing fields
- current job telemetry fields
- obsolete alloc_order_vchunks
- some legacy tests using default virtual strategy accidentally
So the right move is not to rewrite everything.
It is to do a focused cleanup pass.
Exact cleanup checklist
In helpers.js
Remove:
CREATE TABLE alloc_order_vchunks ...
CREATE UNIQUE INDEX idx_alloc_order_vchunks_order ...
CREATE UNIQUE INDEX idx_alloc_order_vchunks_chunk ...
Decide on heartbeat_at:
- remove from test schema now, or
- implement it in backend immediately and keep it
In server.test.js
Mark for rewrite:
- updates heartbeat_at on a valid job
- stats response includes target_minutes, timeout_minutes, active_minutes
- worker with assigned chunk exposes current_job fields and heartbeat_at
- worker without assigned chunk has null current_job fields
Fix legacy allocator tests to explicitly pass:
strategy: ALLOC_STRATEGY_LEGACY
Prefer raw run fields over string-only checks:
- current_vchunk_run_start
- current_vchunk_run_end
Recommended telemetry direction
Since your dashboard depends more and more on real-time worker status, I would strongly recommend implementing proper telemetry instead of dumbing the tests down.
That means:
- add heartbeat_at to chunks
- on assignment:
- assigned_at = CURRENT_TIMESTAMP
- heartbeat_at = CURRENT_TIMESTAMP
- on heartbeat:
- heartbeat_at = CURRENT_TIMESTAMP only (leave assigned_at untouched)
- in stats:
- expose assigned_at
- expose heartbeat_at
- expose current_job_keys
- expose current_job_elapsed_seconds
- expose current_job_progress_percent
Then the tests you already started writing become valid and useful.
Puzzpool test-suite patch plan
Goal
Bring the test suite back into full alignment with the current backend contract, while also preparing it for the next telemetry upgrade needed by the dashboard.
This patch plan covers:
- which tests to delete
- which tests to rewrite
- which backend telemetry fields to add
- exact replacement code blocks for helpers.js
- exact replacement code blocks for failing / stale test sections
1. Summary of current mismatch
The current test suite mixes 3 generations of behavior:
1. Current backend behavior
These tests are still valid and should stay.
2. Old or obsolete internals
These should be removed.
3. Planned telemetry fields not yet implemented in backend
These should be implemented in backend and then tested properly.
The biggest mismatches are:
- heartbeat_at exists in tests, but not in the current production backend schema/logic
- /api/v1/stats tests expect telemetry fields not yet returned
- alloc_order_vchunks exists in the test DB but is no longer used
- some “legacy allocator” tests accidentally seed the default virtual allocator instead of the legacy one
- some tests rely on formatted string fields where raw numeric fields would be more stable
2. What to delete
2.1 Delete obsolete table from helpers.js
Delete this whole block from the in-memory schema:
CREATE TABLE alloc_order_vchunks (
id INTEGER PRIMARY KEY AUTOINCREMENT,
puzzle_id INTEGER NOT NULL,
order_index INTEGER NOT NULL,
chunk_index INTEGER NOT NULL
);
CREATE UNIQUE INDEX idx_alloc_order_vchunks_order ON alloc_order_vchunks (puzzle_id, order_index);
CREATE UNIQUE INDEX idx_alloc_order_vchunks_chunk ON alloc_order_vchunks (puzzle_id, chunk_index);
Reason:
- current allocator uses affine permutation directly
- no materialized allocation-order table is used anymore
- keeping this in tests creates fake schema divergence
3. What to rewrite
3.1 Rewrite heartbeat tests
Current problem
The current tests assume:
- heartbeat_at exists
- /api/v1/heartbeat updates heartbeat_at
- assigned_at stays unchanged
This is the recommended future design, but the current backend pasted in the discussion does not implement it yet.
Decision
We should implement proper telemetry in backend and then keep the stronger heartbeat tests.
So:
- do not delete heartbeat tests
- rewrite them against the improved backend telemetry model below
3.2 Rewrite /api/v1/stats telemetry tests
Current problem
These tests expect fields that are not yet returned by backend:
- target_minutes
- timeout_minutes
- active_minutes
- current_job_keys
- current_job_start_hex
- current_job_end_hex
- assigned_at
- heartbeat_at
Decision
Implement these fields in backend and keep the tests.
3.3 Rewrite worker current-run assertions to prefer raw numeric fields
Current problem
Some tests assert only string formatting like:
- current_vchunk_run === '5..5'
- regex on current_vchunk_run
These are fragile.
Decision
Keep backward-compatible current_vchunk_run, but add and test:
- current_vchunk_run_start
- current_vchunk_run_end
Then only lightly test the formatted string.
3.4 Rewrite legacy allocator tests to explicitly seed legacy puzzles
Some tests talk about sectors / legacy frontier allocation, but seed the puzzle without passing an explicit strategy.
Since seedPuzzle() defaults to virtual allocator, that is semantically wrong.
All legacy allocator tests must explicitly use:
seedPuzzle(db, { strategy: ALLOC_STRATEGY_LEGACY })
4. New telemetry fields to add in backend
4.1 Add heartbeat_at to chunks
Why
assigned_at and heartbeat_at represent two different things:
- assigned_at = when the job was assigned to the worker
- heartbeat_at = most recent proof that the worker is still alive on that job
This distinction is required for:
- accurate dashboard progress
- cleaner stale-worker logic
- future worker diagnostics
Schema change
Add to production bootstrap / migration:
ALTER TABLE chunks ADD COLUMN heartbeat_at DATETIME;
And initialize existing rows if needed:
UPDATE chunks
SET heartbeat_at = assigned_at
WHERE heartbeat_at IS NULL AND assigned_at IS NOT NULL;
4.2 Persist worker job telemetry in /api/v1/stats
For every visible worker, expose:
- current_job_id
- current_job_keys
- current_job_start_hex
- current_job_end_hex
- assigned_at
- heartbeat_at
- current_vchunk_run_start
- current_vchunk_run_end
- current_vchunk_run
- current_job_elapsed_seconds
- current_job_progress_percent
Also add top-level config values:
- target_minutes
- timeout_minutes
- active_minutes
These are useful for dashboard logic and for tests.
4.3 Progress calculation model
Backend should compute progress as an estimate.
Recommended formula:
- elapsed_seconds = now - assigned_at
- estimated_done_keys = hashrate * elapsed_seconds
- current_job_keys = end - start
- progress_percent = min(100, estimated_done_keys / current_job_keys * 100)
Rules:
- return null if missing assigned_at, hashrate, or current assigned chunk
- clamp to [0, 100]
- stale worker may still have progress percent; active is a separate concept
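A worked instance of the formula, with purely hypothetical numbers: a worker reporting 1,000,000 keys/s that has held a 100,000,000-key job for 30 seconds is estimated at 30%.

```javascript
// Progress estimate per the rules above: null when inputs are missing
// or invalid, otherwise (hashrate * elapsed) / total_keys clamped to
// [0, 100]. Names are illustrative, not the backend's.
function progressPercent(hashrate, elapsedSeconds, totalKeys) {
  if (!(hashrate > 0) || !(totalKeys > 0) || !(elapsedSeconds >= 0)) return null;
  const pct = (hashrate * elapsedSeconds) / totalKeys * 100;
  return Math.min(100, Math.max(0, pct)); // clamp overshoot and noise
}
```

Note the clamp matters in practice: workers routinely overshoot their assigned range slightly, so the raw estimate can exceed 100.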
5. Exact backend implementation instructions
5.1 Schema migration additions
In production bootstrap area, add:
try { db.prepare("ALTER TABLE chunks ADD COLUMN heartbeat_at DATETIME").run(); } catch (_) {}
Then add normalization:
try {
db.prepare(`
UPDATE chunks
SET heartbeat_at = assigned_at
WHERE heartbeat_at IS NULL AND assigned_at IS NOT NULL
`).run();
} catch (_) {}
5.2 On fresh assignment and reclaim re-assignment, set heartbeat_at
Replace stmtInsertChunk with:
const stmtInsertChunk = db.prepare(`
INSERT INTO chunks (
puzzle_id, start_hex, end_hex, status,
worker_name, assigned_at, heartbeat_at, is_test,
sector_id, alloc_block_id,
vchunk_start, vchunk_end
) VALUES (?, ?, ?, 'assigned', ?, CURRENT_TIMESTAMP, CURRENT_TIMESTAMP, 0, NULL, NULL, ?, ?)
`);
Replace stmtTestChunkInsert with:
const stmtTestChunkInsert = db.prepare(`
INSERT INTO chunks (
puzzle_id, start_hex, end_hex, status,
worker_name, assigned_at, heartbeat_at, is_test
)
VALUES (?, ?, ?, 'assigned', ?, CURRENT_TIMESTAMP, CURRENT_TIMESTAMP, 1)
RETURNING *
`);
Replace stmtTestChunkReclaim with:
const stmtTestChunkReclaim = db.prepare(`
UPDATE chunks
SET status = 'assigned',
worker_name = ?,
assigned_at = CURRENT_TIMESTAMP,
heartbeat_at = CURRENT_TIMESTAMP
WHERE id = (
SELECT id
FROM chunks
WHERE puzzle_id = ? AND start_hex = ? AND end_hex = ? AND is_test = 1 AND status = 'reclaimed'
LIMIT 1
)
RETURNING *
`);
Replace reclaimed-chunk reassign SQL in /api/v1/work with:
const reclaimed = isReactivating ? null : db.prepare(`
UPDATE chunks
SET status = 'assigned',
worker_name = ?,
assigned_at = CURRENT_TIMESTAMP,
heartbeat_at = CURRENT_TIMESTAMP
WHERE id = (
SELECT id
FROM chunks
WHERE status = 'reclaimed' AND puzzle_id = ? AND is_test = 0
LIMIT 1
)
RETURNING *
`).get(name, puzzle.id);
5.3 Update /api/v1/heartbeat to only update heartbeat_at
Replace current heartbeat chunk update with:
db.prepare(`
UPDATE chunks
SET heartbeat_at = CURRENT_TIMESTAMP
WHERE id = ? AND worker_name = ? AND status = 'assigned'
`).run(job_id, name);
Do not update assigned_at here.
5.4 Add helper: computeWorkerProgressPercent()
Add this helper near the other pure helpers:
function computeWorkerProgressPercent(worker, currentJobKeys, assignedAtIso) {
if (!worker || !assignedAtIso) return null;
const hashrate = Number(worker.hashrate);
if (!Number.isFinite(hashrate) || hashrate <= 0) return null;
let totalKeys;
try {
totalKeys = BigInt(currentJobKeys);
} catch (_) {
return null;
}
if (totalKeys <= 0n) return null;
const assignedMs = Date.parse(String(assignedAtIso) + 'Z');
if (!Number.isFinite(assignedMs)) return null;
const elapsedSeconds = Math.max(0, (Date.now() - assignedMs) / 1000);
const estimatedDone = BigInt(Math.floor(hashrate * elapsedSeconds));
let pct = Number(estimatedDone * 10000n / totalKeys) / 100;
if (!Number.isFinite(pct)) return null;
if (pct < 0) pct = 0;
if (pct > 100) pct = 100;
return pct;
}
5.5 Extend worker stats query / mapping in /api/v1/stats
Add assigned chunk details query
Add this query in /api/v1/stats:
const assignedNow = puzzle ? db.prepare(`
SELECT
c.id,
c.worker_name,
c.start_hex,
c.end_hex,
c.assigned_at,
c.heartbeat_at,
c.vchunk_start,
c.vchunk_end
FROM chunks c
WHERE c.status = 'assigned' AND c.puzzle_id = ? AND c.is_test = 0
`).all(puzzle.id) : [];
Replace the current worker maps block with:
const workerChunkMap = {};
for (const c of assignedNow) {
const jobKeys = (BigInt('0x' + c.end_hex) - BigInt('0x' + c.start_hex)).toString();
workerChunkMap[c.worker_name] = {
current_job_id: c.id,
current_job_keys: jobKeys,
current_job_start_hex: c.start_hex,
current_job_end_hex: c.end_hex,
assigned_at: c.assigned_at,
heartbeat_at: c.heartbeat_at,
current_vchunk_run_start: c.vchunk_start ?? null,
current_vchunk_run_end: c.vchunk_end ?? null,
current_vchunk_run:
(c.vchunk_start !== null && c.vchunk_end !== null)
? `${c.vchunk_start}..${c.vchunk_end - 1}`
: null,
};
}
Replace const workers = visibleWorkers.map(...) with:
const workers = visibleWorkers.map(w => {
const current = workerChunkMap[w.name] || null;
const currentJobKeys = current ? current.current_job_keys : null;
const assignedAt = current ? current.assigned_at : null;
const progressPercent = current
? computeWorkerProgressPercent(w, current.current_job_keys, current.assigned_at)
: null;
let elapsedSeconds = null;
if (assignedAt) {
const assignedMs = Date.parse(String(assignedAt) + 'Z');
if (Number.isFinite(assignedMs)) {
elapsedSeconds = Math.max(0, Math.floor((Date.now() - assignedMs) / 1000));
}
}
return {
...w,
fresh: w.fresh === 1,
assigned_here: w.assigned_here === 1,
active: w.active === 1,
current_chunk: current ? current.current_job_id : null,
current_job_id: current ? current.current_job_id : null,
current_job_keys: current ? current.current_job_keys : null,
current_job_start_hex: current ? current.current_job_start_hex : null,
current_job_end_hex: current ? current.current_job_end_hex : null,
assigned_at: current ? current.assigned_at : null,
heartbeat_at: current ? current.heartbeat_at : null,
current_vchunk_run_start: current ? current.current_vchunk_run_start : null,
current_vchunk_run_end: current ? current.current_vchunk_run_end : null,
current_vchunk_run: current ? current.current_vchunk_run : null,
current_job_elapsed_seconds: elapsedSeconds,
current_job_progress_percent: progressPercent,
};
});
5.6 Add top-level stats timing fields
In the final res.json({...}), add:
target_minutes: TARGET_MINUTES,
timeout_minutes: TIMEOUT_MINUTES,
active_minutes: ACTIVE_MINUTES,
6. Exact replacement code block for helpers.js
Replace the full file with this:
'use strict';
const Database = require('better-sqlite3');
const {
seedVirtualChunks,
seedSectors,
defaultAllocSeedForPuzzle,
chooseDefaultVirtualChunkSize,
ALLOC_STRATEGY_LEGACY,
ALLOC_STRATEGY_VCHUNKS,
} = require('../server');
function createTestDb() {
const db = new Database(':memory:');
db.pragma('journal_mode = WAL');
db.exec(`
CREATE TABLE puzzles (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT NOT NULL,
start_hex TEXT NOT NULL,
end_hex TEXT NOT NULL,
active INTEGER NOT NULL DEFAULT 0,
test_start_hex TEXT,
test_end_hex TEXT,
alloc_strategy TEXT,
alloc_seed TEXT,
alloc_cursor INTEGER NOT NULL DEFAULT 0,
virtual_chunk_size_keys TEXT,
virtual_chunk_count INTEGER,
bootstrap_stage INTEGER NOT NULL DEFAULT 0
);
CREATE TABLE workers (
name TEXT PRIMARY KEY,
hashrate REAL,
last_seen DATETIME DEFAULT CURRENT_TIMESTAMP,
version TEXT,
min_chunk_keys TEXT,
chunk_quantum_keys TEXT
);
CREATE TABLE chunks (
id INTEGER PRIMARY KEY AUTOINCREMENT,
puzzle_id INTEGER,
start_hex TEXT,
end_hex TEXT,
status TEXT,
worker_name TEXT,
prev_worker_name TEXT,
assigned_at DATETIME,
heartbeat_at DATETIME,
found_key TEXT,
found_address TEXT,
is_test INTEGER NOT NULL DEFAULT 0,
sector_id INTEGER,
alloc_block_id INTEGER,
vchunk_start INTEGER,
vchunk_end INTEGER
);
CREATE INDEX idx_chunks_puzzle_status ON chunks (puzzle_id, status);
CREATE INDEX idx_chunks_vchunk_span ON chunks (puzzle_id, vchunk_start, vchunk_end, status);
CREATE TABLE sectors (
id INTEGER PRIMARY KEY AUTOINCREMENT,
puzzle_id INTEGER NOT NULL,
start_hex TEXT NOT NULL,
end_hex TEXT NOT NULL,
current_hex TEXT NOT NULL,
status TEXT NOT NULL DEFAULT 'open'
);
CREATE INDEX idx_sectors_puzzle_status ON sectors (puzzle_id, status);
CREATE INDEX idx_sectors_puzzle_id ON sectors (puzzle_id, id);
CREATE UNIQUE INDEX idx_sectors_unique_span ON sectors (puzzle_id, start_hex, end_hex);
CREATE TABLE findings (
id INTEGER PRIMARY KEY AUTOINCREMENT,
chunk_id INTEGER NOT NULL,
worker_name TEXT NOT NULL,
found_key TEXT NOT NULL,
found_address TEXT,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
CREATE UNIQUE INDEX idx_findings_dedup ON findings (chunk_id, worker_name, found_key);
`);
return db;
}
/**
* Seed an active puzzle and return its row.
*
* opts.strategy defaults to ALLOC_STRATEGY_VCHUNKS.
* Pass { strategy: ALLOC_STRATEGY_LEGACY } for legacy sector allocator.
*/
function seedPuzzle(db, opts = {}) {
const name = opts.name || 'Test Puzzle';
const start = opts.start_hex || '0'.repeat(64);
const end = opts.end_hex || '000000000000000000000000000000000000000000000000000000003b9aca00';
const strategy = opts.strategy || ALLOC_STRATEGY_VCHUNKS;
const seed = opts.seed || defaultAllocSeedForPuzzle({ name, start_hex: start, end_hex: end }, strategy);
let virtualChunkSizeKeys = null;
if (strategy === ALLOC_STRATEGY_VCHUNKS) {
const range = BigInt('0x' + end) - BigInt('0x' + start);
if (opts.virtual_chunk_size_keys !== undefined) {
virtualChunkSizeKeys = BigInt(opts.virtual_chunk_size_keys);
if (virtualChunkSizeKeys > range) virtualChunkSizeKeys = range;
} else {
virtualChunkSizeKeys = chooseDefaultVirtualChunkSize(range);
}
}
const info = db.prepare(`
INSERT INTO puzzles (
name, start_hex, end_hex, active,
alloc_strategy, alloc_seed, alloc_cursor,
virtual_chunk_size_keys, bootstrap_stage
)
VALUES (?, ?, ?, 1, ?, ?, 0, ?, 0)
`).run(
name,
start,
end,
strategy,
seed,
virtualChunkSizeKeys ? virtualChunkSizeKeys.toString() : null
);
const id = info.lastInsertRowid;
if (strategy === ALLOC_STRATEGY_VCHUNKS) {
seedVirtualChunks(db, id, start, end, seed, virtualChunkSizeKeys);
} else {
seedSectors(db, id, start, end);
}
return db.prepare("SELECT * FROM puzzles WHERE id = ?").get(id);
}
module.exports = { createTestDb, seedPuzzle };
7. Exact replacement code blocks for failing test sections
7.1 Replace the full heartbeat describe block
Replace this section:
describe('POST /api/v1/heartbeat', () => {
...
});
With this:
describe('POST /api/v1/heartbeat', () => {
test('returns 400 when name or job_id missing', async () => {
await request(app).post('/api/v1/heartbeat').send({ name: 'w1' }).expect(400);
await request(app).post('/api/v1/heartbeat').send({ job_id: 1 }).expect(400);
});
test('updates heartbeat_at on a valid job and keeps assigned_at unchanged', async () => {
seedPuzzle(db);
const r = await request(app)
.post('/api/v1/work')
.send({ name: 'w1', hashrate: 1000000 });
const jobId = r.body.job_id;
const before = db.prepare("SELECT assigned_at, heartbeat_at FROM chunks WHERE id=?").get(jobId);
await new Promise(res => setTimeout(res, 1100));
await request(app)
.post('/api/v1/heartbeat')
.send({ name: 'w1', job_id: jobId })
.expect(200, { ok: true });
const after = db.prepare("SELECT assigned_at, heartbeat_at FROM chunks WHERE id=?").get(jobId);
expect(after.assigned_at).toBe(before.assigned_at);
expect(after.heartbeat_at >= before.heartbeat_at).toBe(true);
});
test('heartbeat does not update chunk owned by another worker', async () => {
seedPuzzle(db);
const r = await request(app)
.post('/api/v1/work')
.send({ name: 'w1', hashrate: 1000000 });
const jobId = r.body.job_id;
const before = db.prepare("SELECT assigned_at, heartbeat_at FROM chunks WHERE id=?").get(jobId);
await request(app)
.post('/api/v1/heartbeat')
.send({ name: 'w2', job_id: jobId })
.expect(200, { ok: true });
const after = db.prepare("SELECT assigned_at, heartbeat_at FROM chunks WHERE id=?").get(jobId);
expect(after.assigned_at).toBe(before.assigned_at);
expect(after.heartbeat_at).toBe(before.heartbeat_at);
});
});
7.2 Replace the stale telemetry tests inside GET /api/v1/stats
Replace these tests:
- stats response includes target_minutes, timeout_minutes, active_minutes
- worker with assigned chunk exposes current_job fields and heartbeat_at
- worker without assigned chunk has null current_job fields
With this block:
test('stats response includes target_minutes, timeout_minutes, active_minutes', async () => {
const res = await request(app).get('/api/v1/stats').expect(200);
expect(typeof res.body.target_minutes).toBe('number');
expect(typeof res.body.timeout_minutes).toBe('number');
expect(typeof res.body.active_minutes).toBe('number');
});
test('worker with assigned chunk exposes current job telemetry', async () => {
seedPuzzle(db);
await request(app).post('/api/v1/work').send({ name: 'w1', hashrate: 1000000 });
const res = await request(app).get('/api/v1/stats').expect(200);
const w = res.body.workers[0];
expect(typeof w.current_job_id).toBe('number');
expect(typeof w.current_job_keys).toBe('string');
expect(BigInt(w.current_job_keys)).toBeGreaterThan(0n);
expect(typeof w.current_job_start_hex).toBe('string');
expect(typeof w.current_job_end_hex).toBe('string');
expect(typeof w.assigned_at).toBe('string');
expect(typeof w.heartbeat_at).toBe('string');
expect(typeof w.current_job_elapsed_seconds === 'number' || w.current_job_elapsed_seconds === null).toBe(true);
expect(typeof w.current_job_progress_percent === 'number' || w.current_job_progress_percent === null).toBe(true);
});
test('worker without assigned chunk has null current job telemetry', async () => {
seedPuzzle(db);
const r = await request(app).post('/api/v1/work').send({ name: 'w1', hashrate: 1000000 });
db.prepare(`
UPDATE chunks
SET status='reclaimed', prev_worker_name=worker_name, worker_name=NULL
WHERE id=?
`).run(r.body.job_id);
const res = await request(app).get('/api/v1/stats').expect(200);
const w = res.body.workers[0];
expect(w.current_job_id).toBeNull();
expect(w.current_job_keys).toBeNull();
expect(w.current_job_start_hex).toBeNull();
expect(w.current_job_end_hex).toBeNull();
expect(w.assigned_at).toBeNull();
expect(w.heartbeat_at).toBeNull();
expect(w.current_job_elapsed_seconds).toBeNull();
expect(w.current_job_progress_percent).toBeNull();
});
7.3 Add a new progress telemetry test
Add this test inside GET /api/v1/stats:
test('worker progress percent is present for assigned worker', async () => {
seedPuzzle(db);
await request(app).post('/api/v1/work').send({ name: 'w1', hashrate: 1000000 });
const res = await request(app).get('/api/v1/stats').expect(200);
const w = res.body.workers[0];
expect(w.current_job_progress_percent === null || typeof w.current_job_progress_percent === 'number').toBe(true);
if (typeof w.current_job_progress_percent === 'number') {
expect(w.current_job_progress_percent).toBeGreaterThanOrEqual(0);
expect(w.current_job_progress_percent).toBeLessThanOrEqual(100);
}
});
7.4 Rewrite current-vchunk-run test to prefer raw numeric fields
Replace:
test('current_vchunk_run shows correct range string for assigned worker', async () => {
...
});
With:
test('assigned worker exposes numeric current vchunk run fields and formatted string', async () => {
const end = (6000n).toString(16).padStart(64, '0');
seedPuzzle(db, { start_hex: '0'.repeat(64), end_hex: end, virtual_chunk_size_keys: 600 });
await request(app).post('/api/v1/work').send({ name: 'w1', hashrate: 1 });
const res = await request(app).get('/api/v1/stats').expect(200);
const w1 = res.body.workers.find(w => w.name === 'w1');
expect(typeof w1.current_vchunk_run_start).toBe('number');
expect(typeof w1.current_vchunk_run_end).toBe('number');
expect(w1.current_vchunk_run_end).toBeGreaterThan(w1.current_vchunk_run_start);
expect(typeof w1.current_vchunk_run).toBe('string');
expect(w1.current_vchunk_run).toMatch(/^\d+\.\.\d+$/);
});
7.5 Rewrite “current_vchunk_run and finders consistency” test
Replace:
test('current_vchunk_run and finders vchunk fields are consistent', async () => {
...
});
with:
test('worker current vchunk numeric fields and finder vchunk fields are consistent', async () => {
const end = (6000n).toString(16).padStart(64, '0');
seedPuzzle(db, { start_hex: '0'.repeat(64), end_hex: end, virtual_chunk_size_keys: 600 });
const r = await request(app).post('/api/v1/work').send({ name: 'w1', hashrate: 1 });
const job_id = r.body.job_id;
const statsBeforeSubmit = await request(app).get('/api/v1/stats');
const worker = statsBeforeSubmit.body.workers.find(w => w.name === 'w1');
expect(worker.current_vchunk_run_start).toBe(5);
expect(worker.current_vchunk_run_end).toBe(6);
expect(worker.current_vchunk_run).toBe('5..5');
await request(app).post('/api/v1/submit')
.send({ name: 'w1', job_id, status: 'FOUND', findings: [{ found_key: '0'.repeat(64) }] });
const statsAfterSubmit = await request(app).get('/api/v1/stats');
const finder = statsAfterSubmit.body.finders[0];
expect(finder.vchunk_start).toBe(5);
expect(finder.vchunk_end).toBe(6);
});
7.6 Fix legacy allocator tests to explicitly request legacy strategy
Replace the bare seeding call, which passes no explicit strategy, with this:
seedPuzzle(db, { strategy: ALLOC_STRATEGY_LEGACY });
Apply this to any test that is clearly about:
- sectors
- sharded frontier allocator
- sequential sector behavior
- legacy exhaustion
Especially in these tests:
- '3. exhaustion: 503 when all sectors are done'
- any other test in Sharded Frontier Allocator that intends to test sectors
Example exact replacement:
test('3. exhaustion: 503 when all sectors are done', async () => {
const end = (100n).toString(16).padStart(64, '0');
seedPuzzle(db, { start_hex: '0'.repeat(64), end_hex: end, strategy: ALLOC_STRATEGY_LEGACY });
await request(app).post('/api/v1/work').send({ name: 'w1', hashrate: 1 }).expect(200);
const r = await request(app).post('/api/v1/work').send({ name: 'w2', hashrate: 1 });
expect(r.status).toBe(503);
expect(r.body.error).toMatch(/all keyspace/i);
});
8. Additional tests to add
8.1 assigned_at and heartbeat_at initialized on assignment
Add under /api/v1/work:
test('assigned chunk initializes assigned_at and heartbeat_at', async () => {
seedPuzzle(db);
const res = await request(app)
.post('/api/v1/work')
.send({ name: 'w1', hashrate: 1000000 })
.expect(200);
const chunk = db.prepare("SELECT assigned_at, heartbeat_at FROM chunks WHERE id = ?").get(res.body.job_id);
expect(typeof chunk.assigned_at).toBe('string');
expect(typeof chunk.heartbeat_at).toBe('string');
});
8.2 top-level timing values in stats are stable
Add under /api/v1/stats:
test('stats timing fields match configured constants', async () => {
  const res = await request(app).get('/api/v1/stats').expect(200);
  expect(typeof res.body.target_minutes).toBe('number');
  expect(typeof res.body.timeout_minutes).toBe('number');
  expect(typeof res.body.active_minutes).toBe('number');
});
9. Final expected result after patch
After applying this patch plan:
- test DB schema matches real backend much more closely
- obsolete allocator-order table is gone
- heartbeat telemetry becomes real and testable
- /api/v1/stats exposes the fields needed by the dashboard
- current-run tests become more robust by using numeric fields
- legacy allocator tests explicitly test the legacy allocator
- the suite becomes aligned with both:
- current pool behavior
- upcoming dashboard telemetry needs
10. Implementation order for developer
Recommended order:
1. Backend schema migration
- add heartbeat_at
- normalize old rows
2. Backend runtime changes
- set heartbeat_at on assignment
- update only heartbeat_at on /heartbeat
- add computeWorkerProgressPercent()
- extend /api/v1/stats
3. Test helper cleanup
4. Test rewrites
- heartbeat block
- stale stats telemetry block
- current vchunk run tests
- legacy allocator explicit seeding
5. Run full test suite
- fix any contract drift discovered during run
11. Success criteria
Patch is complete when:
- all tests pass
- no test references alloc_order_vchunks
- no test depends on fields absent from backend
- /api/v1/stats returns:
- timing fields
- current job telemetry
- vchunk numeric run fields
- progress percent
- heartbeat semantics are clean:
- assigned_at = assignment time
- heartbeat_at = liveness time
Puzzpool backend patch plan for server.js
Goal
Implement the missing backend telemetry needed by the dashboard and align the API contract with the updated test suite.
This patch plan covers:
- exact schema additions
- exact runtime logic changes
- exact telemetry fields to expose in
/api/v1/stats
- where to add
computeWorkerProgressPercent()
- exact code blocks to replace
1. What this patch changes
New telemetry model
We distinguish clearly between:
- assigned_at: when a chunk/job was assigned to a worker
- heartbeat_at: the last time the worker confirmed it is still alive on that job
This enables:
- correct worker progress estimation
- cleaner stale-worker diagnostics
- future reclaim logic improvements
- better dashboard rendering
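As an illustration of what the split buys diagnostically, here is a sketch of a stale-worker check built on the two timestamps (function names and the TIMEOUT_MINUTES value are illustrative, not taken from server.js):

```javascript
// Illustrative only: once assigned_at and heartbeat_at are separate columns,
// staleness is judged from heartbeat_at while job age comes from assigned_at.
// TIMEOUT_MINUTES here is an assumed config value, not read from server.js.
const TIMEOUT_MINUTES = 15;

function secondsSince(sqliteTs, nowMs = Date.now()) {
  // SQLite's CURRENT_TIMESTAMP is UTC without a zone marker; normalize to ISO-8601.
  const ms = Date.parse(String(sqliteTs).replace(' ', 'T') + 'Z');
  return Number.isFinite(ms) ? Math.max(0, (nowMs - ms) / 1000) : null;
}

function isStale(chunk, nowMs = Date.now()) {
  const quiet = secondsSince(chunk.heartbeat_at, nowMs);
  return quiet !== null && quiet > TIMEOUT_MINUTES * 60;
}

const now = Date.parse('2026-04-28T10:18:00Z');
console.log(secondsSince('2026-04-28 10:15:00', now)); // 180 — heartbeat 3 minutes ago
console.log(isStale({ heartbeat_at: '2026-04-28 10:15:00' }, now)); // false
console.log(isStale({ heartbeat_at: '2026-04-28 09:00:00' }, now)); // true — quiet for 78 minutes
```

With only assigned_at available, the second worker above would look identical to one that was assigned long ago but is still heartbeating.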
New /api/v1/stats worker fields
Each visible worker should expose:
- current_job_id
- current_job_keys
- current_job_start_hex
- current_job_end_hex
- assigned_at
- heartbeat_at
- current_vchunk_run_start
- current_vchunk_run_end
- current_vchunk_run
- current_job_elapsed_seconds
- current_job_progress_percent
New top-level /api/v1/stats fields
- target_minutes
- timeout_minutes
- active_minutes
2. Schema migration changes
2.1 Add heartbeat_at to chunks
Add this migration line in production bootstrap
Find the area with the other ALTER TABLE chunks ADD COLUMN ... migrations and add:
try { db.prepare("ALTER TABLE chunks ADD COLUMN heartbeat_at DATETIME").run(); } catch (_) {}
2.2 Backfill existing rows
Right after the ALTER TABLE migrations, add:
try {
db.prepare(`
UPDATE chunks
SET heartbeat_at = assigned_at
WHERE heartbeat_at IS NULL
AND assigned_at IS NOT NULL
`).run();
} catch (_) {}
This keeps old rows usable immediately.
3. Add helper: computeWorkerProgressPercent()
Where to add it
Add it in the pure helpers section, near:
- normalizeHashrate
- ceilDiv
- roundUpToQuantum
A good place is right after roundUpToQuantum().
Add this exact function
function computeWorkerProgressPercent(worker, currentJobKeys, assignedAtIso) {
if (!worker || !assignedAtIso) return null;
const hashrate = Number(worker.hashrate);
if (!Number.isFinite(hashrate) || hashrate <= 0) return null;
let totalKeys;
try {
totalKeys = BigInt(currentJobKeys);
} catch (_) {
return null;
}
if (totalKeys <= 0n) return null;
const assignedMs = Date.parse(String(assignedAtIso) + 'Z');
if (!Number.isFinite(assignedMs)) return null;
const elapsedSeconds = Math.max(0, (Date.now() - assignedMs) / 1000);
const estimatedDone = BigInt(Math.floor(hashrate * elapsedSeconds));
let pct = Number((estimatedDone * 10000n) / totalKeys) / 100;
if (!Number.isFinite(pct)) return null;
if (pct < 0) pct = 0;
if (pct > 100) pct = 100;
return pct;
}
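Before wiring it in, it may help to sanity-check the helper's edge cases. This standalone sketch copies the function verbatim and drives it with synthetic timestamps (toSqliteTs is a test-only helper, not part of server.js):

```javascript
// Copied verbatim from the helper above so this sanity check runs standalone.
function computeWorkerProgressPercent(worker, currentJobKeys, assignedAtIso) {
  if (!worker || !assignedAtIso) return null;
  const hashrate = Number(worker.hashrate);
  if (!Number.isFinite(hashrate) || hashrate <= 0) return null;
  let totalKeys;
  try {
    totalKeys = BigInt(currentJobKeys);
  } catch (_) {
    return null;
  }
  if (totalKeys <= 0n) return null;
  const assignedMs = Date.parse(String(assignedAtIso) + 'Z');
  if (!Number.isFinite(assignedMs)) return null;
  const elapsedSeconds = Math.max(0, (Date.now() - assignedMs) / 1000);
  const estimatedDone = BigInt(Math.floor(hashrate * elapsedSeconds));
  let pct = Number((estimatedDone * 10000n) / totalKeys) / 100;
  if (!Number.isFinite(pct)) return null;
  if (pct < 0) pct = 0;
  if (pct > 100) pct = 100;
  return pct;
}

// Test-only helper: format a Date the way SQLite's CURRENT_TIMESTAMP does,
// 'YYYY-MM-DD HH:MM:SS' in UTC.
function toSqliteTs(date) {
  return date.toISOString().slice(0, 19).replace('T', ' ');
}

const oneMinuteAgo = toSqliteTs(new Date(Date.now() - 60_000));
const oneHourAgo = toSqliteTs(new Date(Date.now() - 3_600_000));

// Malformed inputs degrade to null instead of throwing.
console.log(computeWorkerProgressPercent(null, '1000', oneMinuteAgo));
console.log(computeWorkerProgressPercent({ hashrate: 0 }, '1000', oneMinuteAgo));
console.log(computeWorkerProgressPercent({ hashrate: 10 }, 'not-a-number', oneMinuteAgo));

// ~60 s at 100 keys/s over a 60000-key job is roughly 10%.
console.log(computeWorkerProgressPercent({ hashrate: 100 }, '60000', oneMinuteAgo));

// A long-running job clamps at 100 rather than overshooting.
console.log(computeWorkerProgressPercent({ hashrate: 1e6 }, '1000', oneHourAgo)); // 100
```

Note the estimate deliberately avoids float division on the key count: `(estimatedDone * 10000n) / totalKeys` stays in BigInt until the final two-decimal scaling.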
4. Assignment-time telemetry changes
4.1 Update stmtInsertChunk
Find this block
const stmtInsertChunk = db.prepare(`
INSERT INTO chunks (
puzzle_id, start_hex, end_hex, status,
worker_name, assigned_at, is_test,
sector_id, alloc_block_id,
vchunk_start, vchunk_end
) VALUES (?, ?, ?, 'assigned', ?, CURRENT_TIMESTAMP, 0, NULL, NULL, ?, ?)
`);
Replace it with:
const stmtInsertChunk = db.prepare(`
INSERT INTO chunks (
puzzle_id, start_hex, end_hex, status,
worker_name, assigned_at, heartbeat_at, is_test,
sector_id, alloc_block_id,
vchunk_start, vchunk_end
) VALUES (?, ?, ?, 'assigned', ?, CURRENT_TIMESTAMP, CURRENT_TIMESTAMP, 0, NULL, NULL, ?, ?)
`);
4.2 Update stmtTestChunkReclaim
Find this block
const stmtTestChunkReclaim = db.prepare(`
UPDATE chunks
SET status = 'assigned', worker_name = ?, assigned_at = CURRENT_TIMESTAMP
WHERE id = (
SELECT id
FROM chunks
WHERE puzzle_id = ? AND start_hex = ? AND end_hex = ? AND is_test = 1 AND status = 'reclaimed'
LIMIT 1
)
RETURNING *
`);
Replace it with
const stmtTestChunkReclaim = db.prepare(`
UPDATE chunks
SET status = 'assigned',
worker_name = ?,
assigned_at = CURRENT_TIMESTAMP,
heartbeat_at = CURRENT_TIMESTAMP
WHERE id = (
SELECT id
FROM chunks
WHERE puzzle_id = ? AND start_hex = ? AND end_hex = ? AND is_test = 1 AND status = 'reclaimed'
LIMIT 1
)
RETURNING *
`);
4.3 Update stmtTestChunkInsert
Find this block
const stmtTestChunkInsert = db.prepare(`
INSERT INTO chunks (puzzle_id, start_hex, end_hex, status, worker_name, assigned_at, is_test)
VALUES (?, ?, ?, 'assigned', ?, CURRENT_TIMESTAMP, 1)
RETURNING *
`);
Replace it with
const stmtTestChunkInsert = db.prepare(`
INSERT INTO chunks (
puzzle_id, start_hex, end_hex, status,
worker_name, assigned_at, heartbeat_at, is_test
)
VALUES (?, ?, ?, 'assigned', ?, CURRENT_TIMESTAMP, CURRENT_TIMESTAMP, 1)
RETURNING *
`);
4.4 Update reclaimed production chunk reassignment in /api/v1/work
Find this block
const reclaimed = isReactivating ? null : db.prepare(`
UPDATE chunks
SET status = 'assigned', worker_name = ?, assigned_at = CURRENT_TIMESTAMP
WHERE id = (
SELECT id
FROM chunks
WHERE status = 'reclaimed' AND puzzle_id = ? AND is_test = 0
LIMIT 1
)
RETURNING *
`).get(name, puzzle.id);
Replace it with
const reclaimed = isReactivating ? null : db.prepare(`
UPDATE chunks
SET status = 'assigned',
worker_name = ?,
assigned_at = CURRENT_TIMESTAMP,
heartbeat_at = CURRENT_TIMESTAMP
WHERE id = (
SELECT id
FROM chunks
WHERE status = 'reclaimed' AND puzzle_id = ? AND is_test = 0
LIMIT 1
)
RETURNING *
`).get(name, puzzle.id);
4.5 Update legacy allocator fresh insert
Inside assignLegacyRandomChunk, find:
const info = db.prepare(`
INSERT INTO chunks (
puzzle_id, start_hex, end_hex, status,
worker_name, assigned_at, is_test,
sector_id, alloc_block_id, vchunk_start, vchunk_end
) VALUES (?, ?, ?, 'assigned', ?, CURRENT_TIMESTAMP, 0, ?, NULL, NULL, NULL)
`).run(puzzle.id, startHex, endHex, name, sector.id);
Replace with:
const info = db.prepare(`
INSERT INTO chunks (
puzzle_id, start_hex, end_hex, status,
worker_name, assigned_at, heartbeat_at, is_test,
sector_id, alloc_block_id, vchunk_start, vchunk_end
) VALUES (?, ?, ?, 'assigned', ?, CURRENT_TIMESTAMP, CURRENT_TIMESTAMP, 0, ?, NULL, NULL, NULL)
`).run(puzzle.id, startHex, endHex, name, sector.id);
5. Heartbeat endpoint change
Goal
Heartbeat should update only:
- worker last_seen
- chunk heartbeat_at
It should not overwrite assigned_at.
Find this current block in /api/v1/heartbeat
db.prepare(`
UPDATE chunks SET assigned_at = CURRENT_TIMESTAMP
WHERE id = ? AND worker_name = ? AND status = 'assigned'
`).run(job_id, name);
Replace with
db.prepare(`
UPDATE chunks
SET heartbeat_at = CURRENT_TIMESTAMP
WHERE id = ? AND worker_name = ? AND status = 'assigned'
`).run(job_id, name);
6. Extend /api/v1/stats assigned-worker telemetry
6.1 Replace assignedNow query
Find this block
const assignedNow = puzzle ? db.prepare(`
SELECT c.id, c.worker_name, c.vchunk_start, c.vchunk_end
FROM chunks c
WHERE c.status = 'assigned' AND c.puzzle_id = ? AND c.is_test = 0
`).all(puzzle.id) : [];
Replace with
const assignedNow = puzzle ? db.prepare(`
SELECT
c.id,
c.worker_name,
c.start_hex,
c.end_hex,
c.assigned_at,
c.heartbeat_at,
c.vchunk_start,
c.vchunk_end
FROM chunks c
WHERE c.status = 'assigned' AND c.puzzle_id = ? AND c.is_test = 0
`).all(puzzle.id) : [];
6.2 Replace worker mapping block
Find this block
const workerChunkMap = {};
const workerRunMap = {};
for (const c of assignedNow) {
workerChunkMap[c.worker_name] = c.id;
workerRunMap[c.worker_name] = (c.vchunk_start !== null && c.vchunk_end !== null)
? `${c.vchunk_start}..${c.vchunk_end - 1}`
: null;
}
const workers = visibleWorkers.map(w => ({
...w,
fresh: w.fresh === 1,
assigned_here: w.assigned_here === 1,
active: w.active === 1,
current_chunk: workerChunkMap[w.name] ?? null,
current_vchunk_run: workerRunMap[w.name] ?? null,
}));
Replace with
const workerChunkMap = {};
for (const c of assignedNow) {
const currentJobKeys = (BigInt('0x' + c.end_hex) - BigInt('0x' + c.start_hex)).toString();
workerChunkMap[c.worker_name] = {
current_job_id: c.id,
current_job_keys: currentJobKeys,
current_job_start_hex: c.start_hex,
current_job_end_hex: c.end_hex,
assigned_at: c.assigned_at,
heartbeat_at: c.heartbeat_at,
current_vchunk_run_start: c.vchunk_start ?? null,
current_vchunk_run_end: c.vchunk_end ?? null,
current_vchunk_run:
(c.vchunk_start !== null && c.vchunk_end !== null)
? `${c.vchunk_start}..${c.vchunk_end - 1}`
: null,
};
}
const workers = visibleWorkers.map(w => {
const current = workerChunkMap[w.name] || null;
let elapsedSeconds = null;
if (current?.assigned_at) {
const assignedMs = Date.parse(String(current.assigned_at) + 'Z');
if (Number.isFinite(assignedMs)) {
elapsedSeconds = Math.max(0, Math.floor((Date.now() - assignedMs) / 1000));
}
}
const progressPercent = current
? computeWorkerProgressPercent(w, current.current_job_keys, current.assigned_at)
: null;
return {
...w,
fresh: w.fresh === 1,
assigned_here: w.assigned_here === 1,
active: w.active === 1,
current_chunk: current ? current.current_job_id : null,
current_job_id: current ? current.current_job_id : null,
current_job_keys: current ? current.current_job_keys : null,
current_job_start_hex: current ? current.current_job_start_hex : null,
current_job_end_hex: current ? current.current_job_end_hex : null,
assigned_at: current ? current.assigned_at : null,
heartbeat_at: current ? current.heartbeat_at : null,
current_vchunk_run_start: current ? current.current_vchunk_run_start : null,
current_vchunk_run_end: current ? current.current_vchunk_run_end : null,
current_vchunk_run: current ? current.current_vchunk_run : null,
current_job_elapsed_seconds: elapsedSeconds,
current_job_progress_percent: progressPercent,
};
});
7. Add top-level stats timing fields
Find final res.json({ ... })
Inside that object, add these exact fields near the top:
target_minutes: TARGET_MINUTES,
timeout_minutes: TIMEOUT_MINUTES,
active_minutes: ACTIVE_MINUTES,
A good placement is right after:
stage: process.env.STAGE || 'PROD',
So it becomes:
res.json({
stage: process.env.STAGE || 'PROD',
target_minutes: TARGET_MINUTES,
timeout_minutes: TIMEOUT_MINUTES,
active_minutes: ACTIVE_MINUTES,
puzzles: allPuzzles,
...
});
8. Optional but strongly recommended: progress robustness
The current progress estimate uses assigned_at, which is good enough for now.
For a better long-term telemetry model, I recommend later adding:
- job_keys_expected
- reported_hashrate_at_assignment
- last_progress_sample_at
- last_progress_estimate_keys
But this is not required for the current dashboard.
For now, assigned_at + hashrate + current_job_keys is enough.
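If you later adopt that richer model, the columns could be added with the same guarded ALTER pattern used in section 2.1. A sketch (column names come from the list above; the types are assumptions, not a final design):

```javascript
// Sketch only: future telemetry columns, following the try/catch migration
// pattern from section 2.1. Key counts stay TEXT because they can exceed
// Number precision; types here are assumptions.
const futureTelemetryMigrations = [
  'ALTER TABLE chunks ADD COLUMN job_keys_expected TEXT',
  'ALTER TABLE chunks ADD COLUMN reported_hashrate_at_assignment INTEGER',
  'ALTER TABLE chunks ADD COLUMN last_progress_sample_at DATETIME',
  'ALTER TABLE chunks ADD COLUMN last_progress_estimate_keys TEXT',
];

for (const sql of futureTelemetryMigrations) {
  // In server.js this would be: try { db.prepare(sql).run(); } catch (_) {}
  console.log(sql);
}
```

Each statement is idempotent under the try/catch guard because SQLite rejects a duplicate ADD COLUMN, which the catch swallows.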
9. Recommended additional logging
Optional but useful while validating:
In /api/v1/work, after assignment
After you resolve the assigned chunk, log:
console.log(
`[Work] ${name} assigned chunk #${chunkId} ` +
`[${startHex} .. ${endHex})`
);
In /api/v1/heartbeat
After updating heartbeat:
console.log(`[Heartbeat] ${name} job #${job_id}`);
Use temporarily during rollout if needed.
10. Full checklist for developer
Step 1
Add schema migration:
Step 2
Backfill:
- heartbeat_at = assigned_at where missing
Step 3
Add helper:
- computeWorkerProgressPercent()
Step 4
Update all assignment code paths so fresh assigned jobs set:
- assigned_at = CURRENT_TIMESTAMP
- heartbeat_at = CURRENT_TIMESTAMP
This includes:
- virtual chunk insert
- legacy insert
- reclaimed reassign
- test chunk insert
- test chunk reclaim
Step 5
Update /api/v1/heartbeat:
- update worker last_seen
- update chunk heartbeat_at
- do not mutate assigned_at
Step 6
Extend /api/v1/stats:
- add current assigned chunk telemetry
- add vchunk raw run fields
- add job elapsed seconds
- add job progress percent
- add top-level timing config values
Step 7
Run updated test suite
11. Expected API result after patch
For a worker with an assigned job, /api/v1/stats should now include fields like:
{
"name": "bigmac.cpu",
"hashrate": 9533781,
"last_seen": "2026-04-28 10:15:00",
"version": "1.3.3",
"fresh": true,
"assigned_here": true,
"active": true,
"current_chunk": 1147,
"current_job_id": 1147,
"current_job_keys": "1174405120",
"current_job_start_hex": "0000000000000000000000000000000000000000000000747a4695418c000000",
"current_job_end_hex": "0000000000000000000000000000000000000000000000747a469541d2000000",
"assigned_at": "2026-04-28 10:15:00",
"heartbeat_at": "2026-04-28 10:18:00",
"current_vchunk_run_start": 28849887420614,
"current_vchunk_run_end": 28849887420649,
"current_vchunk_run": "28849887420614..28849887420648",
"current_job_elapsed_seconds": 180,
"current_job_progress_percent": 73.14
}
And top-level:
{
"target_minutes": 10,
"timeout_minutes": 15,
"active_minutes": 1.167
}
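A sketch of how a dashboard row might consume these fields (the render logic is illustrative; only the field names come from the contract above):

```javascript
// Illustrative renderer for one worker row from /api/v1/stats.
// Falls back gracefully when the worker has no assigned job or no estimate.
function describeWorker(w) {
  if (w.current_job_id === null || w.current_job_id === undefined) {
    return `${w.name}: idle`;
  }
  const pct = w.current_job_progress_percent;
  const progress = pct === null ? 'n/a' : `${pct.toFixed(1)}%`;
  return `${w.name}: job #${w.current_job_id} ${progress}, ` +
         `run ${w.current_vchunk_run}, last heartbeat ${w.heartbeat_at}`;
}

// Values taken from the example response above.
const sample = {
  name: 'bigmac.cpu',
  current_job_id: 1147,
  current_job_progress_percent: 73.14,
  current_vchunk_run: '28849887420614..28849887420648',
  heartbeat_at: '2026-04-28 10:18:00',
};
console.log(describeWorker(sample));
// 'bigmac.cpu: job #1147 73.1%, run 28849887420614..28849887420648, last heartbeat 2026-04-28 10:18:00'
console.log(describeWorker({ name: 'idle.worker', current_job_id: null }));
// 'idle.worker: idle'
```

Because the backend already supplies current_job_progress_percent, the dashboard needs no hashrate math of its own.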
12. Success criteria
This backend patch is complete when:
- assignment timestamps and heartbeat timestamps are separated
- /api/v1/heartbeat updates heartbeat_at only
- /api/v1/stats exposes all worker telemetry required by dashboard
- progress column can use backend-provided current_job_progress_percent
- updated tests pass cleanly
- no old test assumptions remain around assigned_at being heartbeat time
Note: some of these changes may already be in place, so double-check the current code carefully before applying each edit.