4 changes: 4 additions & 0 deletions README.md
@@ -1,2 +1,6 @@
# deepevents.ai
deepevents.ai main codebase

## Scientific bounty modules

- [Scientific bounty sponsor scorecard](scientific-bounty-sponsor-scorecard/README.md) - solver-facing sponsor reliability scoring for funding proof, review responsiveness, rubric clarity, payout history, dispute handling, amendment volatility, and IP/NDA readiness.
54 changes: 54 additions & 0 deletions scientific-bounty-sponsor-scorecard/README.md
@@ -0,0 +1,54 @@
# Scientific Bounty Sponsor Scorecard

This module adds a solver-facing reliability layer for the Scientific Bounty System. It helps researchers decide whether a posted bounty is ready to accept before they invest time, while giving sponsors concrete actions to improve trust.

The scorecard is deliberately distinct from the intake, arbitration, appeals, escrow-settlement, reproducibility-audit, anti-collusion, workspace-privacy, and amendment-control modules: it assesses sponsor reliability and challenge readiness before a solver commits.

## What It Evaluates

- Verified funding or escrow coverage for the current prize
- Review SLA and historical sponsor responsiveness
- Rubric completeness, measurable criteria, tie-break handling, and evidence expectations
- Amendment volatility after launch and whether solvers receive material-change protections
- Payout history, average payout lag, and paid-award evidence
- Dispute responsiveness and unresolved dispute exposure
- IP/NDA clarity, including whether unpaid work stays with the solver
- Public-safe audit digest that excludes private sponsor notes and payment/KYC fields
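
The redaction rule in the last bullet can be sketched as a small helper. The field names below (`privateNotes`, `bankAccount`, `taxId`, `internalContactEmail`, `identityDocuments`) mirror `data/sample-sponsor-input.json`; the helper name and shape are illustrative, not the module's actual export:

```javascript
// Illustrative sketch: strip private sponsor fields before anything is exported.
// The field list mirrors data/sample-sponsor-input.json; the real module may differ.
const PRIVATE_FIELDS = new Set([
  "privateNotes",
  "bankAccount",
  "taxId",
  "internalContactEmail",
  "identityDocuments"
]);

function sanitizeSponsor(sponsor) {
  // Keep only entries whose key is not in the private-field set.
  return Object.fromEntries(
    Object.entries(sponsor).filter(([key]) => !PRIVATE_FIELDS.has(key))
  );
}

const clean = sanitizeSponsor({
  sponsorId: "helix-biofund",
  privateNotes: "internal budget notes",
  bankAccount: "do-not-export"
});
// `clean` keeps sponsorId only; both private fields are dropped.
```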

## Run Locally

```bash
npm run check
npm test
npm run demo
```

## API

```js
import { buildSponsorScorecard } from "./src/sponsor-scorecard.js";
import sample from "./data/sample-sponsor-input.json" with { type: "json" };

const report = buildSponsorScorecard(sample, {
  generatedAt: "2026-05-16T15:30:00.000Z"
});

console.log(report.summary);
```

## Outputs

- `summary`: portfolio counts and tier distribution
- `scorecards`: per-challenge score, tier, axis breakdown, findings, sponsor actions, solver guidance
- `auditDigest`: deterministic SHA-256 digest over public-safe report fields
- `sanitizedInputEcho`: redacted snapshot proving private notes and payment/KYC fields were not exported
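
A deterministic SHA-256 digest over public-safe fields can be sketched with `node:crypto` by hashing a canonical serialization with recursively sorted keys. The function names here are illustrative assumptions, not the module's API:

```javascript
import { createHash } from "node:crypto";

// Canonicalize: sort object keys recursively so equal data always
// serializes to the same string, regardless of key insertion order.
function canonicalize(value) {
  if (Array.isArray(value)) return value.map(canonicalize);
  if (value && typeof value === "object") {
    return Object.fromEntries(
      Object.keys(value).sort().map((k) => [k, canonicalize(value[k])])
    );
  }
  return value;
}

// Hypothetical digest helper: hash the canonical JSON of the public report.
function auditDigest(publicReport) {
  const payload = JSON.stringify(canonicalize(publicReport));
  return createHash("sha256").update(payload).digest("hex");
}

// Same content in a different key order yields the same digest.
const a = auditDigest({ score: 82, tier: "trusted" });
const b = auditDigest({ tier: "trusted", score: 82 });
// a === b
```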

## Demo Artifacts

- `docs/demo.svg`
- `docs/demo.mp4`
- `docs/demo.gif`

## Design Notes

The scoring axes follow established challenge-prize guidance: evaluation criteria should be measurable, judging protocols should be transparent, conflicts/NDA terms should be managed up front, and participants should understand award and eligibility conditions before doing work. See `docs/requirement-map.md` for the mapping to issue #18 and the public references reviewed.
195 changes: 195 additions & 0 deletions scientific-bounty-sponsor-scorecard/data/sample-sponsor-input.json
@@ -0,0 +1,195 @@
{
  "portfolioId": "scientific-bounty-sponsor-scorecard-demo",
  "asOf": "2026-05-16T15:30:00.000Z",
  "sponsors": [
    {
      "sponsorId": "helix-biofund",
      "displayName": "Helix Biofund",
      "sector": "biotech",
      "privateNotes": "Finance lead phone number and internal budget notes are intentionally private.",
      "bankAccount": "do-not-export",
      "taxId": "do-not-export",
      "history": {
        "completedChallenges": 9,
        "cancelledAfterStart": 0,
        "paidAwards": 12,
        "averagePayoutLagDays": 3,
        "medianFirstResponseHours": 12,
        "onTimeReviewRate": 0.94,
        "materialAmendmentsAfterLaunch": 1,
        "disputesOpened": 1,
        "disputesResolvedWithinSla": 1,
        "participantFeedbackAverage": 4.8
      },
      "challengePosts": [
        {
          "challengeId": "single-cell-biomarker-2026",
          "title": "Identify single-cell biomarker candidates",
          "prizeUsd": 18000,
          "visibility": "private",
          "funding": {
            "status": "escrow_verified",
            "coverageRatio": 1,
            "verifiedAt": "2026-05-12T09:00:00.000Z"
          },
          "reviewSlaDays": 7,
          "expectedFirstResponseHours": 24,
          "rubric": {
            "criteria": [
              { "name": "biological plausibility", "weight": 35, "measurable": true },
              { "name": "model reproducibility", "weight": 30, "measurable": true },
              { "name": "clinical literature support", "weight": 20, "measurable": true },
              { "name": "documentation quality", "weight": 15, "measurable": true }
            ],
            "evidenceRequirements": ["notebook", "dataset manifest", "model card"],
            "tieBreakerDefined": true,
            "judgingProtocolPublished": true
          },
          "timeline": {
            "milestones": 3,
            "submissionWindowDays": 42,
            "sponsorReviewWindowDays": 7
          },
          "amendmentPolicy": {
            "materialChangeNoticeDays": 5,
            "solverWithdrawalProtected": true,
            "maxMaterialAmendments": 1
          },
          "ipTerms": {
            "solverRetainsUntilPaid": true,
            "licenseOnPayout": "exclusive field-limited license",
            "ndaTemplatePublished": true
          },
          "disputePolicy": {
            "responseSlaDays": 5,
            "neutralReviewerAvailable": true
          }
        }
      ]
    },
    {
      "sponsorId": "aster-climate-lab",
      "displayName": "Aster Climate Lab",
      "sector": "climate",
      "internalContactEmail": "private@example.invalid",
      "identityDocuments": ["passport-redacted-placeholder"],
      "history": {
        "completedChallenges": 4,
        "cancelledAfterStart": 1,
        "paidAwards": 5,
        "averagePayoutLagDays": 14,
        "medianFirstResponseHours": 38,
        "onTimeReviewRate": 0.72,
        "materialAmendmentsAfterLaunch": 3,
        "disputesOpened": 2,
        "disputesResolvedWithinSla": 1,
        "participantFeedbackAverage": 3.9
      },
      "challengePosts": [
        {
          "challengeId": "regional-forecasting-ensemble",
          "title": "Regional forecasting ensemble for flood alerts",
          "prizeUsd": 9000,
          "visibility": "public",
          "funding": {
            "status": "escrow_verified",
            "coverageRatio": 0.85,
            "verifiedAt": "2026-05-13T12:00:00.000Z"
          },
          "reviewSlaDays": 14,
          "expectedFirstResponseHours": 48,
          "rubric": {
            "criteria": [
              { "name": "forecast skill", "weight": 45, "measurable": true },
              { "name": "regional transferability", "weight": 30, "measurable": true },
              { "name": "operator handoff", "weight": 20, "measurable": true }
            ],
            "evidenceRequirements": ["validation report", "deployment notes"],
            "tieBreakerDefined": false,
            "judgingProtocolPublished": true
          },
          "timeline": {
            "milestones": 2,
            "submissionWindowDays": 28,
            "sponsorReviewWindowDays": 14
          },
          "amendmentPolicy": {
            "materialChangeNoticeDays": 2,
            "solverWithdrawalProtected": true,
            "maxMaterialAmendments": 2
          },
          "ipTerms": {
            "solverRetainsUntilPaid": true,
            "licenseOnPayout": "non-exclusive implementation license",
            "ndaTemplatePublished": false
          },
          "disputePolicy": {
            "responseSlaDays": 10,
            "neutralReviewerAvailable": true
          }
        }
      ]
    },
    {
      "sponsorId": "quantum-north",
      "displayName": "Quantum North",
      "sector": "quantum",
      "privateNotes": "Do not show draft acquisition notes to solvers.",
      "history": {
        "completedChallenges": 1,
        "cancelledAfterStart": 2,
        "paidAwards": 1,
        "averagePayoutLagDays": 35,
        "medianFirstResponseHours": 96,
        "onTimeReviewRate": 0.35,
        "materialAmendmentsAfterLaunch": 6,
        "disputesOpened": 3,
        "disputesResolvedWithinSla": 0,
        "participantFeedbackAverage": 2.7
      },
      "challengePosts": [
        {
          "challengeId": "noise-reduction-kernel",
          "title": "Improve quantum noise-reduction kernel",
          "prizeUsd": 12000,
          "visibility": "private",
          "funding": {
            "status": "missing",
            "coverageRatio": 0,
            "verifiedAt": null
          },
          "reviewSlaDays": null,
          "expectedFirstResponseHours": 120,
          "rubric": {
            "criteria": [
              { "name": "accuracy", "weight": 50, "measurable": false },
              { "name": "speed", "weight": 25, "measurable": true }
            ],
            "evidenceRequirements": [],
            "tieBreakerDefined": false,
            "judgingProtocolPublished": false
          },
          "timeline": {
            "milestones": 1,
            "submissionWindowDays": 14,
            "sponsorReviewWindowDays": null
          },
          "amendmentPolicy": {
            "materialChangeNoticeDays": 0,
            "solverWithdrawalProtected": false,
            "maxMaterialAmendments": 4
          },
          "ipTerms": {
            "solverRetainsUntilPaid": false,
            "licenseOnPayout": "assignment on submission",
            "ndaTemplatePublished": false
          },
          "disputePolicy": {
            "responseSlaDays": null,
            "neutralReviewerAvailable": false
          }
        }
      ]
    }
  ]
}
Binary file added scientific-bounty-sponsor-scorecard/docs/demo.mp4
Binary file not shown.
45 changes: 45 additions & 0 deletions scientific-bounty-sponsor-scorecard/docs/demo.svg
33 changes: 33 additions & 0 deletions scientific-bounty-sponsor-scorecard/docs/requirement-map.md
@@ -0,0 +1,33 @@
# Requirement Map

Issue #18 describes a global research marketplace that needs sponsor challenge posting, secure participation, evaluation, arbitration, reward distribution, and IP options. This module adds a pre-commitment trust layer for solvers.

## Mapping To Issue #18

| Issue #18 area | Scorecard coverage |
| --- | --- |
| Challenge posting portal | Checks prize funding proof, timelines, rubric completeness, milestone schedule, NDA status, and IP term clarity before a challenge is marked solver-ready. |
| Evaluation criteria and scoring rubric | Scores whether criteria are measurable, weighted, evidence-backed, and include tie-break protocols. |
| Timeline and milestone deadlines | Flags missing review SLAs, unrealistic timelines, and absent sponsor-response windows. |
| Public vs private challenges and NDA support | Rates whether private challenges provide clear NDA scope and non-sensitive public summaries. |
| Submission engine trust | Gives solvers a readiness signal before creating a secure workspace or uploading private work. |
| Arbitration and reward distribution | Uses sponsor payout and dispute history as pre-commitment risk signals, without duplicating arbitration or escrow-settlement logic. |
| IP management options | Blocks or warns when IP transfer can occur before payment or when terms are missing. |
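
The IP row above (block when transfer can precede payment, warn when terms are thin) can be sketched as a rule over the `ipTerms` fields that appear in the sample input. The function name and finding shape are assumptions, not the module's actual API:

```javascript
// Illustrative pre-commitment IP check; severity labels and shape are assumptions.
// Field names (solverRetainsUntilPaid, ndaTemplatePublished) match the sample input.
function ipFindings(ipTerms) {
  const findings = [];
  if (!ipTerms) {
    findings.push({ severity: "block", note: "IP terms missing" });
  } else if (ipTerms.solverRetainsUntilPaid === false) {
    findings.push({ severity: "block", note: "IP can transfer before payment" });
  } else if (!ipTerms.ndaTemplatePublished) {
    findings.push({ severity: "warn", note: "NDA template not published" });
  }
  return findings;
}

// e.g. "assignment on submission" with solverRetainsUntilPaid: false → block
```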

## Non-Overlap

This is not another intake gate, rubric scorer, arbitration ledger, appeals ledger, escrow settlement ledger, reproducibility audit, workspace privacy gate, anti-collusion module, or amendment-control engine. It sits before solver commitment and answers: "Is this sponsor and challenge reliable enough to spend research time on?"

## References Reviewed

- GSA Prize and Challenge Toolkit: emphasizes measurable evaluation criteria, transparent judging protocols, conflict/NDA management, and clear prize procedures.
- GSA prize competitions overview: distinguishes prize competitions from contracts/grants and describes the announce, submit, review, and award flow.
- Nesta Challenge Prizes practice guide: highlights careful prize construction, incentives, support, and attracting the right talent.

## Acceptance Signals

- Deterministic scorecards with no external service dependency.
- Clear trust tiers: `trusted`, `watch`, `hold`.
- Sponsor action list that can move a challenge toward solver readiness.
- Public-safe audit digest that redacts sensitive sponsor notes and payment/KYC fields.
- Tests for trusted, watch, and hold paths plus digest determinism and privacy redaction.
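
One way to derive the three trust tiers from a 0-100 score; the cut-offs below (75/50) are illustrative assumptions, not the module's documented thresholds:

```javascript
// Illustrative tier mapping; the 75/50 thresholds are assumptions.
function tierForScore(score, { trustedAt = 75, watchAt = 50 } = {}) {
  if (score >= trustedAt) return "trusted";
  if (score >= watchAt) return "watch";
  return "hold";
}

// tierForScore(82) → "trusted", tierForScore(60) → "watch", tierForScore(30) → "hold"
```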
16 changes: 16 additions & 0 deletions scientific-bounty-sponsor-scorecard/package.json
@@ -0,0 +1,16 @@
{
"name": "scientific-bounty-sponsor-scorecard",
"version": "0.1.0",
"private": true,
"type": "module",
"description": "Solver-facing sponsor reliability scorecards for scientific bounty challenges.",
"scripts": {
"check": "node --check src/sponsor-scorecard.js && node --check scripts/demo.js && node --check test/sponsor-scorecard.test.js",
"test": "node --test",
"demo": "node scripts/demo.js"
},
"engines": {
"node": ">=20"
},
"license": "MIT"
}
28 changes: 28 additions & 0 deletions scientific-bounty-sponsor-scorecard/scripts/demo.js
@@ -0,0 +1,28 @@
import { readFile } from "node:fs/promises";
import { fileURLToPath } from "node:url";
import { dirname, join } from "node:path";
import { buildSponsorScorecard } from "../src/sponsor-scorecard.js";

const __dirname = dirname(fileURLToPath(import.meta.url));
const samplePath = join(__dirname, "..", "data", "sample-sponsor-input.json");
const sample = JSON.parse(await readFile(samplePath, "utf8"));

const report = buildSponsorScorecard(sample, {
  generatedAt: sample.asOf
});

console.log("Scientific Bounty Sponsor Scorecard");
console.log(`Portfolio: ${report.portfolioId}`);
console.log(`Average score: ${report.summary.averageScore}`);
console.log(`Tier distribution: ${JSON.stringify(report.summary.byTier)}`);
console.log(`Audit digest: ${report.auditDigest}`);

for (const card of report.scorecards) {
  console.log("");
  console.log(`${card.tier.toUpperCase()} ${card.score} - ${card.sponsorName}: ${card.challengeTitle}`);
  console.log(`Solver guidance: ${card.solverGuidance}`);
  console.log("Sponsor actions:");
  for (const item of card.sponsorActions.slice(0, 4)) {
    console.log(`- ${item}`);
  }
}