The "50% cheaper than Replicate/fal.ai" claim is unbacked. This benchmark decides whether the Layer 1 compute marketplace is a viable revenue stream at all.
## Scope
Run these three workloads on Akash, io.net, and Nosana. Compare wall-clock time and $-cost:
| Workload | Size |
| --- | --- |
| SDXL image generation | 10 images at 1024x1024, 30 steps |
| Llama 3.1 70B inference | 1000-token completion, batch of 10 |
| LoRA fine-tune | 200-step LoRA on a 500-image dataset, SDXL base |
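To keep the $-cost column comparable across providers, a per-workload cost can be derived from each provider's hourly GPU rate and the measured wall-clock time. A minimal sketch, assuming per-second billing; the rate and runtime below are placeholders, not real Akash/io.net/Nosana figures:

```python
# Hypothetical helper: convert an hourly GPU rate and a measured
# wall-clock time into a per-workload dollar cost.
# The example rate ($1.80/hr) and runtime (90 s) are illustrative only.

def workload_cost(hourly_rate_usd: float, wall_seconds: float) -> float:
    """Dollar cost of one workload run, assuming per-second billing."""
    return hourly_rate_usd * wall_seconds / 3600.0

# Example: a $1.80/hr GPU running the SDXL workload for 90 s.
print(round(workload_cost(1.80, 90), 4))  # 0.045
```

Providers that bill by the minute or by lease block would need a rounding step here; that detail should be recorded in the reliability note.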
## Comparison baselines
- Replicate on-demand pricing for each workload.
- fal.ai on-demand pricing for each workload.
- Bare-metal reference (Lambda / RunPod) if relevant.
## Output
`cr8-depin-benchmark.md`:
- Full pricing table: provider × workload × total cost × wall time × reliability note.
- Verdict per workload: "DePIN wins by X%" / "DePIN loses by X%" / "DePIN on par".
- Overall verdict: go/no-go on the compute-marketplace narrative.
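The per-workload verdict line can be produced mechanically from the two cost figures. A hedged sketch of that phrasing logic, with an assumed ±5% "on par" band (the band width is a choice to make in the report, not something specified above):

```python
# Hypothetical verdict formatter for the benchmark report: compares a
# DePIN provider's cost against a centralized baseline (Replicate/fal.ai)
# and emits the "DePIN wins/loses by X%" phrasing. The 5% tolerance for
# "on par" is an assumption, not part of the spec.

def verdict(depin_cost: float, baseline_cost: float, tol: float = 0.05) -> str:
    """Percent delta relative to the baseline, with an 'on par' band."""
    delta = (baseline_cost - depin_cost) / baseline_cost
    if abs(delta) <= tol:
        return "DePIN on par"
    if delta > 0:
        return f"DePIN wins by {delta:.0%}"
    return f"DePIN loses by {-delta:.0%}"

print(verdict(0.50, 1.00))  # DePIN wins by 50%
print(verdict(1.30, 1.00))  # DePIN loses by 30%
```

Keeping the delta relative to the baseline makes "50% cheaper" mean exactly half the Replicate/fal.ai price, which is the form the original claim takes.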
## Why
- `gtm-review-2026-04-13.md` §Use cases: "Without this, the '50% cheaper' claim is hand-waved."
- Feeds the compute-marketplace scaffold (abhicris/CR8#16) with realistic price copy.
- Feeds grant applications (Arbitrum, EF) with defensible numbers.
## Label
documentation — this is a research output, not code.