Objective
Define the minimum artifact contracts and evaluation discipline required before QuantLab can support learned-model research.
This is the first slice of the Neural Research Track (N.0). It must establish reproducibility, comparability, and promotion discipline before any training loops, ML libraries, or neural implementations are added.
Context
QuantLab is expanding from validating explicit rule-based strategies toward validating both explicit strategies and learned models.
This must not reposition QuantLab as an AI trading platform.
The Neural Track must strengthen QuantLab as a laboratory of evidence:
- reproducible research
- traceable datasets
- explicit feature definitions
- comparable baselines
- auditable training metadata
- controlled promotion rules
The first step is not “add PyTorch” or “train a model”.
The first step is to define what a valid learned-model experiment must emit and how it can be compared against existing QuantLab research artifacts.
Scope
Define the initial learned-model artifact contract for N.0.
At minimum, specify the expected structure and required fields for:
- dataset_manifest.json
- feature_manifest.json
- model_config.json
- training_summary.json
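To make the contract concrete, here is a minimal sketch of what dataset_manifest.json might contain. Every field name below is an illustrative assumption for N.0 discussion, not the final contract:

```json
{
  "manifest_version": "0.1",
  "dataset_id": "example-ohlcv-daily",
  "source": "hypothetical provider name or internal export path",
  "content_hash": "sha256 of the raw data files, for traceability",
  "date_range": { "start": "2015-01-01", "end": "2023-12-31" },
  "symbols": ["EXAMPLE"],
  "created_at": "2024-01-01T00:00:00Z"
}
```

The key property is that dataset identity is pinned by content hash plus date range, so a training_summary.json can point back to exactly the data it was trained on.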
Define how these artifacts relate to existing QuantLab outputs, especially:
- report.json
- run directories under outputs/runs/
- future learned-model experiment directories
- comparison against rule-based strategy outputs
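A possible layout, purely illustrative: the only assumption here is a hypothetical outputs/experiments/ directory sitting beside the existing run directories, with the four N.0 artifacts inside it:

```
outputs/
  runs/
    <run_id>/
      report.json              # existing rule-based run artifact
  experiments/                 # hypothetical home for learned-model experiments
    <experiment_id>/
      dataset_manifest.json
      feature_manifest.json
      model_config.json
      training_summary.json
```

Keeping the two trees parallel makes comparison against rule-based strategy outputs a matter of joining an experiment directory with a run directory, rather than inventing a new comparison mechanism.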
Define evaluation discipline for learned-model research:
- temporal split requirements
- train / validation / test separation
- random seed discipline
- dataset version or source traceability
- feature-set traceability
- target / horizon definition
- baseline comparison requirements
- leakage-prevention expectations
- minimum metadata needed to reproduce an experiment
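The split and seed requirements above can be sketched as follows. This is a minimal illustration under stated assumptions: the function names, dataclass, and fraction defaults are hypothetical, not part of any existing QuantLab API.

```python
import random
from dataclasses import dataclass


@dataclass(frozen=True)
class TemporalSplit:
    """Non-overlapping, strictly ordered train/validation/test windows."""
    train: tuple[int, int]       # (start_index, end_index), end exclusive
    validation: tuple[int, int]
    test: tuple[int, int]


def make_temporal_split(n_rows: int, train_frac: float = 0.7,
                        val_frac: float = 0.15) -> TemporalSplit:
    """Split rows by time order only -- never by shuffling -- so that
    validation and test data always come strictly after training data."""
    train_end = int(n_rows * train_frac)
    val_end = train_end + int(n_rows * val_frac)
    return TemporalSplit(
        train=(0, train_end),
        validation=(train_end, val_end),
        test=(val_end, n_rows),
    )


def seeded_rng(experiment_seed: int) -> random.Random:
    """Every stochastic step draws from one recorded seed so a run can be
    reproduced exactly from its training metadata."""
    return random.Random(experiment_seed)


split = make_temporal_split(1000)
# Windows must be contiguous and strictly ordered: no leakage across them.
assert split.train[1] == split.validation[0]
assert split.validation[1] == split.test[0]
```

The point of the sketch is that temporal ordering and seed capture are structural properties of the experiment, checkable before any model is trained.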
Define non-promotion rules:
- no learned model can be promoted to paper mode without baseline comparison
- no learned model can be promoted based only on predictive accuracy
- no model output can become execution intent without downstream strategy validation
- no neural model can bypass paper, safety, broker, or supervised execution gates
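One way to make these gates mechanical rather than advisory is to derive promotion eligibility from recorded evidence. All names below are a hypothetical sketch, not existing QuantLab code, and this covers only the evidence gates, not the paper/safety/broker gates:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ExperimentEvidence:
    """Minimal evidence a learned-model run must carry before promotion."""
    has_baseline_comparison: bool             # compared against a rule-based baseline
    beats_baseline_on_strategy_metrics: bool  # not just predictive accuracy
    passed_strategy_validation: bool          # output validated as a strategy downstream


def promotion_blockers(ev: ExperimentEvidence) -> list[str]:
    """Return the non-promotion rules a run still violates.

    An empty list means the evidence gates are met; execution-side gates
    (paper, safety, broker, supervised execution) still apply separately."""
    blockers = []
    if not ev.has_baseline_comparison:
        blockers.append("no baseline comparison")
    if not ev.beats_baseline_on_strategy_metrics:
        blockers.append("evidence limited to predictive accuracy")
    if not ev.passed_strategy_validation:
        blockers.append("model output lacks downstream strategy validation")
    return blockers
```

A run with no evidence would report all three blockers; only a run clearing all three has any path toward paper mode, and even then only through the existing execution gates.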
Architectural Rules
QuantLab remains the evidence authority.
Learned-model research must remain inside QuantLab’s research and validation discipline.
Stepbit may later orchestrate learned-model workflows, but must not own:
- dataset definition
- feature definition
- model validation logic
- artifact contracts
- promotion criteria
Quant Pulse may later provide upstream hypotheses or signal context, but must not certify learned-model validity.
The Neural Track must not weaken:
- reproducibility
- auditability
- comparability
- operator control
- promotion discipline
- broker/execution safety boundaries
Proposed Documentation Targets
Add or update documentation such as:
- docs/learned-model-artifact-contract.md
- docs/quantlab-roadmap.md
- docs/run-artifact-contract.md (if cross-references are needed)
Keep the first slice documentation-only unless a very small schema stub is clearly justified.
Out of Scope
Do not implement:
- training loops
- PyTorch integration
- TensorFlow integration
- sklearn baselines
- model registry
- model serving
- live inference
- paper promotion for learned models
- execution intents from model outputs
- Stepbit orchestration for model training
- Quant Pulse signal ingestion for model features
Do not modify broker, paper, or execution paths.
Do not introduce new runtime dependencies unless explicitly approved in a later implementation issue.
Done When
- The initial N.0 learned-model artifact contract is documented.
- Required artifacts are named and their minimum fields are defined.
- Temporal split and seed discipline are documented.
- Baseline comparison expectations are documented.
- Non-promotion rules are explicit.
- The relationship between learned-model artifacts and existing QuantLab run artifacts is clear.
- The issue does not add model training implementation.
- git diff --check passes.
Blocks
This should block any implementation issue for:
- N.1 classical ML baselines
- N.2 neural baselines
- learned-model paper promotion
- learned-model orchestration through Stepbit
Strategic Rule
Neural Track = research discipline expansion, not product repositioning.
QuantLab should validate learned models under evidence standards equal to or stricter than those applied to explicit strategies.