Predict what you would actually choose.
bcd is an open-source personalized decision prediction system.
It models what a particular person is likely to choose based on:
- stable preferences
- recent state
- past decisions
- retrieved memory
- feedback-driven adaptation
Instead of searching for the objectively correct answer, bcd asks:
Given these options, what would this person actually choose right now?
bcd is not:
- a generic "predict anything" engine
- a world simulation system
- a generic agent framework
- a production SaaS product
It is focused on personal preference modeling, memory-based reasoning, recent-state influence, and a tight feedback loop that changes future predictions.
Most AI demos stop at generic recommendation, one-shot ranking, or black-box personalization.
bcd is built around a tighter and more personal loop:
- build a user profile from onboarding, imported conversations, and reviewed signals
- keep stable profile separate from recent state
- retrieve relevant past decisions as memory
- rank options for a specific person, not a generic audience
- explain why the current choice won
- record the actual outcome and feed it back into future predictions
The result is a local demo that feels closer to a real personalized AI product than a thin ranking script.
- **Personalized prediction, not universal correctness**: the target is "what this user would do," not "what is best."
- **Stable profile + recent-state loop**: long-term preferences, short-term notes, feedback shift markers, and carry-over context are modeled separately.
- **Memory-first reasoning**: past choices are turned into retrievable memory and reused during prediction.
- **Inspectable decision reasoning**: ranked options include component scores, supporting evidence, counter-evidence, memory retrieval reasons, and a decision audit.
- **Feedback actually changes the next prediction**: actual choices create memory, update snapshots, and influence future ranking.
- **A polished local demo, not just raw APIs**: there is a two-page browser flow for setup, prediction, inspection, and feedback.
In the current codebase, you can:
- create a user from structured onboarding
- import a ChatGPT export to bootstrap a profile
- load one of several bundled sample personas with seeded history
- jump into one-click showcase scenarios that make the reasoning loop obvious on first run
- review, accept, reject, or edit extracted profile signals
- add manual recent-state notes
- ask a decision question with 2 to 5 candidate options
- ask `bcd` to suggest likely candidate options for the current question
- provide optional structured context such as time of day, energy, weather, social setting, budget, and urgency
- inspect ranked options, explanations, retrieved memories, and decision audit details
- record the actual chosen option and why it differed
- update memory and preference snapshots through feedback
- switch between `baseline`, `hybrid`, and `llm` prediction modes
The browser demo is split into two focused pages.

**Setup page** (`/app/setup`): use this page to prepare the user model.
- create a user from onboarding
- import a ChatGPT export
- load a showcase persona with seeded history
- review and edit profile signals
- inspect stable profile vs recent-state summary
- add or remove recent-state notes
**Predict page** (`/app/predict`): use this page to run the decision loop.
- enter a question or situation
- try a showcase scenario that preloads a persona, prompt, context, and options
- type your own candidate options
- ask `bcd` to suggest candidate options based on the active profile
- add optional context only when it materially matters
- inspect the prediction result in a separate modal
- review ranked alternatives, memory evidence, and decision audit
- submit actual feedback so the system adapts
```mermaid
flowchart LR
    A["User Setup"] --> B["Stable Profile + Signals"]
    B --> C["Decision Prompt + Optional Context"]
    C --> D["Suggested Options or Manual Options"]
    D --> E["Memory Retrieval"]
    B --> F["Personalized Ranking"]
    E --> F
    F --> G["Prediction + Explanation + Decision Audit"]
    G --> H["Actual Choice Feedback"]
    H --> I["Memory Update + Snapshot Refresh"]
    I --> F
```
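In code terms, the loop above can be sketched in plain Python. This is purely illustrative: the names (`UserModel`, `predict`, `record_feedback`) and the scoring rules are stand-ins, not the actual bcd implementation.

```python
from dataclasses import dataclass, field

@dataclass
class UserModel:
    stable_preferences: dict  # long-term tastes, e.g. {"warm": 0.8}
    recent_state: dict = field(default_factory=dict)  # short-lived shifts
    memories: list = field(default_factory=list)      # past actual choices

def score(user: UserModel, option: str) -> float:
    # Stable profile contribution: preference tags the option text matches.
    stable = sum(w for tag, w in user.stable_preferences.items() if tag in option)
    # Recent-state contribution: the same matching, but for short-term notes.
    recent = sum(w for tag, w in user.recent_state.items() if tag in option)
    # Memory contribution: a crude token-overlap bonus against past choices.
    memory = sum(0.1 for m in user.memories if any(tok in m for tok in option.split()))
    return stable + recent + memory

def predict(user: UserModel, options: list) -> list:
    # Rank every candidate for this specific user, best first.
    return sorted(((o, score(user, o)) for o in options), key=lambda p: p[1], reverse=True)

def record_feedback(user: UserModel, chosen: str) -> None:
    # The actual choice becomes retrievable memory that shifts future rankings.
    user.memories.append(chosen)

user = UserModel(stable_preferences={"warm": 0.8, "healthy": 0.3},
                 recent_state={"warm": 0.4})  # tired, rainy evening
ranked = predict(user, ["warm noodle soup", "raw salad", "greasy burger"])
record_feedback(user, ranked[0][0])
```

The key structural point the sketch preserves is that stable preferences, recent state, and memory are separate inputs to one ranking, and that feedback feeds back into the memory input.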
```bash
python3 -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"
uvicorn bcd.api.app:app --reload
```

Then, in the browser:
- Open `/app/setup`
- Click a showcase persona or create your own user
- Move to `/app/predict`
- Enter a question
- Either type options yourself or click **Suggest options**
- Run prediction
- Inspect the reasoning
- Save actual feedback
No external infrastructure is required for the default flow.
If you prefer the terminal:
```bash
bcd-cli bootstrap
bcd-cli demo
bcd-cli evaluate
```

Equivalent helper scripts are also included:

```bash
python scripts/init_sample_data.py
python scripts/run_demo.py
python scripts/evaluate_baseline.py
```

Bootstrap a sample persona:

```bash
curl -X POST "http://127.0.0.1:8000/profiles/bootstrap-sample?sample_id=alex_chen"
```

List the showcase scenarios:

```bash
curl http://127.0.0.1:8000/demo/showcase
```

Ask bcd to suggest candidate options:

```bash
curl -X POST http://127.0.0.1:8000/decisions/suggest-options \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "sample-alex",
    "prompt": "Pick dinner after a tiring rainy evening.",
    "category": "food",
    "context": {
      "time_of_day": "night",
      "energy": "low",
      "weather": "rainy"
    },
    "existing_options": ["Greasy burger"],
    "max_suggestions": 4
  }'
```

Run a prediction:

```bash
curl -X POST http://127.0.0.1:8000/decisions/predict \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "sample-alex",
    "prompt": "Pick dinner after a tiring rainy evening.",
    "category": "food",
    "context": {
      "time_of_day": "night",
      "energy": "low",
      "weather": "rainy",
      "with": "alone"
    },
    "options": [
      {"option_text": "Warm noodle soup"},
      {"option_text": "Greasy burger"},
      {"option_text": "Raw salad"}
    ]
  }'
```

Record the actual choice as feedback:

```bash
curl -X POST http://127.0.0.1:8000/decisions/<request_id>/feedback \
  -H "Content-Type: application/json" \
  -d '{
    "actual_option_id": "<option_id>",
    "reason_text": "Wanted something warm and easy.",
    "reason_tags": ["warm", "easy"],
    "failure_reasons": ["context_missing"],
    "context_updates": {"energy": "very_low"},
    "preference_shift_note": "Rain made comfort more important."
  }'
```

The prediction result includes:
- a top predicted option
- ranked alternatives with normalized confidence
- component-level score breakdowns for each option
- supporting evidence and counter-evidence
- retrieved memories with retrieval roles and why they were retrieved
- explanation sections grounded in profile, recent state, and memory
- a decision audit with confidence label, margin, decisive factors, watchouts, adaptation signals, and active context
This makes the system inspectable enough to debug, evaluate, and extend.
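The "normalized confidence" over ranked alternatives can be pictured as a softmax over raw option scores, so the values are comparable and sum to 1. This is a sketch of the general idea, not necessarily the normalization bcd uses internally:

```python
import math

def normalize_confidence(raw_scores: list) -> list:
    # Shift by the max score for numerical stability, exponentiate,
    # then divide so the confidences sum to exactly 1.
    m = max(raw_scores)
    exps = [math.exp(s - m) for s in raw_scores]
    total = sum(exps)
    return [e / total for e in exps]

# Three ranked options with raw scores from the scoring components.
conf = normalize_confidence([2.1, 0.7, 0.2])
```

A property worth noting: softmax preserves the ranking (higher raw score means higher confidence) while the margin between the top two confidences gives a natural input for a confidence label in the decision audit.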
By default, bcd runs fully offline, with no external service dependencies.
An external provider is needed only if you want LLM-assisted ranking.
Configure an OpenAI-compatible endpoint like this:
```bash
export BCD_PREDICTION_MODE=hybrid
export BCD_LLM_API_KEY=your_api_key
export BCD_LLM_BASE_URL=https://api.openai.com/v1
export BCD_LLM_MODEL=gpt-4.1-mini
```

Prediction modes:

- `baseline`: heuristic + memory retrieval only
- `hybrid`: baseline ranking blended with LLM ranking when available
- `llm`: LLM ranking first, with baseline fallback
You can set this either through environment variables or directly inside the browser demo's advanced model settings.
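Conceptually, the three modes differ only in how an option's final score is produced. The following dispatch is illustrative pseudologic under assumed names (`predict_score`, `baseline_score`, `blend_weight` are not the real bcd API), but it captures the blending and fallback behavior described above:

```python
def predict_score(option: str, mode: str, llm_score=None, blend_weight: float = 0.5) -> float:
    """Illustrative mode dispatch; `baseline_score` is a stand-in heuristic."""
    def baseline_score(opt: str) -> float:
        return float(len(opt) % 5)  # placeholder for heuristic + memory scoring

    base = baseline_score(option)
    if mode == "baseline" or llm_score is None:
        return base  # heuristic + memory only; also the fallback when no LLM score exists
    if mode == "hybrid":
        # Blend the two rankings when an LLM score is available.
        return (1 - blend_weight) * base + blend_weight * llm_score
    if mode == "llm":
        return llm_score  # LLM ranking first
    raise ValueError(f"unknown mode: {mode}")
```

Note how `llm` degrades gracefully: with no LLM score available, it behaves exactly like `baseline`, which matches the "baseline fallback" guarantee.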
- `profile`: user creation, onboarding, signal review, recent-state handling, profile cards, and snapshots
- `memory`: structured memory retrieval and retrieval scoring
- `decision`: candidate suggestion, option scoring, confidence normalization, explanation building, and decision audit generation
- `reflection`: feedback logging, memory creation, and snapshot updates
- `storage`: SQLModel tables, SQLite persistence, and repository access
- `api`: FastAPI app and local browser demo
- `evaluation`: sample evaluation flow for reproducible experiments
- `llm`: optional provider-agnostic ranking layer
- `GET /demo/showcase`
- `POST /profiles/bootstrap-sample`
- `GET /profiles/onboarding-questionnaire`
- `POST /profiles/onboard`
- `POST /profiles/onboard/preview`
- `POST /profiles/import-chatgpt-export`
- `GET /profiles/{user_id}`
- `GET /profiles/{user_id}/card`
- `GET /profiles/{user_id}/signals`
- `POST /profiles/{user_id}/signals/{signal_id}/review`
- `GET /profiles/{user_id}/recent-state`
- `POST /profiles/{user_id}/recent-state`
- `DELETE /profiles/{user_id}/recent-state/{note_id}`
- `POST /decisions/suggest-options`
- `POST /decisions/predict`
- `POST /decisions/{request_id}/feedback`
- `GET /users/{user_id}/history`
- `GET /users/{user_id}/memories`
```text
bcd/
├─ README.md
├─ bcd.md
├─ docs/
├─ data/
├─ demo/
├─ scripts/
├─ src/bcd/
│  ├─ api/
│  ├─ decision/
│  ├─ evaluation/
│  ├─ llm/
│  ├─ memory/
│  ├─ profile/
│  ├─ reflection/
│  ├─ showcase.py
│  ├─ storage/
│  └─ utils/
└─ tests/
```
At this stage, bcd is intentionally not:
- a production SaaS app
- an auth/billing/deployment-heavy platform
- a generic agent framework
- a simulation engine for the external world
The priority is a strong, inspectable, open-source personalized AI demo with a clean feedback loop.
If you want to extend the repo, the clearest directions are:
- richer memory retrieval backends
- more realistic temporal preference modeling
- stronger evaluation sets and synthetic users
- improved candidate suggestion generation
- confidence calibration and failure analysis
- additional profile import flows
MIT. See LICENSE.



