An Agentic Learning OS that Understands How You Think
Big Brain is not a chatbot tutor. It is a learning operating system that builds a cognitive model of the learner, actively diagnoses misunderstandings, and adapts the learning path in real time using an interactive canvas.
Instead of passively answering questions, Big Brain probes, corrects, and reshapes understanding.
- Install deps: `npm install`
- Run app + API together: `npm run dev`
- Ensure `.env` in project root includes `OPENAI_QUIZ_API_KEY`.

The API runs on http://localhost:8000 and Vite proxies `/api`.
Most AI education tools:
- generate explanations
- quiz users
- move on
Big Brain does something fundamentally different:
It learns the learner.
Big Brain builds a mental model of the user’s strengths, weaknesses, misconceptions, and learning patterns — and uses that model to drive every lesson, quiz, and interaction.
- Initial quizzes are diagnostic, not graded
- Questions target:
  - conceptual understanding
  - prerequisite gaps
  - common misconception patterns
- Supports multiple formats:
  - MCQs
  - short explanations
  - diagram labeling
  - step-by-step reasoning
Output:
- Concept mastery map
- False-confidence detection
- Weak prerequisite identification
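The diagnostic output above could be modeled roughly as follows. This is an illustrative sketch, not Big Brain's actual schema; every type and field name here is an assumption:

```typescript
// Illustrative shapes for the diagnostic output (hypothetical, not the real schema).
type Mastery = "unknown" | "fragile" | "solid";

interface ConceptDiagnosis {
  concept: string;
  mastery: Mastery;
  confidence: number; // learner's inferred confidence, 0..1
  accuracy: number;   // observed accuracy on diagnostic items, 0..1
}

// False confidence: high confidence paired with low accuracy.
function falselyConfident(d: ConceptDiagnosis): boolean {
  return d.confidence >= 0.7 && d.accuracy <= 0.4;
}

// Weak prerequisites: concepts other concepts depend on that are not yet solid.
function weakPrerequisites(
  diagnoses: ConceptDiagnosis[],
  prereqs: Map<string, string[]>, // concept -> its prerequisites
): string[] {
  const byName = new Map<string, ConceptDiagnosis>();
  for (const d of diagnoses) byName.set(d.concept, d);
  const needed = new Set<string>();
  for (const list of Array.from(prereqs.values())) {
    for (const p of list) needed.add(p);
  }
  return Array.from(needed).filter((p) => byName.get(p)?.mastery !== "solid");
}
```

The key idea is that confidence and accuracy are tracked separately, so the system can flag concepts the learner *thinks* they know.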
- Learning path is a dynamic dependency graph, not a static syllabus
- Big Brain:
  - prunes concepts the user already understands
  - expands weak or fragile areas
  - reorders lessons automatically as the user improves
- Matches each concept to:
  - timestamped video segments
  - short explanations from multiple sources
- Chooses content based on:
  - user’s learning style
  - prior errors
  - abstraction preference (visual vs symbolic)
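The pruning-and-reordering behavior above amounts to a topological ordering over a concept dependency graph. A minimal sketch, with assumed names (the real implementation may differ):

```typescript
// Hypothetical sketch: order concepts by prerequisite dependencies,
// pruning any the learner has already mastered.
interface ConceptNode {
  id: string;
  prereqs: string[]; // ids that must come first
}

function learningPath(nodes: ConceptNode[], mastered: Set<string>): string[] {
  const remaining = nodes.filter((n) => !mastered.has(n.id)); // prune mastered
  const placed = new Set<string>(mastered);
  const order: string[] = [];
  let progress = true;
  while (order.length < remaining.length && progress) {
    progress = false;
    for (const n of remaining) {
      if (placed.has(n.id)) continue;
      // Ready when every prerequisite is mastered or already placed.
      if (n.prereqs.every((p) => placed.has(p))) {
        placed.add(n.id);
        order.push(n.id);
        progress = true;
      }
    }
  }
  return order; // stops early if the graph contains a cycle
}
```

Rerunning this after each quiz is what makes the path dynamic: as `mastered` grows, lessons drop out and the ordering shifts.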
Big Brain continuously learns how the user thinks:
- Error types (algebraic slip, intuition failure, overgeneralization)
- Learning preferences inferred from interaction (not self-reported)
- Retention decay and concept fragility
- Patterns such as:
  - “Struggles when variables are introduced”
  - “Confuses definitions with applications”
This profile powers true personalization, not generic AI responses.
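One way to sketch the retention-decay and fragility tracking described above is an exponential forgetting curve per concept. Purely illustrative; the field names, half-life, and threshold are assumptions:

```typescript
// Illustrative learner-profile sketch with exponential retention decay.
type ErrorType = "algebraic-slip" | "intuition-failure" | "overgeneralization";

interface ConceptState {
  lastReviewed: number; // ms epoch of last successful review
  strength: number;     // 0..1, boosted on review, decays over time
  errors: ErrorType[];  // observed error history for this concept
}

// Estimated retention after (now - lastReviewed), with a half-life in days.
function retention(state: ConceptState, now: number, halfLifeDays = 7): number {
  const days = (now - state.lastReviewed) / 86_400_000;
  return state.strength * Math.pow(0.5, days / halfLifeDays);
}

// A concept is "fragile" when estimated retention drops below a threshold.
function isFragile(state: ConceptState, now: number): boolean {
  return retention(state, now) < 0.5;
}
```

Fragile concepts are natural candidates for the adaptive review step in the learning flow.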
Learning happens on a canvas, not in chat.
Users can:
- annotate diagrams
- draw graphs
- write partial solutions
- highlight confusion points
- cross out incorrect ideas
Big Brain responds directly to canvas interactions:
- Corrects faulty diagrams
- Identifies incorrect reasoning steps
- Zooms into highlighted concepts
- Suggests targeted exercises
This interaction model cannot be replicated in standard chat interfaces.
Big Brain verifies understanding by asking the user to teach the concept back.
- User explains on the canvas
- AI:
  - interrupts when logic breaks
  - asks clarification questions
  - challenges vague explanations
- Concepts are marked “mastered” only after successful teach-back
This ensures real learning, not surface-level correctness.
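The teach-back gate can be pictured as a small state machine: passing quizzes alone never yields "mastered". A hypothetical sketch (state and event names are ours, not Big Brain's):

```typescript
// Hypothetical mastery state machine: a concept reaches "mastered"
// only through a successful teach-back, as described above.
type MasteryState = "learning" | "awaiting-teach-back" | "mastered";
type MasteryEvent = "quiz-passed" | "teach-back-ok" | "logic-break";

function advance(state: MasteryState, event: MasteryEvent): MasteryState {
  if (state === "learning" && event === "quiz-passed") return "awaiting-teach-back";
  if (state === "awaiting-teach-back" && event === "teach-back-ok") return "mastered";
  // A broken explanation sends the learner back to lessons.
  if (state === "awaiting-teach-back" && event === "logic-break") return "learning";
  return state; // all other events leave the state unchanged
}
```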
- User selects a topic or course
- Big Brain runs diagnostic assessment
- Personalized learning graph is generated
- User learns via:
  - canvas interactions
  - micro-lessons
  - targeted videos
- Continuous quizzes and feedback
- Teach-back validation
- Retention tracking and adaptive review
No passive consumption. No dead ends.
| ChatGPT | Big Brain |
|---|---|
| Reactive answers | Proactive learning agent |
| Linear chat | Visual interactive canvas |
| Same output for everyone | Personalized cognitive model |
| No memory of misconceptions | Long-term learning profile |
| One-shot explanations | Feedback-driven mastery |
- College students learning technical subjects
- Concept-heavy courses (CS, math, physics, ML)
- Self-learners seeking actual mastery
- Educators seeking diagnostic insights
Big Brain was designed with:
- High UI/UX impact via canvas-based learning
- Visible intelligence through diagnostics and adaptation
- Agentic behavior with feedback loops and teach-back
- A future-of-learning vision aligned with AGI principles
Planned next:
- Collaborative canvas sessions
- Instructor dashboards
- LMS integrations
- Long-term learner memory across courses
- AR and spatial learning modes
Big Brain doesn’t just answer questions.
It understands how you think — and teaches accordingly.
Run `npm install`, then `npm run dev`, and open the local URL printed in the terminal.
- Land on the Landing page and tap “Start learning”.
- Choose a topic on /learn.
- Explore the Course tabs (Videos → Quizzes → Canvas).
- Open the Fullscreen Canvas for focused work.
- Optional: run the Diagnostic from the course page.
The app runs a Node/Express backend on port 8000 with OpenAI-powered quiz generation.
`npm run dev` starts both Vite and the API server.
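Since Vite proxies `/api` to the Express server on port 8000, the dev setup likely uses a standard Vite proxy entry. A sketch of what that `vite.config.ts` could look like (the project's actual config may differ):

```typescript
// Hypothetical vite.config.ts: forward /api requests to the Express server on 8000.
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    proxy: {
      "/api": {
        target: "http://localhost:8000",
        changeOrigin: true,
      },
    },
  },
});
```

With this in place, frontend code can call `fetch("/api/...")` without hard-coding the backend port.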
- Install Phoenix locally: `pip install arize-phoenix && phoenix serve` (defaults to http://localhost:6006).
- Set `PHOENIX_COLLECTOR_ENDPOINT`, `PHOENIX_PROJECT_NAME`, and `PROMPT_VARIANT` in `.env` (see server/.env.example).
- Run the app: `npm run dev`, then generate a few quizzes and submit attempts.
- Open the Phoenix UI at http://localhost:6006 and filter spans by `quiz.prompt_variant`, `quiz.topic`, or `quiz.score.percentage` to compare variants A vs B.
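Pulling the variables above together, a minimal `.env` might look like the sketch below. All values are placeholders; check server/.env.example for the authoritative list:

```
OPENAI_QUIZ_API_KEY=sk-...            # your OpenAI key (placeholder)
PHOENIX_COLLECTOR_ENDPOINT=http://localhost:6006
PHOENIX_PROJECT_NAME=big-brain        # illustrative project name
PROMPT_VARIANT=A                      # A or B, per the comparison above
```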