Applied AI systems, evaluation frameworks, workflow design, and prototype tooling for LLM reliability and real-world model behavior.
This GitHub profile collects selected public work in AI evaluation, structured analysis, conversational reliability, and applied systems design. Current repositories focus on how language models behave under real-world conditions, including multi-turn interaction, causal explanation, and workflow-oriented reliability.
Public work currently includes:
- Conversational Error Dynamics (CED) — multi-turn studies of how models reinforce, defend, correct, and sometimes re-entrench incorrect claims across conversation.
- Causal Synthesis Audit (CSA) — structured evaluation of how models construct causal explanations across domains, including omission, abstraction, and institutional attribution patterns.
- llm-reliability-research — public research artifacts, evaluation frameworks, prototype workflows, and technical documentation for LLM reliability, structured analysis, and applied AI systems.

This is a growing public portfolio. Additional repositories and artifacts are being prepared from prior work, with more evaluation, workflow, and applied AI material to be added over time.
- Website: lazzaro.ai
- Email: chris@lazzaro.ai