Hi @sanketshevkar, @niallroche, and the Accord Project community,
I am writing to express my strong interest in the GSoC 2026 project "Agentic Workflow for Drafting Templates". The goal of letting users generate Accord Project Templates simply by providing natural-text requirements via a CLI is an exciting step forward for computable contracts.

The project description mentions using an orchestrator like CrewAI. However, because legal template generation requires strict deterministic validation and robust error recovery, I would like to propose an architecture built on LangGraph. Having recently architected an autonomous, multi-agent research assistant with LangGraph, I have found its cyclic-graph approach exceptionally reliable for workflows that require iterative self-healing and complex tool calling.
Proposed LangGraph Architecture

Instead of a linear chain, we can model template generation as a state machine:
Global State Management: A typed state dictionary tracking the user's natural language prompt, the drafted TemplateMark, the Concerto data model, and the current validation feedback.
The Architect Node (LLM): Parses the CLI input and drafts the initial Accord Project Template structures (model, text, and logic).
The Validator Node (Tool Calling): A deterministic node that executes tool calls to Accord's concerto and template-engine to validate the generated artifacts.
The Self-Healing Edge (Conditional Routing): If the Validator Node returns syntax or logic errors, the graph routes the error trace back to the Architect Node for correction.
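The draft → validate → repair cycle above can be sketched in plain Python, with no LangGraph dependency, just to make the control flow concrete. Here `TemplateState`, `draft_template`, and `validate_template` are hypothetical stand-ins for the typed graph state, the Architect node, and the Validator node; the mock validator deliberately fails once to show the error trace being routed back to the Architect:

```python
from dataclasses import dataclass, field

@dataclass
class TemplateState:
    """Global state carried between nodes (mirrors a LangGraph typed state)."""
    prompt: str
    draft: str = ""
    errors: list = field(default_factory=list)
    attempts: int = 0

def draft_template(state: TemplateState) -> TemplateState:
    # Architect node: in the real workflow this would call an LLM with the
    # user's prompt plus any validation feedback from the previous cycle.
    feedback = "; ".join(state.errors)
    state.draft = f"template for: {state.prompt} ({feedback or 'first pass'})"
    state.errors = []
    state.attempts += 1
    return state

def validate_template(state: TemplateState) -> TemplateState:
    # Validator node: deterministic checks. Here we fake one failure on the
    # first attempt to exercise the self-healing edge.
    if state.attempts < 2:
        state.errors.append("ParseException: missing clause variable")
    return state

def run_workflow(prompt: str, max_cycles: int = 5) -> TemplateState:
    state = TemplateState(prompt=prompt)
    for _ in range(max_cycles):
        state = draft_template(state)
        state = validate_template(state)
        if not state.errors:  # conditional edge: exit once validation passes
            break
    return state

result = run_workflow("late delivery penalty clause")
print(result.attempts)  # the mock validator passes on the second attempt
```

In the real implementation the loop body would be replaced by a compiled LangGraph `StateGraph` with a conditional edge from the Validator back to the Architect, and a `max_cycles` guard to prevent unbounded retries.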
This loop continues until the template compiles cleanly, ensuring the final output is always syntactically valid.

Alignment with Project Deliverables

This approach directly satisfies the core requirements:
Defined Personas: LangGraph allows for strict prompting and persona definition for the generation nodes.
Native Tool Calling: LangGraph natively supports structured tool calling, making integration with Accord Project validation tools seamless.
Model Flexibility: The architecture is model-agnostic, easily supporting the option to choose different AI models.
CLI Integration: The graph can be wrapped in a Python backend or executed via TypeScript, cleanly exposing the workflow to a natural-text CLI interface.
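To make the CLI-integration point concrete, here is a minimal sketch of a stdlib-only wrapper around the workflow. The command name `ap-draft`, the `--model` flag, and the `run_workflow` entry point are all hypothetical placeholders, not existing Accord Project tooling; in the real tool `run_workflow` would invoke the compiled graph:

```python
import argparse

def run_workflow(prompt: str, model: str = "default") -> str:
    # Placeholder for the compiled LangGraph app; the real version would
    # call something like graph.invoke({"prompt": prompt}) and return the
    # generated TemplateMark, Concerto model, and logic artifacts.
    return f"[generated template for: {prompt} using model {model}]"

def main(argv=None):
    parser = argparse.ArgumentParser(
        prog="ap-draft",  # hypothetical command name
        description="Draft an Accord Project template from natural text.")
    parser.add_argument("prompt", help="natural-language requirements")
    parser.add_argument("--model", default="default",
                        help="which LLM backend to use (model-agnostic)")
    args = parser.parse_args(argv)
    print(run_workflow(args.prompt, args.model))

if __name__ == "__main__":
    main()
```

The same shape would work if the orchestrator were instead written in TypeScript: the CLI surface stays identical, only the engine behind `run_workflow` changes.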
Questions for the Mentors
Before I begin drafting my formal GSoC proposal, I would love to get your thoughts on a few architectural decisions:
Backend Preference: Does the community strongly prefer the orchestrator to be written entirely in TypeScript to match the broader Accord ecosystem, or is a Python-based LangGraph engine (accessed via the CLI) acceptable?
Validation Edge Cases: Are there specific edge cases in template validation (e.g., complex Concerto inheritance or specific logic constraints) that you would want the self-correction loop to prioritize testing?

Looking forward to your feedback and to contributing to the Accord Project!
Best,
Akshat Gupta