Merged
33 commits
62a6906
docs: add SPARQL conversion layer ideation, plan, and task breakdown
Feb 27, 2026
b8bdc76
feat(sparql): add algebra types, shared utilities, and package exports
Feb 27, 2026
3c08403
docs: broaden tasks skill validation to include non-test checks
Feb 27, 2026
f42e8a1
feat(sparql): implement IR→algebra, algebra→string, and result mapping
Feb 27, 2026
da1743b
feat(sparql): wire convenience wrappers and add golden tests
Feb 27, 2026
0ceedc5
feat(sparql): add Fuseki integration tests
Feb 27, 2026
646a3da
docs: update skills (parallel agents, review follow-ups, integration …
Feb 27, 2026
219aace
test(sparql): add Fuseki Docker setup, negative tests, and fix integr…
Feb 27, 2026
28083f1
feat(sparql): fix URI/literal FILTER, nested grouping, expand Fuseki …
Feb 27, 2026
5ad1932
docs: add review phases 10-14 to SPARQL conversion plan
Feb 27, 2026
9eda968
docs: add recommendations to open questions in phases 10-14
Feb 27, 2026
3147f48
docs: add implementation details and code examples to phases 10-14
Feb 27, 2026
09ff2e3
docs: add task breakdown and validation criteria for phases 10-14
Feb 28, 2026
fc7224a
feat: recursive nesting in result mapping (Phase 10)
Feb 28, 2026
8c81f2a
fix: NOT parenthesization, some in compounds, COUNT→HAVING (Phase 13)
Feb 28, 2026
edc6274
fix: expression projection and context path tautology (Phase 14)
Feb 28, 2026
6aec0f2
feat: Phase 12 — inline where filter lowering
Feb 28, 2026
d0dd73e
test: tighten assertions and add result-mapping unit tests (Phase 11)
Feb 28, 2026
4aba4da
fix: Fuseki bugs, operator parenthesization, literal where mapping, n…
Mar 1, 2026
e55d3ca
docs: add final review section and SPARQL algebra layer documentation
Mar 1, 2026
86c1caa
fix: resolve 6 review gaps — literal escaping, EXISTS patterns, colli…
Mar 1, 2026
6b81dc4
docs: update SPARQL algebra docs to reflect resolved gaps
Mar 1, 2026
be8092f
feat: add SparqlStore base class, replace hardcoded constants, fix ty…
Mar 2, 2026
514c79e
docs: add ideation docs for named graphs, computed expressions, and a…
Mar 2, 2026
130d4e1
docs: add development section to README tracing the full query pipeline
Mar 2, 2026
935c7ff
docs: update all documentation to reflect SparqlStore base class and …
Mar 2, 2026
725aa64
docs: tone down FusekiStore references to test-only sidenote
Mar 2, 2026
9640002
docs: add wrapup report and remove plan doc
Mar 2, 2026
b65e156
docs: comprehensive wrapup report, changeset, and updated wrapup skill
Mar 2, 2026
63ee2c8
docs: clarify SPARQL subquery limitation does not affect DSL sub-selects
Mar 2, 2026
0681be8
docs: add PR reference and doc links to wrapup skill report guidelines
Mar 2, 2026
8d71120
docs: clarify why changeset is written directly in wrapup skill
Mar 2, 2026
970f96e
docs: fix changeset CLI command name in wrapup skill
Mar 2, 2026
45 changes: 45 additions & 0 deletions .changeset/sparql-conversion-layer.md
@@ -0,0 +1,45 @@
---
"@_linked/core": minor
---

Add SPARQL conversion layer — compiles Linked IR queries into executable SPARQL and maps results back to typed DSL objects.

**New exports from `@_linked/core/sparql`:**

- **`SparqlStore`** — abstract base class for SPARQL-backed stores. Extend it and implement two methods to connect any SPARQL 1.1 endpoint:
```ts
import {SparqlStore} from '@_linked/core/sparql';

class MyStore extends SparqlStore {
protected async executeSparqlSelect(sparql: string): Promise<SparqlJsonResults> { /* ... */ }
protected async executeSparqlUpdate(sparql: string): Promise<void> { /* ... */ }
}
```

- **IR → SPARQL string** convenience functions (full pipeline in one call):
- `selectToSparql(query, options?)` — SelectQuery → SPARQL string
- `createToSparql(query, options?)` — CreateQuery → SPARQL string
- `updateToSparql(query, options?)` — UpdateQuery → SPARQL string
- `deleteToSparql(query, options?)` — DeleteQuery → SPARQL string

- **IR → SPARQL algebra** (for stores that want to inspect/optimize the algebra before serialization):
- `selectToAlgebra(query, options?)` — returns `SparqlSelectPlan`
- `createToAlgebra(query, options?)` — returns `SparqlInsertDataPlan`
- `updateToAlgebra(query, options?)` — returns `SparqlDeleteInsertPlan`
- `deleteToAlgebra(query, options?)` — returns `SparqlDeleteInsertPlan`

- **Algebra → SPARQL string** serialization:
- `selectPlanToSparql(plan, options?)`, `insertDataPlanToSparql(plan, options?)`, `deleteInsertPlanToSparql(plan, options?)`, `deleteWherePlanToSparql(plan, options?)`
- `serializeAlgebraNode(node)`, `serializeExpression(expr)`, `serializeTerm(term)`

- **Result mapping** (SPARQL JSON results → typed DSL objects):
- `mapSparqlSelectResult(json, query)` — handles flat/nested/aggregated results with XSD type coercion
- `mapSparqlCreateResult(uri, query)` — echoes created fields with generated URI
- `mapSparqlUpdateResult(query)` — echoes updated fields

- **All algebra types** re-exported: `SparqlTerm`, `SparqlTriple`, `SparqlAlgebraNode`, `SparqlExpression`, `SparqlSelectPlan`, `SparqlInsertDataPlan`, `SparqlDeleteInsertPlan`, `SparqlDeleteWherePlan`, `SparqlPlan`, `SparqlOptions`, etc.
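As a rough sketch of what the flat case of result mapping involves, the snippet below walks a SPARQL 1.1 JSON results document and produces `{id, name}`-style rows. The types and the `mapFlatRows` function are simplified, hand-written stand-ins for illustration, not the package's actual `mapSparqlSelectResult` (which also handles nesting, aggregation, and type coercion):

```typescript
// Minimal shape of a SPARQL 1.1 JSON results document (illustrative subset).
type Binding = Record<string, {type: string; value: string}>;
type SparqlJson = {head: {vars: string[]}; results: {bindings: Binding[]}};

// Flatten each binding row into a result object: idVar supplies `id`,
// `fields` maps output keys to SPARQL variable names.
function mapFlatRows(
  json: SparqlJson,
  idVar: string,
  fields: Record<string, string>,
): Record<string, string>[] {
  return json.results.bindings.map(row => {
    const out: Record<string, string> = {id: row[idVar].value};
    for (const [key, variable] of Object.entries(fields)) {
      if (row[variable]) out[key] = row[variable].value; // OPTIONAL may leave it unbound
    }
    return out;
  });
}

const json: SparqlJson = {
  head: {vars: ['a0', 'a0_name']},
  results: {
    bindings: [
      {
        a0: {type: 'uri', value: 'http://ex.org/p1'},
        a0_name: {type: 'literal', value: 'Semmy'},
      },
    ],
  },
};

const rows = mapFlatRows(json, 'a0', {name: 'a0_name'});
// → [{id: 'http://ex.org/p1', name: 'Semmy'}]
```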

**Bug fixes included:**
- Fixed `isNodeReference()` in MutationQuery.ts — nested creates with predefined IDs (e.g., `{id: '...', name: 'Bestie'}`) now correctly insert entity data instead of only creating the link.

See [SPARQL Algebra Layer docs](./documentation/sparql-algebra.md) for the full type reference, conversion rules, and store implementation guide.
1 change: 1 addition & 0 deletions .gitignore
Expand Up @@ -2,3 +2,4 @@ node_modules/
lib/
.claude/
.agents/
OLD
130 changes: 124 additions & 6 deletions README.md
@@ -6,6 +6,7 @@ Linked core gives you a type-safe, schema-parameterized query language and SHACL
## Linked core offers

- **Schema-Parameterized Query DSL**: TypeScript-embedded queries driven by your Shape definitions.
- **Fully Inferred Result Types**: The TypeScript return type of every query is automatically inferred from the selected paths — no manual type annotations needed. Select `p.name` and get `{id: string; name: string}[]`. Select `p.friends.name` and get nested result types. This works for all operations: select, create, update, and delete.
- **Shape Classes (SHACL)**: TypeScript classes that generate SHACL shape metadata.
- **Object-Oriented Data Operations**: Query, create, update, and delete data using the same Shape-based API.
- **Storage Routing**: `LinkedStorage` routes query objects to your configured store(s) that implement `IQuadStore`.
@@ -31,11 +32,6 @@ npm run setup
- `.claude/agents`
- `.agents/agents`

```typescript
import {Shape, LinkedStorage} from '@_linked/core';
import {linkedPackage} from '@_linked/core/utils/Package';
```

## Related packages

- `@_linked/rdf-mem-store`: in-memory RDF store that implements `IQuadStore`.
Expand All @@ -44,6 +40,128 @@ import {linkedPackage} from '@_linked/core/utils/Package';
## Documentation

- [Intermediate Representation (IR)](./documentation/intermediate-representation.md)
- [SPARQL Algebra Layer](./documentation/sparql-algebra.md)

## How Linked works — from shapes to query results

Linked turns TypeScript classes into a type-safe query pipeline. Here is the full flow, traced through a single example:

```
Shape class → DSL query → IR (AST) → Target query language → Execute → Map results
```

### 1. SHACL shapes from TypeScript classes

Shape classes use decorators to generate SHACL metadata. These shapes define the data model, drive the DSL's type safety, and can be synced to a store for runtime data validation.

```typescript
@linkedShape
export class Person extends Shape {
static targetClass = schema('Person');

@literalProperty({path: schema('name'), maxCount: 1})
get name(): string { return ''; }

@objectProperty({path: schema('knows'), shape: Person})
get friends(): ShapeSet<Person> { return null; }
}
```

### 2. Type-safe query DSL with inferred result types

The DSL uses these shape classes to provide compile-time checked queries. You cannot write a query that references a property not defined on the shape. The result type is **fully inferred** from the selected paths — no manual type annotations needed:

```typescript
// TypeScript infers: Promise<{id: string; name: string}[]>
const result = await Person.select(p => p.name);

// TypeScript infers: Promise<{id: string; friends: {id: string; name: string}[]}[]>
const nested = await Person.select(p => p.friends.name);
```

### 3. SHACL-based Intermediate Representation (IR)

The DSL compiles to a backend-agnostic AST — the [Intermediate Representation](./documentation/intermediate-representation.md). This is the contract between the DSL and any store implementation.

```json
{
"kind": "select",
"root": { "kind": "shape_scan", "shape": ".../Person", "alias": "a0" },
"projection": [
{ "alias": "a1", "expression": { "kind": "property_expr", "sourceAlias": "a0", "property": ".../name" } }
],
"resultMap": [{ "key": ".../name", "alias": "a1" }]
}
```

The IR uses full SHACL-derived URIs for shapes and properties. Any store that implements `IQuadStore` receives these IR objects and translates them into its native query language.
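Because the IR is plain data, a store's translation layer can walk it directly. A minimal sketch of that idea follows; the type shapes are a hand-written subset for illustration, not the package's actual IR type exports:

```typescript
// Illustrative subset of the IR select query shown above.
type PropertyExpr = {kind: 'property_expr'; sourceAlias: string; property: string};
type ProjectionEntry = {alias: string; expression: PropertyExpr};
type SelectIR = {
  kind: 'select';
  root: {kind: 'shape_scan'; shape: string; alias: string};
  projection: ProjectionEntry[];
};

// A store can walk the projection to find which property URIs it must fetch.
function projectedProperties(ir: SelectIR): string[] {
  return ir.projection.map(p => p.expression.property);
}

const ir: SelectIR = {
  kind: 'select',
  root: {kind: 'shape_scan', shape: '.../Person', alias: 'a0'},
  projection: [
    {alias: 'a1', expression: {kind: 'property_expr', sourceAlias: 'a0', property: '.../name'}},
  ],
};

projectedProperties(ir); // → ['.../name']
```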

### 4. IR → SPARQL Algebra

For SPARQL-backed stores, the IR is converted into a formal [SPARQL algebra](./documentation/sparql-algebra.md) — a tree of typed nodes aligned with the SPARQL 1.1 specification.

```
SparqlSelectPlan {
projection: [?a0, ?a0_name]
algebra: LeftJoin(
BGP(?a0 rdf:type <Person>),
BGP(?a0 <name> ?a0_name) ← wrapped in OPTIONAL
)
}
```

Properties are wrapped in `LeftJoin` (OPTIONAL) so missing values don't eliminate result rows.
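Since algebra nodes are ordinary objects, the plan above can be hand-built. The shapes below are simplified stand-ins for the real algebra types (see the algebra docs for the actual definitions):

```typescript
// Simplified algebra node shapes, for illustration only.
type Term = {kind: 'variable' | 'iri'; value: string};
type Triple = {subject: Term; predicate: Term; object: Term};
type Bgp = {type: 'bgp'; triples: Triple[]};
type LeftJoin = {type: 'leftjoin'; left: Bgp; right: Bgp};

const v = (value: string): Term => ({kind: 'variable', value});
const iri = (value: string): Term => ({kind: 'iri', value});

// LeftJoin = OPTIONAL: rows from `left` survive even when `right` is unbound.
const plan: LeftJoin = {
  type: 'leftjoin',
  left: {
    type: 'bgp',
    triples: [{subject: v('a0'), predicate: iri('rdf:type'), object: iri('.../Person')}],
  },
  right: {
    type: 'bgp',
    triples: [{subject: v('a0'), predicate: iri('.../name'), object: v('a0_name')}],
  },
};
```

A store-side optimization pass is then just a function over this tree, for example pruning an empty right-hand BGP before serialization.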

### 5. SPARQL Algebra → SPARQL string

The algebra is a plain data structure — stores can inspect or optimize it before serialization (e.g., rewriting patterns, adding graph clauses, or pruning redundant joins).

The algebra tree is then serialized into a SPARQL query string with automatic PREFIX generation:

```sparql
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
SELECT DISTINCT ?a0 ?a0_name
WHERE {
?a0 rdf:type <.../Person> .
OPTIONAL {
?a0 <.../name> ?a0_name .
}
}
```

### 6. Execute and map results

The SPARQL endpoint returns JSON results, which are mapped back into typed result objects:

```
Endpoint returns:                 Mapped to:
┌──────────┬──────────┐           ┌─────────────────────────────────┐
│ a0       │ a0_name  │           │ { id: ".../p1", name: "Semmy" } │
│ .../p1   │ "Semmy"  │     →     │ { id: ".../p2", name: "Moa" }   │
│ .../p2   │ "Moa"    │           │ ...                             │
└──────────┴──────────┘           └─────────────────────────────────┘
```

Values are automatically coerced: `xsd:boolean` → `boolean`, `xsd:integer` → `number`, `xsd:dateTime` → `Date`. Nested traversals are grouped and deduplicated into nested result objects.
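The datatype coercion can be pictured as a small dispatch on the literal's XSD datatype. The sketch below illustrates the idea; the function name and exact set of handled datatypes are assumptions, not the mapper's actual code:

```typescript
// Coerce a SPARQL JSON literal value based on its XSD datatype URI.
const XSD = 'http://www.w3.org/2001/XMLSchema#';

function coerceLiteral(value: string, datatype?: string): string | number | boolean | Date {
  switch (datatype) {
    case XSD + 'boolean':
      return value === 'true';
    case XSD + 'integer':
    case XSD + 'decimal':
    case XSD + 'double':
      return Number(value);
    case XSD + 'dateTime':
      return new Date(value);
    default:
      return value; // plain strings (and unknown datatypes) pass through
  }
}

coerceLiteral('42', XSD + 'integer'); // → 42
coerceLiteral('true', XSD + 'boolean'); // → true
```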

### The SparqlStore base class

`SparqlStore` handles this entire pipeline. Concrete stores only implement the transport:

```typescript
import { SparqlStore } from '@_linked/core/sparql';

class MyStore extends SparqlStore {
protected async executeSparqlSelect(sparql: string) {
// Send SPARQL to your endpoint, return JSON results
}
protected async executeSparqlUpdate(sparql: string) {
// Send SPARQL UPDATE to your endpoint
}
}
```

See the [SPARQL Algebra Layer docs](./documentation/sparql-algebra.md) for the full type reference, conversion algorithm, and store implementation guide.

## Linked Package Setup

@@ -478,7 +596,7 @@ All IR types are available from `@_linked/core/queries/IntermediateRepresentatio

**Store packages:**

- `@_linked/sparql-store` — SPARQL endpoint store (coming soon)
- `SparqlStore` base class — included in `@_linked/core/sparql`, extend it for any SPARQL endpoint
- `@_linked/rdf-mem-store` — in-memory RDF store

## Changelog
10 changes: 10 additions & 0 deletions docs/agents/skills/implementation/SKILL.md
@@ -20,6 +20,16 @@ Run only after explicit user confirmation to enter implementation mode, with an
7. Continue to next phase without pausing only if there are no deviations and no major problems.
8. If any deviation/blocker/major risk appears, pause and report.

## Parallel execution

When the plan marks phases as parallelizable, use the Task tool (or any available sub-agent spawning tool) to run them concurrently:

- **Spawn one sub-agent per independent phase** using `run_in_background: true`. Give each agent a self-contained prompt with all context it needs (file paths, types, contracts, test specifications, validation criteria).
- **Avoid file conflicts**: If two phases write to the same file, combine them into a single agent or sequence them. Different agents should own different files.
- **Shared files** (barrel exports, test config): Let each agent add its own entries. After all agents complete, verify the shared files have no duplicates or conflicts.
- **Wait and verify**: After all parallel agents finish, run a full integration check (compile + full test suite) before committing. This catches cross-agent conflicts in shared files.
- **Single commit for parallel group**: All work from a parallel group goes into one commit after integration verification passes.

## Required pause report content

- What was done
1 change: 1 addition & 0 deletions docs/agents/skills/plan/SKILL.md
@@ -20,6 +20,7 @@ Run only when the user explicitly confirms plan mode (for example: converting id
- Small code examples
- Potential pitfalls
- Remaining unclear areas/decisions
- **Inter-component contracts**: When the architecture has separable parts (layers, modules, packages), make the contracts between them explicit — type definitions, function signatures, shared data structures. These contracts enable parallel implementation in tasks mode.
4. Mention tradeoffs only to explain why chosen paths were selected.
5. Continuously refine the plan with user feedback until it is explicitly approved for implementation.

26 changes: 19 additions & 7 deletions docs/agents/skills/review/SKILL.md
@@ -35,27 +35,39 @@ After decisions are clear:
- create separate ideation docs only for very different, large deferred tasks
- assign the next available 3-digit prefix in `docs/ideas` for each new ideation doc

Then report in chat what was updated and ask the user to review those updates.
Then report in chat what was updated.
Do not create a separate review report file in this mode.

## Follow-up questions before switching modes

**After updating the plan with new phases, always ask the user implementation-specific follow-up questions before offering to switch modes.** New phases added during review are often under-specified because they came from gap analysis rather than upfront design. Proactively ask about:

- **Placement decisions**: Where should new files/configs live? (e.g. project root vs subfolder)
- **Tool/dependency choices**: Which specific library, image, or tool version to use?
- **Configuration details**: Ports, environment variables, naming conventions
- **Scope boundaries**: How thorough should tests/error messages be? What's worth the maintenance cost vs what's overkill?
- **Anything the agent is unsure about** that would affect the implementation

Only offer to switch to tasks mode after these questions are answered. This prevents wasted implementation effort from under-specified phases.


## Guardrails

- Do not perform cleanup/release tasks in this mode; use wrapup mode for that.
- Do not remove `docs/plans/<nnn>-<topic>.md` in review mode; plan removal happens in wrapup after report approval.
- If big remaining work is identified, discuss tradeoffs/solutions in chat first.
- Only convert review findings into new not-yet-completed phases/tasks after the user confirms scope and approach.
- After adding new tasks, ask the user to review the updated plan and explicitly ask whether to start implementation with the first new phase.
- For newly uncovered work, do not switch directly from review to implementation; switch to tasks mode first.
- After adding new phases/tasks, ask the user to review the updated plan and ask whether to switch to tasks mode to refine them.
- For newly uncovered work, **always switch to tasks mode first** — never directly to implementation. Tasks mode validates that phases have proper validation criteria, dependency graphs, and parallel opportunities before implementation begins.
- If the user's response to review findings involves clarifying approach or scope (e.g. "do X but not Y", "let's use approach A"), treat this as still in the clarification loop — ask follow-up questions for any remaining ambiguity before switching modes.

## Exit criteria

- Gaps are clarified with explicit user decisions (now vs future, and chosen approach where needed).
- If now-work exists, plan was updated with new phases/tasks and user was asked whether to start implementation of the first new phase.
- If now-work exists, plan was updated with new phases/tasks and user was asked whether to switch to tasks mode.
- If future-work exists, ideation docs were created according to grouping rules and user was informed.
- User has explicitly confirmed whether to:
- stay in review mode,
- switch to tasks mode,
- switch to implementation mode for approved next phase,
- switch to tasks mode (required path for any new implementation work),
- switch to ideation mode for future work,
- or move to wrapup mode.
- or move to wrapup mode (only when no new implementation work remains).
42 changes: 39 additions & 3 deletions docs/agents/skills/tasks/SKILL.md
@@ -12,10 +12,45 @@ Run only when the user explicitly confirms tasks mode.
## Steps

1. Update the active plan doc in `docs/plans/<nnn>-<topic>.md`. Task breakdown MUST be persisted in this same on-disk plan file.
2. Define ordered implementation phases.
2. Define implementation phases.
3. Define concrete tasks under each phase.
4. Add explicit validation criteria per phase (for example: unit tests, integration tests, build/typecheck commands, targeted runtime checks).
5. Ensure phases are commit-friendly (one commit per phase).
5. Write detailed validation specifications for every phase (see **Validation specification** below).
6. Ensure phases are commit-friendly (one commit per phase).

## Parallel execution

Phases should be designed for maximum parallelism — different agents may implement different phases or tasks concurrently.

- **Identify the dependency graph**: Which phases depend on which? Which can run in parallel? Mark this explicitly in the task breakdown.
- **Contracts first**: If the plan defines inter-component contracts (types, interfaces, shared data structures), schedule the contract/types phase first. Once contracts are established, phases that only depend on those contracts can run in parallel.
- **Stub boundaries**: When a phase depends on another phase's output, note what stubs or mocks are needed so it can proceed independently. For example: "Agent B can stub `irToAlgebra()` with hand-crafted algebra objects to test `algebraToString()` independently."
- **Mark parallel groups**: Use explicit notation in the task breakdown to indicate which phases can run simultaneously. For example: "Phase 2a, 2b, 2c can run in parallel after Phase 1."
- **Integration phase last**: After all parallel phases complete, include an explicit integration phase that: (1) replaces stubs with real wiring between components, (2) verifies all parts compile and work together, (3) runs end-to-end / golden tests that exercise the full pipeline. This phase must be planned even when stubs seem trivial — it catches type mismatches, import issues, and cross-component edge cases that unit tests miss.

## Validation specification

Every phase must include a **Validation** section that describes the checks an implementing agent must perform and pass before considering the phase complete. Validation is not limited to coded tests — it includes any check that truly proves the work is correct.

**Types of validation checks** (use whichever are appropriate for the phase):
- **Unit/integration tests**: Coded test files with named test cases and concrete assertions.
- **Compilation/type-check**: e.g. `npm run compile` passes with no errors.
- **Runtime checks**: e.g. "execute the generated SPARQL against a running store and verify results".
- **Manual structural checks**: e.g. "assert the exported function is importable from the barrel", "assert the generated file exists and contains expected content".
- **HTTP/network checks**: e.g. "POST to the endpoint and verify 200 response with expected payload".

**When describing coded tests:**
- **Name each test case** with the fixture or scenario it covers (e.g. `` `selectName` — `Person.select(p => p.name)` ``).
- **State concrete assertions** — not just "test that it works" but what specifically must be true. Use "assert" language: "assert result is array of length 4", "assert field `name` equals `'Semmy'`", "assert plan type is `'select'`".
- **Include input and expected output** where practical — hand-crafted input objects, specific field values, structural expectations (e.g. "assert algebra contains a LeftJoin wrapping the property triple").
- **Cover edge cases explicitly** — null handling, missing values, type coercion, empty inputs.
- **Specify test file paths** — e.g. `src/tests/sparql-algebra.test.ts`.

**When describing non-test validation:**
- **State the exact command or check** to run and what a passing result looks like.
- **Be specific about success criteria** — "compiles" is too vague; "`npx tsc --noEmit` exits 0 with no errors" is clear.

The validation specifications serve as the phase's acceptance criteria: a phase is only complete when all described checks pass.

## Guardrails

@@ -24,5 +59,6 @@ Run only when the user explicitly confirms tasks mode.
## Exit criteria

- Every phase has tasks and validation criteria.
- Execution order/dependencies are clear.
- Dependency graph and parallel opportunities are explicit.
- Stubs needed for parallel execution are noted.
- User has explicitly confirmed whether to switch to implementation mode or remain in tasks mode.