21 changes: 12 additions & 9 deletions README.md
@@ -418,19 +418,19 @@ basic-memory tool edit-note docs/setup --operation append --content $'\n- Added

**Content Management:**
```
write_note(title, content, folder, tags) - Create or update notes
read_note(identifier, page, page_size) - Read notes by title or permalink
write_note(title, content, folder, tags, output_format="text"|"json") - Create or update notes
read_note(identifier, page, page_size, output_format="text"|"json") - Read notes by title or permalink
read_content(path) - Read raw file content (text, images, binaries)
view_note(identifier) - View notes as formatted artifacts
edit_note(identifier, operation, content) - Edit notes incrementally
move_note(identifier, destination_path) - Move notes with database consistency
delete_note(identifier) - Delete notes from knowledge base
edit_note(identifier, operation, content, output_format="text"|"json") - Edit notes incrementally
move_note(identifier, destination_path, output_format="text"|"json") - Move notes with database consistency
delete_note(identifier, output_format="text"|"json") - Delete notes from knowledge base
```

**Knowledge Graph Navigation:**
```
build_context(url, depth, timeframe) - Navigate knowledge graph via memory:// URLs
recent_activity(type, depth, timeframe) - Find recently updated information
build_context(url, depth, timeframe, output_format="json"|"text") - Navigate knowledge graph via memory:// URLs
recent_activity(type, depth, timeframe, output_format="text"|"json") - Find recently updated information
list_directory(dir_name, depth) - Browse directory contents with filtering
```

@@ -443,12 +443,15 @@ search_by_metadata(filters, limit, offset, project) - Structured frontmatter sea

**Project Management:**
```
list_memory_projects() - List all available projects
create_memory_project(project_name, project_path) - Create new projects
list_memory_projects(output_format="text"|"json") - List all available projects
create_memory_project(project_name, project_path, output_format="text"|"json") - Create new projects
get_current_project() - Show current project stats
sync_status() - Check synchronization status
```

`output_format` defaults to `"text"` for these tools, preserving the current human-readable responses.
`build_context` defaults to `"json"` and can be switched to `"text"` when compact markdown output is preferred.
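As a toy sketch of this convention (not the real implementation — the payload shape and the stub name are invented for illustration), a tool following the pattern looks like:

```python
from typing import Literal

def read_note_stub(identifier: str,
                   output_format: Literal["text", "json"] = "text"):
    """Illustrative stand-in: "text" keeps the human-readable default,
    "json" returns a structured payload for machine-readable clients."""
    note = {"title": identifier, "content": "example body"}
    if output_format == "json":
        return note  # structured dict
    return f"# {note['title']}\n\n{note['content']}"  # readable text
```

Tools like `build_context` simply flip the default to `"json"`.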

**Cloud Discovery (opt-in):**
```
cloud_info() - Show optional Cloud overview and setup guidance
22 changes: 15 additions & 7 deletions docs/mcp-ui-bakeoff-instructions.md
@@ -83,18 +83,26 @@ Manual check:

---

### 2) ASCII / ANSI TUI Output
### 2) Text / JSON Output Modes

Tools:
- `search_notes(output_format="ascii" | "ansi")`
- `read_note(output_format="ascii" | "ansi")`
- `search_notes(output_format="text" | "json")`
- `read_note(output_format="text" | "json")`
- `write_note(output_format="text" | "json")`
- `edit_note(output_format="text" | "json")`
- `recent_activity(output_format="text" | "json")`
- `list_memory_projects(output_format="text" | "json")`
- `create_memory_project(output_format="text" | "json")`
- `delete_note(output_format="text" | "json")`
- `move_note(output_format="text" | "json")`
- `build_context(output_format="json" | "text")`

Expect:
- ASCII table for search, header + content preview for note.
- ANSI variants include color escape codes.
- `text` mode preserves existing human-readable responses.
- `json` mode returns structured dict/list payloads for machine-readable clients.

Automated:
- `uv run pytest test-int/mcp/test_output_format_ascii_integration.py`
- `uv run pytest test-int/mcp/test_output_format_json_integration.py`

---

@@ -125,6 +133,6 @@ Fill in after running:

- Tool‑UI (React): __
- MCP‑UI SDK (embedded): __
- ASCII/ANSI: __
- Text/JSON modes: __

Decision + rationale: __
4 changes: 2 additions & 2 deletions docs/post-v0.18.0-test-plan.md
@@ -180,7 +180,7 @@ Key finding: **FastEmbed (384-d local ONNX) matches or exceeds OpenAI (1536-d) q
### Existing coverage anchor points

- `tests/mcp/test_tool_contracts.py`
- `test-int/mcp/test_output_format_ascii_integration.py`
- `test-int/mcp/test_output_format_json_integration.py`
- `test-int/mcp/test_ui_sdk_integration.py`

### Gaps to close — DONE
@@ -288,7 +288,7 @@ Run after automated tests pass.
- Routing: verify success/failure paths with and without API key.
- Permalink routing: read/write/search notes across projects with colliding titles.
- Permalink routing: verify memory URL routing correctness.
- UI/TUI: call `search_notes` and `read_note` with UI variants and `output_format=ascii|ansi`.
- UI/TUI: call `search_notes` and `read_note` with UI variants and `output_format=text|json`.
- UI/TUI: verify payload/resource format and metadata completeness.

## Implementation Backlog (Ordered)
18 changes: 12 additions & 6 deletions src/basic_memory/cli/commands/tool.py
@@ -160,11 +160,14 @@ async def _read_note_json(
search_type="title",
project=project_name,
workspace=workspace,
output_format="json",
)
if title_results and hasattr(title_results, "results") and title_results.results:
result = title_results.results[0]
if result.permalink:
entity_id = await knowledge_client.resolve_entity(result.permalink)
results = title_results.get("results", []) if isinstance(title_results, dict) else []
if results:
result = results[0]
permalink = result.get("permalink")
if permalink:
entity_id = await knowledge_client.resolve_entity(permalink)

if entity_id is None:
raise ValueError(f"Could not find note matching: {identifier}")
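The dict-based lookup above (replacing attribute access on a typed response object) can be sketched as a small standalone helper; the name `first_permalink` is ours, not the project's API:

```python
from typing import Any, Optional

def first_permalink(payload: Any) -> Optional[str]:
    """Return the first result's permalink from a JSON search payload, or None.

    Mirrors the defensive pattern in the diff: tolerate non-dict payloads
    and missing keys instead of assuming a typed response.
    """
    results = payload.get("results", []) if isinstance(payload, dict) else []
    if results:
        return results[0].get("permalink")
    return None
```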
@@ -635,10 +638,13 @@ def build_context(
page=page,
page_size=page_size,
max_related=max_related,
output_format="text" if format == "text" else "json",
)
)
# build_context now returns a slimmed dict (already serializable)
print(json.dumps(result, indent=2, ensure_ascii=True, default=str))
if format == "json":
print(json.dumps(result, indent=2, ensure_ascii=True, default=str))
else:
print(result)
except ValueError as e:
typer.echo(f"Error: {e}", err=True)
raise typer.Exit(1)
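The print dispatch above reduces to a small renderer; a minimal sketch under the same branching logic (the function name is illustrative, not part of the CLI):

```python
import json

def render_result(result, output_format: str = "json") -> str:
    """Pretty-print JSON for machine use; fall back to plain text otherwise,
    matching the CLI branch in the diff."""
    if output_format == "json":
        return json.dumps(result, indent=2, ensure_ascii=True, default=str)
    return str(result)
```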
19 changes: 10 additions & 9 deletions src/basic_memory/mcp/tools/build_context.py
@@ -1,6 +1,6 @@
"""Build context tool for Basic Memory MCP server."""

from typing import Optional
from typing import Optional, Literal

from loguru import logger
from fastmcp import Context
@@ -190,7 +190,7 @@ def _format_context_markdown(graph: GraphContext, project: str) -> str:

Format options:
- "json" (default): Slimmed JSON with redundant fields removed
- "markdown": Compact markdown text for LLM consumption
- "text": Compact markdown text for LLM consumption
""",
)
async def build_context(
@@ -202,7 +202,7 @@ async def build_context(
page: int = 1,
page_size: int = 10,
max_related: int = 10,
format: str = "json",
output_format: Literal["json", "text"] = "json",
context: Context | None = None,
) -> dict | str:
"""Get context needed to continue a discussion within a specific project.
@@ -225,12 +225,13 @@
page: Page number of results to return (default: 1)
page_size: Number of results to return per page (default: 10)
max_related: Maximum number of related results to return (default: 10)
format: Response format - "json" for slimmed JSON dict, "markdown" for compact text
output_format: Response format - "json" for slimmed JSON dict,
"text" for compact markdown text
context: Optional FastMCP context for performance caching.

Returns:
dict (format="json"): Slimmed JSON with redundant fields removed
str (format="markdown"): Compact markdown representation
dict (output_format="json"): Slimmed JSON with redundant fields removed
str (output_format="text"): Compact markdown representation

Examples:
# Continue a specific discussion
@@ -239,8 +240,8 @@
# Get deeper context about a component
build_context("work-docs", "memory://components/memory-service", depth=2)

# Get markdown output for compact context
build_context("research", "memory://specs/search", format="markdown")
# Get text output for compact context
build_context("research", "memory://specs/search", output_format="text")

Raises:
ToolError: If project doesn't exist or depth parameter is invalid
@@ -276,7 +277,7 @@ async def build_context(
max_related=max_related,
)

if format == "markdown":
if output_format == "text":
return _format_context_markdown(graph, active_project.name)

return _slim_context(graph)
37 changes: 29 additions & 8 deletions src/basic_memory/mcp/tools/chatgpt_tools.py
@@ -13,22 +13,41 @@
from basic_memory.mcp.server import mcp
from basic_memory.mcp.tools.search import search_notes
from basic_memory.mcp.tools.read_note import read_note
from basic_memory.schemas.search import SearchResponse
from basic_memory.config import ConfigManager
from basic_memory.schemas.search import SearchResponse, SearchResult


def _format_search_results_for_chatgpt(results: SearchResponse) -> List[Dict[str, Any]]:
def _format_search_results_for_chatgpt(
results: SearchResponse | list[SearchResult] | list[dict[str, Any]] | dict[str, Any],
) -> List[Dict[str, Any]]:
"""Format search results according to ChatGPT's expected schema.

Returns a list of result objects with id, title, and url fields.
"""
if isinstance(results, SearchResponse):
raw_results: list[SearchResult] | list[dict[str, Any]] = results.results
elif isinstance(results, dict):
nested_results = results.get("results")
raw_results = nested_results if isinstance(nested_results, list) else []
else:
raw_results = results

formatted_results = []

for result in results.results:
for result in raw_results:
if isinstance(result, SearchResult):
title = result.title
permalink = result.permalink
elif isinstance(result, dict):
title = result.get("title")
permalink = result.get("permalink")
else:
raise TypeError(f"Unexpected result type: {type(result).__name__}")

formatted_result = {
"id": result.permalink or f"doc-{len(formatted_results)}",
"title": result.title if result.title and result.title.strip() else "Untitled",
"url": result.permalink or "",
"id": permalink or f"doc-{len(formatted_results)}",
"title": title if isinstance(title, str) and title.strip() else "Untitled",
"url": permalink or "",
}
formatted_results.append(formatted_result)

@@ -102,6 +121,7 @@ async def search(
page=1,
page_size=10, # Reasonable default for ChatGPT consumption
search_type="text", # Default to full-text search
output_format="json",
context=context,
)

@@ -115,10 +135,11 @@
}
else:
# Format successful results for ChatGPT
formatted_results = _format_search_results_for_chatgpt(results)
raw_results = results.get("results", []) if isinstance(results, dict) else []
formatted_results = _format_search_results_for_chatgpt(raw_results)
search_results = {
"results": formatted_results,
"total_count": len(results.results), # Use actual count from results
"total_count": len(raw_results), # Use actual count from results
"query": query,
}
logger.info(f"Search completed: {len(formatted_results)} results returned")
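The normalization in `_format_search_results_for_chatgpt` can be sketched standalone for the dict-payload case (a simplified version that omits the typed `SearchResult` branch):

```python
from typing import Any, Dict, List

def format_results_for_chatgpt(raw_results: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    """Map search hits to ChatGPT's {id, title, url} schema, with fallbacks
    for missing permalinks and blank titles."""
    formatted: List[Dict[str, Any]] = []
    for result in raw_results:
        title = result.get("title")
        permalink = result.get("permalink")
        formatted.append({
            "id": permalink or f"doc-{len(formatted)}",
            "title": title if isinstance(title, str) and title.strip() else "Untitled",
            "url": permalink or "",
        })
    return formatted
```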