An LLM-powered agent assistant for Datasette
Install this plugin in the same environment as Datasette.
```bash
datasette install datasette-agent
```
Visit `/-/agent` to start a conversation with the chat assistant.
The agent uses datasette-llm to call language models. Configure a default model for it before visiting /-/agent, for example in datasette.yml:
```yaml
plugins:
  datasette-llm:
    default_model: gpt-5.4-mini
```
The "Explore with AI agent" entries that appear in the database and table action menus launch a background agent that explores the selected database or table and writes a report. Reports live under `/-/agent/explore/`.
Visit /-/agent/background to launch background agents directly. Each one is given a goal and runs toward it without further input. The listing includes a Stop button for cancelling agents that are still running.
This plugin registers two independent permissions:
- `datasette-agent`: required to use the chat assistant under `/-/agent`.
- `datasette-agent-explore`: required to see the "Explore with AI agent" entries in the database/table action menus and to use the explorer routes under `/-/agent/explore/`.
An actor may hold one permission without the other. The `--root` user has both.
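For other actors, the permissions can be granted through Datasette's standard permissions configuration. A minimal sketch in `datasette.yml`, assuming a hypothetical actor with id `alice`:

```yaml
# Illustrative fragment: grant both agent permissions to the
# actor whose id is "alice" (actor id is an assumption here).
permissions:
  datasette-agent:
    id: alice
  datasette-agent-explore:
    id: alice
```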
Other Datasette plugins can register additional tools for the agent using the register_agent_tools plugin hook.
Create a Datasette plugin that implements the register_agent_tools hook, returning a list of AgentTool instances:
```python
from datasette import hookimpl
from datasette_agent.tools import AgentTool


@hookimpl
def register_agent_tools(datasette):
    return [
        AgentTool(
            name="my_tool",
            description="Description of what this tool does, used by the LLM to decide when to call it.",
            input_schema={
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "The query to run",
                    },
                    "style": {
                        "type": "string",
                        "enum": ["brief", "detailed"],
                        "description": "Output style",
                    },
                },
                "required": ["query"],
            },
            fn=my_tool_handler,
        ),
    ]
```
Each tool's `fn` must be an async function that accepts `datasette` and `actor` as keyword arguments, plus any parameters defined in `input_schema`. It must return a JSON string:
```python
import json


async def my_tool_handler(datasette, actor, query, style=None):
    # Do work here...
    return json.dumps({
        "result": "Tool output that the LLM will see",
    })
```
To render rich HTML inline in the chat UI, include an `_html` key in the returned JSON. Any top-level key whose name starts with `_` is removed before the tool result is sent to the LLM, so the HTML is shown to the user but not passed back to the model:
```python
return json.dumps({
    "_html": '<div class="my-widget">Rich content here</div>',
    "summary": "Widget rendered successfully",
})
```
The HTML is rendered directly in the conversation, while the remaining keys (after any `_`-prefixed keys are removed) are returned to the LLM as the tool result.
Example tool implementation:
```python
import json


async def _render_widget(datasette, actor, database, sql):
    html = (
        '<script src="/-/static-plugins/my-plugin/widget.js" type="module"></script>\n'
        '<my-widget>\n'
        f'<script type="application/json">{json.dumps({"database": database, "sql": sql})}</script>\n'
        '</my-widget>'
    )
    return json.dumps({
        "_html": html,
        "database": database,
        "sql": sql,
        "summary": "Widget rendered successfully",
    })
```
The `_html` value is inserted into the chat as raw HTML, so it can include custom elements, scripts, and styles. The other keys (`database`, `sql`, and `summary` in this example) are what the LLM receives as the tool result.
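The key-stripping behavior described above can be sketched as a small standalone helper. This is an illustration of the documented contract, not the plugin's actual implementation, and the name `strip_private_keys` is hypothetical:

```python
import json


def strip_private_keys(tool_result_json):
    """Split a tool's JSON result into the HTML shown to the user
    and the dict passed back to the model, dropping every
    top-level key whose name starts with "_"."""
    data = json.loads(tool_result_json)
    html = data.get("_html")
    for_model = {k: v for k, v in data.items() if not k.startswith("_")}
    return html, for_model


html, for_model = strip_private_keys(json.dumps({
    "_html": "<div>widget</div>",
    "summary": "Widget rendered successfully",
}))
# html is rendered in the chat UI; for_model is what the LLM sees
```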
- datasette-agent-charts - renders charts from SQL query results using Observable Plot
- datasette-agent-openai-imagegen - generates images using OpenAI's image generation API
Start an interactive chat session with the agent from the command line:
```bash
datasette agent chat mydata.db
```
You can pass multiple database files, use `:memory:` for an in-memory database, specify a model, or send a single prompt:
```bash
datasette agent chat mydata.db -m gpt-5.4-mini
datasette agent chat mydata.db -m gpt-5.4-mini -p "List all tables"
```
Options:

- `-p`, `--prompt`: Send a single prompt and exit (non-interactive mode)
- `-m`, `--model`: LLM model to use
To see all registered agent tools, grouped by plugin:
```bash
datasette agent tools
```
Output:
```
agent:
  list_databases_and_tables
    List all available databases and their tables
  describe_table
    Get column names, types, and foreign keys for a table
  sql_query
    Execute a read-only SQL query against a database
```
Add `--json` for machine-readable output:
```bash
datasette agent tools --json
```
To set up this plugin locally, first check out the code. Run the tests like this:
```bash
cd datasette-agent
uv run pytest
```
To run the development server with a persistent internal database and GPT-5.5 as the model:
```bash
uv run datasette --internal internal.db \
  --root --secret 1 \
  -s plugins.datasette-llm.default_model gpt-5.5
```
Add extra database files to that command to enable the agent to query them.
This plugin vendors streaming-markdown by Damian Tarnawski, MIT licensed.