A powerful CLI tool for indexing, searching, and having AI-powered conversations about your projects.
Developed by okik.ai.
Contributions are welcome! Feel free to submit issues and pull requests to help improve Adist.
The repository is hosted at github.com/okikorg/adist.
⚠️ IMPORTANT: This is an actively developed project. Breaking changes may occur between versions as we continue to improve the tool. Please check the changelog when updating.
- Fast document indexing and semantic search
- Support for multiple projects
- Project-specific search
- Block-based indexing for more precise document analysis
- LLM-powered document summarization using Anthropic's Claude or local Ollama models
- Interactive chat with AI about your codebase
- Project statistics and file analysis
- Easy project switching and reindexing
- Real-time streaming responses for chat and queries
```bash
npm install -g adist
```

Initialize a project in the current directory:

```bash
adist init <project-name>
```

This will:
- Create a new project configuration
- Index all supported files in the current directory
- Optionally generate LLM summaries if you have the ANTHROPIC_API_KEY set
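For example, to index the current directory as a project (the project name below is illustrative):

```bash
# Optional: set an API key first so summaries can be generated during init
export ANTHROPIC_API_KEY='your-api-key-here'

# Index the current directory as a project named "my-app"
adist init my-app
```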
```bash
adist get "<query>"
```

Search for documents in the current project using natural language queries.
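For example (the query text is illustrative):

```bash
# Find documents related to authentication in the current project
adist get "user authentication flow"
```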
```bash
adist query "<question>"
```

Ask questions about your project and get AI-powered answers. The AI analyzes relevant documents from your codebase to provide contextual answers with proper code highlighting.
For real-time streaming responses (note that code highlighting may be limited):
adist query "<question>" --streamadist chatStart an interactive chat session with AI about your project. This mode provides:
- Persistent conversation history within the session
- Context awareness across multiple questions
- Code syntax highlighting for better readability
- Automatic retrieval of relevant documents for each query
By default, chat mode displays a loading spinner while generating responses. For real-time streaming responses, use:
```bash
adist chat --stream
```

Note that code highlighting may be limited in streaming mode.
Type `/exit` to end the chat session.
```bash
adist switch <project-name>
```

Switch to a different project for searching.
```bash
adist list
```

View all configured projects.
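For example, to see your projects and jump between them (the project name is illustrative):

```bash
adist list          # show all configured projects
adist switch my-app # make "my-app" the active project
```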
```bash
adist reindex
```

Reindex the current project. Use `--summarize` to generate LLM summaries:

```bash
adist reindex --summarize
```

This will:
- Show project statistics (total files, size, word count)
- Ask for confirmation before proceeding with summarization
- Generate summaries for each file
- Create an overall project summary
```bash
adist summary
```

View the overall project summary. To view a specific file's summary:

```bash
adist summary --file <filename>
```

```bash
adist llm-config
```

Configure which LLM provider to use:
- Anthropic Claude (cloud-based, requires API key)
  - Claude 3 Opus
  - Claude 3 Sonnet
  - Claude 3 Haiku
- OpenAI (cloud-based, requires API key)
  - GPT-4o
  - GPT-4 Turbo
  - GPT-3.5 Turbo
- Ollama (runs locally, no API key needed)
  - Choose from any locally installed models
When using Ollama, you can select from your locally installed models and customize the API URL if needed.
The tool supports several LLM-powered features using Anthropic's Claude models, OpenAI's GPT models, or local Ollama models:

- Summarization: generate summaries of your project files to help understand large codebases quickly.
- Querying: get specific answers about your codebase without manually searching through files.
- Chat: have a natural conversation about your project, with the AI maintaining context between questions.
AI interactions can be used in two modes:
- Default mode: Shows a loading spinner while generating responses with full code highlighting
- Streaming mode: Shows real-time responses as they're generated (use the `--stream` flag)
```bash
# Default mode with loading spinner and code highlighting
adist query "How does authentication work?"

# Streaming mode with real-time responses
adist query "How does authentication work?" --stream
```

You have three options for using LLM features:
Option 1: Anthropic Claude

1. Set your Anthropic API key in the environment:

   ```bash
   export ANTHROPIC_API_KEY='your-api-key-here'
   ```

2. Configure adist to use Anthropic and select your preferred model:

   ```bash
   adist llm-config
   ```

Option 2: OpenAI

1. Set your OpenAI API key in the environment:

   ```bash
   export OPENAI_API_KEY='your-api-key-here'
   ```

2. Configure adist to use OpenAI and select your preferred model:

   ```bash
   adist llm-config
   ```

Option 3: Ollama (local)

1. Install Ollama from ollama.com/download.
2. Run Ollama and pull a model (e.g., llama3):

   ```bash
   ollama pull llama3
   ```

3. Configure adist to use Ollama:

   ```bash
   adist llm-config
   ```

4. Select Ollama and choose your preferred model from the list.
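To confirm which models are installed locally before selecting one, you can use Ollama's own CLI:

```bash
# List the models installed in the local Ollama instance
ollama list
```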
After setting up your preferred LLM provider:

1. Initialize your project:

   ```bash
   adist init <project-name>
   ```

2. Start interacting with your codebase:

   ```bash
   adist query "How does the authentication system work?"
   # or
   adist chat
   ```
The tool indexes a wide range of file types including:
- Markdown (.md)
- Text (.txt)
- Code files (.js, .ts, .py, .go, etc.)
- Documentation (.rst, .asciidoc)
- Configuration files (.json, .yaml, .toml)
- And many more
The tool stores its configuration in:
- macOS: `~/Library/Application Support/adist`
- Linux: `~/.config/adist`
- Windows: `%APPDATA%\adist`
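For example, on Linux you can inspect what adist has stored:

```bash
# List adist's configuration directory on Linux
ls ~/.config/adist
```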
Recent changes:

- Improved chat and query commands with better code highlighting in non-streaming mode (default)
- Added `--stream` flag to chat and query commands for real-time streaming responses
- Added support for OpenAI models (GPT-4o, GPT-4 Turbo, GPT-3.5 Turbo)
- Added support for all Claude 3 models (Opus, Sonnet, Haiku)
- Added block-based indexing as the default method for faster and more precise document analysis
- Made block-based search the default search method for better contextual understanding
- Legacy indexing and search methods are still available as the `legacy-reindex` and `legacy-get` commands
- Added support for Ollama to run LLM features locally without an API key
- Added LLM provider configuration command for easy switching between Anthropic, OpenAI, and Ollama
- Enhanced document relevance ranking for more accurate results
- Added automatic related document discovery for richer context
- Optimized token usage to reduce API costs
The latest version of adist uses block-based indexing by default, which:
- Splits documents into semantic blocks (functions, sections, paragraphs)
- Indexes each block individually with its metadata
- Allows for more precise searching and better context understanding
- Improves AI interactions by providing more relevant code snippets
The previous full-document indexing method is still available via the `legacy-reindex` and `legacy-get` commands.
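As a quick sketch of the difference in practice (the query text is illustrative):

```bash
# Block-based search (default): matches individual functions, sections, and paragraphs
adist get "parse config file"

# Legacy full-document indexing and search, if you need the old behavior
adist legacy-reindex
adist legacy-get "parse config file"
```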
License: MIT