Maxims is a knowledge visualization platform that transforms your text and images into an interactive knowledge graph using AI embeddings. By converting your content into high-dimensional vectors and projecting them onto a 2D plane, Maxims reveals conceptual relationships that might not be immediately obvious.
- 📝 Multi-modal Content: Import and visualize both text documents and images in a unified space
- 🧠 AI Embeddings: Use advanced encoders (Cohere, VoyageAI, or local models) to understand content meaning
- 📊 Dynamic Visualization: Interactive graph with UMAP, t-SNE, and PCA dimensionality reduction
- 🏷️ Smart Organization: Tag-based filtering and real-time positioning updates
- 🔓 Open Source: Full source code available on GitHub. Contribute, modify, or fork as needed
- 💻 Local-First: Run local or cloud-based models and keep control of your data
- Add Content: Add text or images using the command panel (Cmd/Ctrl+K)
- AI Processing: AI generates embeddings that capture semantic meaning
- Visualization: Content appears as interactive points positioned by similarity
- Exploration: Explore relationships, filter by tags, and discover insights
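"Positioned by similarity" comes down to comparing embedding vectors: content whose vectors point in similar directions ends up close together. A toy sketch of that comparison using cosine similarity (the vectors here are made up for illustration and are not Maxims code):

```typescript
// Cosine similarity between two embedding vectors:
// 1 = same direction (very similar), 0 = unrelated, -1 = opposite.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy 3-dimensional "embeddings" (real encoders emit 768-1536 dimensions).
const cat = [0.9, 0.1, 0.0];
const kitten = [0.85, 0.15, 0.05];
const invoice = [0.0, 0.1, 0.95];

console.log(cosineSimilarity(cat, kitten).toFixed(2)); // close to 1
console.log(cosineSimilarity(cat, invoice).toFixed(2)); // close to 0
```

Dimensionality reduction then turns these pairwise relationships into 2D positions you can see on the canvas.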
- Frontend: React 19 with TypeScript, Tailwind CSS, and Headless UI
- Backend: Next.js 15 with App Router and API routes
- Database: Dexie.js (IndexedDB) for local data storage
- State Management: Jotai for atomic state management
- Build Tool: Bun for fast development and building
- Content Parsing: Text and images are parsed through dedicated parsers in `src/data/parsers/`
- Embedding Generation: Content is encoded using the selected AI encoder (see Encoders section)
- Dimensionality Reduction: High-dimensional embeddings are projected to 2D using UMAP, t-SNE, or PCA
- Visualization: Points are rendered on an interactive canvas with D3.js
- Real-time Updates: Position updates and filtering happen in real-time
```typescript
interface Point<T extends PointType> {
  id: string;
  pos: Record<ProjectionType, XY>; // Positions for each projection type
  data: PointData; // Text or image content
  meta: PointMetadata; // Tags, timestamps, etc.
  embedding: number[]; // High-dimensional vector
}
```

- `App.tsx`: Main application layout and routing
- `components/Graph.tsx`: Interactive visualization canvas
- `components/CommandPanel.tsx`: Command palette for adding content
- `components/Config.tsx`: Configuration panel for settings
- `components/Sidebar.tsx`: Tag filtering and navigation
- `components/TagPanel.tsx`: Tag management interface
- `data/encoders/`: AI encoder implementations
- `data/operations.ts`: Core data operations and transformations
- `data/parsers/`: Content parsing and rendering components
Maxims supports different types of content through a parser system:
- Text Points: Store text content with optional notes
- Image Points: Store images (base64 encoded) with optional notes
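As a rough illustration of how those two point types could be modeled, here is a hypothetical sketch using a discriminated union (the field names are assumptions, not the actual definitions in Maxims):

```typescript
// Hypothetical data shapes for the two built-in point types.
// The real PointData definition in the codebase may differ.
interface TextPointData {
  type: "text";
  text: string;
  note?: string; // optional user note
}

interface ImagePointData {
  type: "image";
  base64: string; // image bytes, base64 encoded
  note?: string;
}

type PointData = TextPointData | ImagePointData;

// A discriminated union lets rendering code narrow on `type`.
function describe(data: PointData): string {
  return data.type === "text"
    ? `text (${data.text.length} chars)`
    : `image (${data.base64.length} base64 chars)`;
}

console.log(describe({ type: "text", text: "hello" })); // → "text (5 chars)"
```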
Each point type has a corresponding parser component in src/data/parsers/ that handles:
- Content rendering in different contexts (search results, detailed view, etc.)
- User interaction (editing notes, etc.)
- Proper styling and layout
The parser system is extensible: new point types can be added by creating new parser components and registering them in `src/data/parsers/parsers.ts`.
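A minimal sketch of what such a registration pattern could look like (the names and shapes here are hypothetical, not the actual API in `src/data/parsers/parsers.ts`):

```typescript
// Hypothetical parser registry; the real shapes in
// src/data/parsers/parsers.ts may differ.
type PointType = "text" | "image" | "link";
type RenderContext = "search" | "detail";

interface Parser {
  type: PointType;
  // Render the point's raw data for a given UI context.
  render: (data: string, context: RenderContext) => string;
}

const parsers = new Map<PointType, Parser>();

function registerParser(parser: Parser): void {
  parsers.set(parser.type, parser);
}

// Example: a hypothetical "link" point type that shows only the
// hostname in compact contexts like search results.
registerParser({
  type: "link",
  render: (data, context) =>
    context === "search" ? data.split("/")[2] : data, // crude hostname grab
});

console.log(parsers.get("link")?.render("https://example.com/page", "search")); // → "example.com"
```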
- Bun (recommended) or Node.js 18+
- Modern web browser with IndexedDB support
```bash
# Clone the repository
git clone https://github.com/wyatt/maxims.git
cd maxims

# Install dependencies
bun install

# Start development server
bun run dev
```

The application will be available at http://localhost:3000.
Maxims supports multiple AI encoders. You'll need to configure API keys for cloud-based encoders:
- Open the configuration panel via the command panel (Cmd/Ctrl+K)
- Select your preferred encoder from the dropdown
- Enter your API keys in the corresponding sections
- Re-embed your content to see the changes. This should happen automatically; if it doesn't, use the command panel (Cmd/Ctrl+K) to re-embed everything. You may also need to update the positions of your points after re-embedding.
| Encoder | Type | Dimensions | Pros | Cons |
|---|---|---|---|---|
| VoyageAI Multimodal | Cloud | 1024 | Strong multimodal performance, but images and text tend to separate | Requires API key, usage costs |
| Cohere | Cloud | 1536 | ⭐️ The best multimodal model. High quality text embeddings, good image support | Requires API key, usage costs |
| Local Nomic | Local | 768 | No API key required, privacy-focused | Slower performance, first run requires downloading model, multimodal capability is weaker |
| Demo Cohere | Demo | 1536 | No setup required, good for testing | Limited to 50 encodings |
- Visit VoyageAI Dashboard
- Sign up for an account
- Navigate to API Keys section
- Create a new API key
- Recommended model: `voyage-multimodal-3` (multimodal model)
- Visit Cohere Dashboard
- Sign up for an account
- Navigate to API Keys section
- Create a new API key
- Recommended model: `embed-v4.0` (multimodal model)
Important: Make sure to select a multimodal model that supports both text and images.
- Encoder Settings: Choose your preferred AI encoder
- Image Settings: Configure image processing limits
- Projection Type: Select dimensionality reduction algorithm (UMAP, t-SNE, PCA)
- Algorithm Parameters: Fine-tune projection algorithms
- Export Settings: Configure data export options
Maxims supports three methods for projecting high-dimensional embeddings to 2D:
- UMAP: Fast, preserves both local and global structure, good for large datasets
- t-SNE: Excellent for preserving local structure and clusters, slower than UMAP
- PCA: Linear projection, fastest but may lose important structure

Note: PCA is not implemented yet and will place all points at the same position. If you want to contribute, this is a high-priority feature.
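For anyone interested in picking this up, here is a rough self-contained sketch of a 2D PCA projection using power iteration with deflation. This is not Maxims code, and a real contribution would likely use an SVD routine or an existing linear-algebra library instead:

```typescript
// Project n points of dimension d down to 2D via PCA.
// Power iteration finds the top principal component; deflation
// against it yields the (orthogonal) second component.
function pca2d(points: number[][]): [number, number][] {
  const n = points.length;
  const d = points[0].length;

  // Center the data around the per-dimension mean.
  const mean = new Array(d).fill(0);
  for (const p of points) for (let j = 0; j < d; j++) mean[j] += p[j] / n;
  const X = points.map(p => p.map((v, j) => v - mean[j]));

  // Multiply the (implicit) covariance matrix by a vector: (Xᵀ X v) / n.
  const covMul = (v: number[]): number[] => {
    const out = new Array(d).fill(0);
    for (const row of X) {
      let dot = 0;
      for (let j = 0; j < d; j++) dot += row[j] * v[j];
      for (let j = 0; j < d; j++) out[j] += (dot * row[j]) / n;
    }
    return out;
  };

  const normalize = (v: number[]): number[] => {
    const norm = Math.sqrt(v.reduce((s, x) => s + x * x, 0)) || 1;
    return v.map(x => x / norm);
  };

  // Power iteration for one principal component; `prev` is projected
  // out each step so the second component comes out orthogonal.
  const component = (prev?: number[]): number[] => {
    let v = normalize(Array.from({ length: d }, (_, i) => Math.sin(i + 1)));
    for (let iter = 0; iter < 100; iter++) {
      v = covMul(v);
      if (prev) {
        const proj = v.reduce((s, x, j) => s + x * prev[j], 0);
        v = v.map((x, j) => x - proj * prev[j]);
      }
      v = normalize(v);
    }
    return v;
  };

  const pc1 = component();
  const pc2 = component(pc1);
  return X.map(
    row =>
      [
        row.reduce((s, x, j) => s + x * pc1[j], 0),
        row.reduce((s, x, j) => s + x * pc2[j], 0),
      ] as [number, number]
  );
}
```

The fixed iteration count and the arbitrary (deterministic) starting vector keep the sketch short; production code would check convergence and guard degenerate inputs.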
- Command Panel: Press Cmd/Ctrl+K to open the command panel
- Text: Type or paste text content
- Images: Paste images from clipboard (Ctrl/Cmd+V)
- Bulk Import: Use ZIP files for multiple documents
- Zoom & Pan: Navigate the visualization
- Point Selection: Click points to view details
- Tag Filtering: Use the sidebar to filter by tags
- Search: Use the command panel to search content
- Export: Download your data as ZIP files
- Cmd/Ctrl+K: Open command panel
- T: Open tag panel
- Escape: Close dialogs and panels
```
src/
├── app/              # Next.js App Router
├── components/       # React components
├── data/             # Data layer
│   ├── encoders/     # AI encoder implementations
│   ├── parsers/      # Content parsers
│   └── operations.ts # Core operations
├── utils/            # Utility functions
└── types.ts          # TypeScript definitions
```
To add a new AI encoder, create a new file in src/data/encoders/:
```typescript
import { Config, NewPointStub, Point } from "@/utils/types";
import { EncoderEntry, guardRequirements } from "./encoders";

// Types
type AllowedTypes = (typeof ALLOWED_TYPES)[number];
type RequiredConfigKeys = (typeof REQUIRED_CONFIG_KEYS)[number];

const encode = async (
  points: (Point<AllowedTypes> | NewPointStub<AllowedTypes>)[],
  config: Record<
    RequiredConfigKeys,
    Exclude<Config[RequiredConfigKeys], undefined>
  >,
  isQuery?: boolean
) => {
  // Your encoding logic here
  // Return array of embeddings (number[][])
};

const REQUIRED_CONFIG_KEYS = ["your-api-key"] as const;
const ALLOWED_TYPES = ["text", "image"] as const;

const ENTRY: EncoderEntry<AllowedTypes, "your-encoder"> = {
  slug: "your-encoder" as const,
  name: "Your Encoder Name",
  description: "Description of your encoder",
  VECTOR_DIMENSIONS: 1024, // Your encoder's output dimensions
  REQUIRED_CONFIG_KEYS,
  ALLOWED_TYPES,
  encode: (points, isQuery) =>
    guardRequirements<RequiredConfigKeys, AllowedTypes>(
      points,
      { ...ENTRY, encode },
      isQuery
    ),
};

export default ENTRY;
```

Then register it in `src/data/encoders/encoders.ts`:

```typescript
import yourEncoder from "./your-encoder";

export const encoders = {
  // ... existing encoders
  [yourEncoder.slug]: yourEncoder,
};
```

```bash
# Start development server
bun run dev

# Build for production
bun run build

# Start production server
bun run start

# Run tests
bun test
```

We welcome contributions! Here's how to get started:
- Fork the repository
- Create a feature branch: `git checkout -b feature/your-feature`
- Make your changes: Follow the existing code style and patterns
- Test your changes: Ensure everything works as expected
- Submit a pull request: Include a clear description of your changes
- Code Style: Follow existing TypeScript and React patterns
- Testing: Add tests for new features when possible
- Documentation: Update README and code comments as needed
- Encoders: When adding new encoders, include proper error handling and type safety
- Performance: Consider bundle size and runtime performance
- New Encoders: Add support for additional AI embedding models
- Visualization: Improve the interactive graph features
- Performance: Optimize rendering and data processing
- UI/UX: Enhance the user interface and experience
- Documentation: Improve guides and examples
This project is licensed under the MIT License - see the LICENSE file for details.
Created by @wyatt