Build internal apps using AI.
Securely connect your database, build an app, and deploy in seconds.
🚀 Jump to Quick Start - Get up and running in minutes!
- Securely connect your database (or use a Sample database)
- Build internal apps that can communicate with your database
- AI builds the whole full-stack app and auto-fixes any issues
- Preview your built app live and make edits
- Download the built app code or connect directly to GitHub
- Deploy your built app
- Sales Dashboard
- Generate a live dashboard from your CRM database to track leads, conversions, and rep performance.
- Finance Tracker
- Build a financial report viewer that pulls expense and revenue data from your finance DB.
- Inventory Management
- Build a tool to view, update, and restock inventory directly from your product database.
- Customer Support Tool
- Create an internal app to search, view, and manage customer tickets pulled from your support database.
- Admin Portal
- Create a secure interface for non-technical staff to input and edit structured data in your DB.
Your data stays yours. We've designed liblab.ai with security and privacy as core principles.
When you connect your database to liblab.ai, here's exactly what happens:
- 🔒 Local Connection: Your database credentials are stored locally on your machine and never sent to external servers
- 🔧 App Generation: When you build an app, it runs in a secure web container that displays live dashboards with your data
- 🔗 Secure Tunneling: Since the web container can't directly access your local database, we use ngrok to create a secure tunnel
- 🔐 End-to-End Encryption: Every database request, response, query, and data output is encrypted using AES-256 encryption
- 🔑 Locally Generated: A unique encryption key is generated on your machine during setup
- 🚫 Never Shared: This key stays on your local machine and is never transmitted anywhere
- 🎯 User-Specific: Each user gets their own unique encryption key
- 💾 Secure Storage: Keys are stored securely in your local environment
```
Your Database  →  Encrypted Request  →  Secure Tunnel  →  Web Container
      ↑                                                         ↓
Local Machine  ←  Encrypted Response  ←  Secure Tunnel  ←  Preview Dashboard
```
What this means for you:
- ✅ Database credentials never leave your machine
- ✅ All data transmission is encrypted with your unique key
- ✅ Even if network traffic is intercepted, data remains unreadable
- ✅ No data is stored on external servers
- ✅ You maintain complete control over your data access
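A minimal sketch of the round trip those bullets describe, using Node's built-in `crypto` module. The function names and payload shape here are illustrative assumptions, not liblab.ai's actual implementation; the sketch only demonstrates AES-256 encryption (GCM mode) with a locally generated key:

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from 'crypto';

// Assumption: a unique 256-bit key generated locally during setup, one per user
const localKey = randomBytes(32);

interface EncryptedPayload {
  iv: string;   // fresh nonce for each request
  tag: string;  // GCM authentication tag (detects tampering)
  data: string; // ciphertext
}

function encryptPayload(plaintext: string, key: Buffer): EncryptedPayload {
  const iv = randomBytes(12);
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const data = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  return {
    iv: iv.toString('base64'),
    tag: cipher.getAuthTag().toString('base64'),
    data: data.toString('base64'),
  };
}

function decryptPayload(payload: EncryptedPayload, key: Buffer): string {
  const decipher = createDecipheriv('aes-256-gcm', key, Buffer.from(payload.iv, 'base64'));
  decipher.setAuthTag(Buffer.from(payload.tag, 'base64'));
  return Buffer.concat([
    decipher.update(Buffer.from(payload.data, 'base64')),
    decipher.final(),
  ]).toString('utf8');
}
```

Without the key, an intercepted payload is opaque base64; with it, `decryptPayload(encryptPayload(q, localKey), localKey)` returns `q` unchanged.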
Before starting, ensure you have all the following installed and configured:
Node.js (18 or higher) - Required for running the application
Best for: Simple setup, single Node.js version, macOS users
# Install Homebrew if you don't have it
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Install Node.js
brew install node
Best for: Developers who work on multiple projects that require different Node.js versions
# Install Homebrew if you don't have it
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Install NVM
brew install nvm
# To make the nvm command available, restart your terminal or run:
source ~/.zshrc # or source ~/.bashrc
# Install latest stable Node.js
nvm install --lts
node --version # Should show v18.x.x or higher
npm --version # Should show version number
pnpm - Package manager (faster than npm)
# Install pnpm globally
npm install -g pnpm
# Verify installation
pnpm --version
ngrok - Free account for local tunneling (one-time setup)
- Go to ngrok.com/signup
- Create a free account
- Follow ngrok's setup instructions
- Install ngrok CLI tool with:
brew install ngrok
- Set your ngrok authtoken with:
ngrok config add-authtoken YOUR_AUTHTOKEN_HERE
- Verify the installation with:
ngrok version
Anthropic API Key - Required for AI model access
- Go to console.anthropic.com/signup
- Create an account
- Verify your email
- Go to console.anthropic.com/settings/keys
- Click "Create Key"
- Give it a name (e.g., "liblab-ai")
- Copy the API key (starts with `sk-ant-`)

You'll add this to your `.env` file during setup, but keep it handy:

ANTHROPIC_API_KEY=sk-ant-your-api-key-here
💡 Pro Tip: The setup script will prompt you for this API key, so you don't need to manually edit files.
Run the setup:
pnpm run setup
If you lack permissions to run `scripts/setup.sh`, fix it with:
chmod +x scripts/setup.sh
That's it! 🎉
The script automatically handles:

- Setting up the ngrok tunnel (macOS/Linux)
- Configuring the `.env` file
- Installing all dependencies
- Setting up the SQLite database
Start the development server with:
pnpm run dev
💡 Recommended Providers
For optimal performance with liblab.ai, we recommend:
- Anthropic Claude 3.5 Sonnet (Default) - Best overall performance with excellent code understanding and large context handling
- Google Gemini Pro - Strong alternative with robust code generation capabilities
These providers consistently deliver the best results for our use cases, handling large system prompts, code modifications, and complex app generation tasks.
By default, liblab.ai uses Anthropic's Claude (claude-3-5-sonnet-latest). Configure your preferred provider:
DEFAULT_LLM_PROVIDER=<provider_name> # Default: 'Anthropic'
DEFAULT_LLM_MODEL=<model_name> # Default: 'claude-3-5-sonnet-latest'
| Provider | API Key Variable | Get API Key |
| --- | --- | --- |
| Anthropic | `ANTHROPIC_API_KEY` | Console |
| Google | `GOOGLE_GENERATIVE_AI_API_KEY` | Console |
| OpenAI | `OPENAI_API_KEY` | Console |
| Groq | `GROQ_API_KEY` | Console |
| HuggingFace | `HuggingFace_API_KEY` | Console |
| Mistral | `MISTRAL_API_KEY` | Console |
| Cohere | `COHERE_API_KEY` | Console |
| xAI | `XAI_API_KEY` | Docs |
| Perplexity | `PERPLEXITY_API_KEY` | Settings |
| DeepSeek | `DEEPSEEK_API_KEY` | Contact DeepSeek |
| OpenRouter | `OPEN_ROUTER_API_KEY` | Settings |
| Together | `TOGETHER_API_KEY` | Console |
| Amazon Bedrock | `AWS_BEDROCK_CONFIG` | AWS Console |
| GitHub | `GITHUB_API_KEY` | Settings |
Local Models
# Ollama - Local open-source models
OLLAMA_API_BASE_URL=
# LMStudio - Local model runner with GUI
LMSTUDIO_API_BASE_URL=
# OpenAI-compatible services
OPENAI_LIKE_API_BASE_URL=
OPENAI_LIKE_API_KEY=
liblab.ai supports multiple starter templates for generating apps. You can control which starter is used by setting the `STARTER` environment variable in your `.env` file or at runtime.

Add the following to your `.env` file (or set it as an environment variable):

```
# Name of the starter project
STARTER=
```

- Supported values: `next`, `remix`
- If not set, the default is `next`.
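The fallback behavior above can be sketched in a few lines; `resolveStarter` is a hypothetical helper shown only for illustration, not the repo's actual code:

```typescript
type StarterId = 'next' | 'remix';

// Resolve the STARTER environment variable, falling back to 'next' when it is
// unset or not one of the supported values
function resolveStarter(env: Record<string, string | undefined>): StarterId {
  return env.STARTER === 'remix' ? 'remix' : 'next';
}

// resolveStarter({}) yields 'next'; resolveStarter({ STARTER: 'remix' }) yields 'remix'
```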
Each starter lives in its own directory under `starters/`. For example, the default Next.js starter is in `starters/next-starter/`.

Each starter must include a `.liblab` directory with the following files:

- `prompt`: The main system prompt and instructions for code generation
- `technologies`: List of technologies used by the starter (one per line)
- `examples`: Example user prompts and responses for this starter
- `ignore`: Patterns for files/folders to exclude from importing into the builder

These files are dynamically imported and used in the `getAppsPrompt` logic to generate apps and instructions.
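One way those four files could be read and assembled is sketched below; `loadStarterConfig` is a hypothetical helper that only makes the layout concrete, and the real `getAppsPrompt` logic may differ:

```typescript
import { readFileSync } from 'fs';
import { join } from 'path';

interface StarterConfig {
  prompt: string;         // main system prompt and instructions
  technologies: string[]; // one technology per line
  examples: string;       // example user prompts and responses
  ignore: string[];       // patterns excluded from importing into the builder
}

function loadStarterConfig(starterDir: string): StarterConfig {
  const read = (name: string) => readFileSync(join(starterDir, '.liblab', name), 'utf8');
  return {
    prompt: read('prompt'),
    technologies: read('technologies').split('\n').filter(Boolean),
    examples: read('examples'),
    ignore: read('ignore').split('\n').filter(Boolean),
  };
}
```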
- Create a new directory under `starters/` named `<your-starter>-starter` (e.g., `my-starter-starter`).
- Add a `.liblab` directory inside your starter with the files that will improve the quality of generated code: `prompt`, `technologies`, `examples`, and `ignore`.
- Update the plugin types in `app/lib/plugins/types.ts` to include your new starter in `StarterPluginId` and `PluginAccessMap`.
- Set `STARTER=<your-starter>` in your `.env` file to use your new starter.
Tip: See the default Next.js starter prompt for a comprehensive example of how to structure your starter's instructions.
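The plugin-type update described above might look roughly like this. Treat it purely as a sketch: the actual `StarterPluginId` and `PluginAccessMap` shapes in `app/lib/plugins/types.ts` may differ, and `my-starter` is a placeholder name:

```typescript
// Purely illustrative shapes -- check app/lib/plugins/types.ts for the real ones
type StarterPluginId = 'next' | 'remix' | 'my-starter'; // add your starter's id here

type PluginAccessMap = Record<StarterPluginId, { enabled: boolean }>;

const access: PluginAccessMap = {
  next: { enabled: true },
  remix: { enabled: true },
  'my-starter': { enabled: true }, // register your starter in the map as well
};
```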
The `@shared` directory contains reusable code that can be used across different parts of the application. When adding shared code:
- Place your code in the appropriate subdirectory under `shared/src/`
- Keep shared code independent of the main project's dependencies
- Use TypeScript for type safety and better maintainability
- Include proper documentation and type definitions
- Write unit tests for shared functionality
Example structure:
```
shared/
  src/
    types/        # TypeScript type definitions
    utils/        # Utility functions
    constants/    # Shared constants
    data-access/  # Database accessors
```
Data accessors provide a standardized way to interact with different database types. To add a new data accessor:
- Create a new file in `shared/src/data-access/accessors/` (e.g., `mysql.ts`)
- Implement the `BaseAccessor` interface by creating a class that implements it:

```typescript
import type { BaseAccessor } from '../baseAccessor';
import type { MySqlColumn, MySqlTable } from '../../types';
import type { Connection } from 'mysql2/promise';
import mysql from 'mysql2/promise';

// Configure type casting for numeric values
const typesToParse = ['INT', 'BIGINT', 'DECIMAL', 'NUMERIC', 'FLOAT', 'DOUBLE', 'NEWDECIMAL'];

export class MySQLAccessor implements BaseAccessor {
  readonly label = 'MySQL';
  private _connection: Connection | null = null;

  static isAccessor(databaseUrl: string): boolean {
    return databaseUrl.startsWith('mysql://');
  }

  async testConnection(databaseUrl: string): Promise<boolean> {
    try {
      const connection = await mysql.createConnection(databaseUrl);
      await connection.query('SELECT 1');
      await connection.end();
      return true;
    } catch (error: any) {
      return false;
    }
  }

  async executeQuery(query: string, params?: string[]): Promise<any[]> {
    if (!this._connection) {
      throw new Error('Database connection not initialized. Please call initialize() first.');
    }

    try {
      const [rows] = await this._connection.query(query, params);
      return rows as any[];
    } catch (error) {
      console.error('Error executing query:', error);
      throw new Error((error as Error)?.message);
    }
  }

  guardAgainstMaliciousQuery(query: string): void {
    if (!query) {
      throw new Error('No SQL query provided. Please provide a valid SQL query to execute.');
    }

    const normalizedQuery = query.trim().toUpperCase();

    if (!normalizedQuery.startsWith('SELECT') && !normalizedQuery.startsWith('WITH')) {
      throw new Error('SQL query must start with SELECT or WITH');
    }

    const forbiddenKeywords = [
      'INSERT ',
      'UPDATE ',
      'DELETE ',
      'DROP ',
      'TRUNCATE ',
      'ALTER ',
      'CREATE ',
      'GRANT ',
      'REVOKE ',
    ];

    if (forbiddenKeywords.some((keyword) => normalizedQuery.includes(keyword))) {
      throw new Error('SQL query contains forbidden keywords');
    }
  }

  async getSchema(): Promise<MySqlTable[]> {
    if (!this._connection) {
      throw new Error('Database connection not initialized. Please call initialize() first.');
    }

    // Query to get all tables with their comments
    const tablesQuery = `
      SELECT TABLE_NAME, TABLE_COMMENT
      FROM INFORMATION_SCHEMA.TABLES
      WHERE TABLE_SCHEMA = DATABASE() AND TABLE_TYPE = 'BASE TABLE'
      ORDER BY TABLE_NAME;
    `;

    // Query to get all columns with their details
    const columnsQuery = `
      SELECT
        c.TABLE_NAME, c.COLUMN_NAME, c.DATA_TYPE, c.COLUMN_TYPE, c.IS_NULLABLE,
        c.COLUMN_DEFAULT, c.COLUMN_COMMENT, c.COLUMN_KEY, c.EXTRA
      FROM INFORMATION_SCHEMA.COLUMNS c
      WHERE c.TABLE_SCHEMA = DATABASE()
      ORDER BY c.TABLE_NAME, c.ORDINAL_POSITION;
    `;

    try {
      // Execute both queries
      const [tablesResult] = await this._connection.execute(tablesQuery);
      const [columnsResult] = await this._connection.execute(columnsQuery);

      const tables = tablesResult as any[];
      const columns = columnsResult as any[];

      // Group columns by table
      const columnsByTable = new Map<string, any[]>();
      columns.forEach((column) => {
        if (!columnsByTable.has(column.TABLE_NAME)) {
          columnsByTable.set(column.TABLE_NAME, []);
        }
        columnsByTable.get(column.TABLE_NAME)!.push(column);
      });

      // Build the result
      const result: MySqlTable[] = tables.map((table) => ({
        tableName: table.TABLE_NAME,
        tableComment: table.TABLE_COMMENT || '',
        columns: (columnsByTable.get(table.TABLE_NAME) || []).map((col) => {
          const column: MySqlColumn = {
            name: col.COLUMN_NAME,
            type: col.DATA_TYPE,
            fullType: col.COLUMN_TYPE,
            nullable: col.IS_NULLABLE,
            defaultValue: col.COLUMN_DEFAULT,
            comment: col.COLUMN_COMMENT || '',
            isPrimary: col.COLUMN_KEY === 'PRI',
            extra: col.EXTRA || '',
          };

          // Extract enum values if the column type is ENUM
          if (col.DATA_TYPE === 'enum') {
            const enumMatch = col.COLUMN_TYPE.match(/enum\((.+)\)/i);
            if (enumMatch) {
              const enumString = enumMatch[1];
              const enumValues = enumString.split(',').map((val: string) => val.trim().replace(/^'|'$/g, ''));
              column.enumValues = enumValues;
            }
          }

          return column;
        }),
      }));

      return result;
    } catch (error) {
      console.error('Error fetching database schema:', error);
      throw error;
    }
  }

  async initialize(databaseUrl: string): Promise<void> {
    if (this._connection) {
      await this.close();
    }

    this._connection = await mysql.createConnection({
      uri: databaseUrl,
      typeCast: (field, next) => {
        if (typesToParse.includes(field.type)) {
          const value = field.string();
          return value !== null ? parseFloat(value) : null;
        }
        return next();
      },
    });
  }

  async close(): Promise<void> {
    if (this._connection) {
      await this._connection.end();
      this._connection = null;
    }
  }
}
```

- Register your accessor in `dataAccessor.ts`:

```typescript
import type { BaseAccessor, BaseAccessorConstructor } from './baseAccessor';
import { PostgresAccessor } from './accessors/postgres';
import { MySQLAccessor } from './accessors/mysql';

export class DataAccessor {
  static getAccessor(databaseUrl: string): BaseAccessor {
    const allAccessors: BaseAccessorConstructor[] = [PostgresAccessor, MySQLAccessor];
    const AccessorClass = allAccessors.find((Acc) => Acc.isAccessor(databaseUrl));

    if (!AccessorClass) {
      throw new Error(`No accessor found for database URL: ${databaseUrl}`);
    }

    return new AccessorClass();
  }
}
```
Each accessor should:
- Handle database-specific connection logic
- Implement security measures (e.g., query validation)
- Provide schema information
- Support parameterized queries
- Handle errors appropriately
- Implement connection testing
- Properly manage database connections (initialize/close)
- Support type casting for numeric values
- Handle enum types and their values
- Include table and column comments in schema information
The accessor interface ensures consistent behavior across different database types while allowing for database-specific optimizations and features.
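For reference, the `BaseAccessor` contract implied by the MySQL example looks roughly like the sketch below. This is an inference for illustration (the repo's actual `baseAccessor.ts` may differ), paired with a toy in-memory accessor that shows the contract in use:

```typescript
interface BaseAccessor {
  readonly label: string;
  testConnection(databaseUrl: string): Promise<boolean>;
  executeQuery(query: string, params?: string[]): Promise<any[]>;
  guardAgainstMaliciousQuery(query: string): void;
  getSchema(): Promise<unknown[]>;
  initialize(databaseUrl: string): Promise<void>;
  close(): Promise<void>;
}

// Toy accessor: answers every query from memory, but enforces the same
// read-only guard as the real accessors
class DemoAccessor implements BaseAccessor {
  readonly label = 'Demo';

  static isAccessor(databaseUrl: string): boolean {
    return databaseUrl.startsWith('demo://');
  }

  async testConnection(): Promise<boolean> {
    return true;
  }

  async executeQuery(query: string): Promise<any[]> {
    this.guardAgainstMaliciousQuery(query);
    return [{ ok: 1 }];
  }

  guardAgainstMaliciousQuery(query: string): void {
    const normalized = query.trim().toUpperCase();
    if (!normalized.startsWith('SELECT') && !normalized.startsWith('WITH')) {
      throw new Error('SQL query must start with SELECT or WITH');
    }
  }

  async getSchema(): Promise<unknown[]> {
    return [];
  }

  async initialize(): Promise<void> {}

  async close(): Promise<void> {}
}
```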
You can deploy your generated apps directly to Netlify. To enable this:
- Create a Netlify account
- Generate an auth token from User Settings > Applications > New access token
- Add the token to your `.env` file: `NETLIFY_AUTH_TOKEN=your-token-here`
Once configured, you can deploy any app you generate through liblab.ai to Netlify using the deploy option in the UI.
- Contributing Guidelines - How to contribute to the project
- Governance Model - Our decision-making process and community structure
- Code of Conduct - Community standards and expectations
We welcome contributions! Here's how to get started:
- 📖 Read our Contributing Guidelines - Complete setup and development guide
- 🔍 Browse Issues - Find something to work on
- 🏛️ Check our Governance Model - Understand how we work
New to the project? Look for `good first issue` labels.
liblab.ai follows a Modified Open Governance model that balances community input with efficient decision-making.
Read our complete Governance Model for details on decision-making processes, roles, and how to become a Core Maintainer.
- 💬 GitHub Issues - Report bugs, request features, or discuss project-related topics
- 📧 General Inquiries - Contact us directly for questions or concerns
- 🗺️ Roadmap - View upcoming features
- 📌 Version: 0.0.1 (Early Development)
- 🚧 Status: Active development with regular releases
MIT License - see the LICENSE file for details.
Copyright (c) 2025 Liblab, Inc. and liblab.ai contributors
Ready to contribute? Check out our Contributing Guidelines and join our community! 🚀