
Fractera

Fractera AI Workspace: AI-powered business development built for product managers and entrepreneurs who want to be as effective as a senior developer — without depending on one.

License: MIT  ·  Self-Hosted AI Models  ·  PRs Welcome

Anthropic: Claude Code  ·  OpenAI: Codex  ·  Google: Gemini CLI  ·  Alibaba: Qwen Code  ·  Moonshot: Kimi Code  ·  OpenRouter: 300+ models

Fractera AI Workspace

Star on GitHub    Fork on GitHub


Why Fractera?

Modern AI coding platforms produce impressive, often professional-looking results. But you still need to understand code deeply to build a genuinely effective online business. If you don't know in detail how authentication connects to a database, how a database connects to S3 storage, if Redis is just a word to you and "headless hosting" means nothing, then you still need a professional developer. You can't simply tell an AI "take this and do everything right"; that's not how it works. Many people find out the hard way when a bill that started at zero unexpectedly grows into thousands of dollars.

Fractera is an architecture that manages the AI coding process and enforces correct use of server infrastructure — for maximum performance, minimum cost, and rigorous search-engine optimization. It's exactly what a product manager or entrepreneur needs to produce first-class code on par with a senior developer, or better.


Core Features

Fractera ships with a built-in database and local S3-compatible object storage. Both are self-hosted services that require no cloud account and no monthly fee.


Parallel Interactive Terminals. Run multiple AI sessions simultaneously. Switch between platforms without losing context.

Built-in Authentication. Email/password auth, guest mode, and role-based access control. The first registered user becomes the Architect (Admin).

Absolute Data Portability. Export and import your entire database and file storage in a single operation.

Integrated Database and Media Storage. Built-in SQLite browser and local S3-compatible media library — store images, videos, and documents without external services.

Media Library. Upload, crop, preview, and manage images and videos. Generate a full favicon and PWA icon set from a single source image.

Seamless Auto-Updates. Pull the latest open-source version from upstream without SSH access to the server.

LightRAG — Unified Project Memory (v1.3). Shared context and memory across all AI agents and sessions.

Open Claw — Business Orchestration (v1.4). A single control point for your entire business — manage projects, agents, and workflows from one place.

Skills Marketplace (v1.5). Extend your workspace with community-built AI skills at fractera.ai.





App Walkthrough

Fractera's architecture includes built-in authentication connected to its own database, with role-based access. Registration assigns roles automatically: the first registered user becomes the Architect with full access to modify the project; additional users receive extended viewer permissions. The project is built and served in production mode.

Short video demonstrations:

Platform Activation — Launching Claude Code, Gemini, Codex, Qwen and Kimi in one terminal

Built-in Media Storage — Upload, crop, rename and preview images without leaving the workspace

Database from S3 in One Prompt — Claude Code reads object storage, extracts structured data from images, and creates a populated database table — no SQL written

Employees Page from One Prompt — Full CRUD page with image upload, crop, and object storage wired together by the AI from a single plain-language instruction


Tech Stack

Frontend: Next.js 16.2, React 19, Tailwind v4, shadcn/ui
Backend: Next.js API routes, Node.js bridge server (WebSocket), Express media service
Database: SQLite via better-sqlite3 — no external database required
Authentication: NextAuth v5 — email/password, guest mode, role-based access (architect / user / guest)
Object Storage: Local filesystem (storage/) — no cloud storage subscriptions
Media Service: Standalone HTTP service on port 3300 — upload, crop, favicon generation, PWA icons
Architecture: Parallel Slot Architecture with built-in error isolation
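As a rough sketch of the role model in the stack above (the helper functions here are illustrative, not Fractera's actual API — only the role names architect / user / guest come from the README):

```javascript
// Illustrative sketch of the architect / user / guest role model.
// The role names match the README; the helpers are hypothetical.
const ROLES = ["architect", "user", "guest"];

// Reject role strings that aren't part of the known set.
function isKnownRole(role) {
  return ROLES.includes(role);
}

// Only the Architect may modify the project; everyone else is a viewer.
function canEdit(userRoles) {
  return userRoles.includes("architect");
}
```

Roles are stored per user as a list, so a single account can hold more than one role; `canEdit` simply checks for the Architect role anywhere in that list.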




Getting Started

You need a terminal. On macOS it is called Terminal; find it with Spotlight (Cmd+Space, then type "Terminal"). On Windows, press Win+R, type cmd, and press Enter. On Linux, you already know.

Step 1. Install Claude Code.
Copy this line, paste it into the terminal, press Enter:

# Mac / Linux
curl -fsSL https://claude.ai/install.sh | bash

# Windows (PowerShell)
irm https://claude.ai/install.ps1 | iex

Already have Claude Code? Skip this step.

Step 2. Sign in to Claude.
Run this command and follow the instructions on screen:

claude auth

Step 3. Install and launch Fractera.
Copy this line, paste it into the terminal, press Enter:

curl -fsSL https://raw.githubusercontent.com/Fractera/ai-workspace/main/install.sh | bash

That's it. The script checks your system, downloads Fractera, installs all dependencies, and opens it in your browser automatically.

The first account you register becomes the Architect (Administrator). You will see a coding workspace with all AI platforms ready to use.


Mobile Application — Product Manager Mode

Coming in v1.6 — native apps for iOS and Android are in development.

Fractera is built for two audiences: developers who want full control over the code, and product managers who need to ship products without touching infrastructure.

The App Store release is designed specifically for product managers. Download the app, open it, and a dedicated server is provisioned automatically — with all development dependencies, AI coding platforms, LightRAG global memory, and Open Claw agent orchestration pre-installed and ready. No configuration. No terminal. No deployment pipeline.

Describe the feature you need. The AI builds it. The moment you confirm the changes, they are live in production — instantly, with no build step, no dev mode, no staging environment. From idea to published product in seconds.

Fractera — App Store and Google Play




Free Skills Marketplace

Earn up to 8 free skills from the Fractera marketplace. Send proof to admin@fractera.ai:

Fork this repository (+1 skill)
Star this repository (+1 skill)
Leave a review on fractera.ai (+1 skill)
Post on X (Twitter) with a link (+1 skill)
Write an article on Medium (+2 skills)
Write on dev.to or any dev blog (+2 skills)


Roadmap

v1.2 — Media Library, Database Browser, PWA icons, full agent documentation. (Current)
v1.3 — LightRAG: unified memory across all agents and sessions.
v1.4 — Open Claw: single control point for your entire business — projects, agents, workflows.
v1.5 — Skills Marketplace: community-built AI skills at fractera.ai.
v1.6 — Native mobile application for App Store and Google Play.

All updates are free for self-hosted users. For enterprise features including multilingual routing, see Fractera Pro.


Custom Development and Support

Fractera AI Workspace is open-source. The team is also available for custom engagements — bespoke AI applications, multilingual routing, parallel slot architecture, or proprietary builds on top of Fractera.

Email: admin@fractera.ai
CEO: Julia Kovalchuk
CTO: Roma Bolshiyanov (Armstrong)


FAQ

Can Fractera run on a low-end mobile phone?

Yes. The phone only renders terminal output — all computation runs on your server. Any browser-capable device works as a client.

Can I connect a cloud database, S3, or other external services?

There are no restrictions. Connect external services through environment variables the same way you normally would. Variables can be set directly in production via Settings → Configure inside the app without server access. The built-in SQLite database and local file storage are defaults that protect against unexpected cloud costs — the choice remains yours.
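A hedged sketch of how such a fallback might look in application code (the variable names below are invented for illustration; Fractera's actual configuration keys may differ):

```javascript
// Hypothetical illustration: prefer an external S3 service when its
// environment variables are set, otherwise fall back to the built-in
// local storage/ directory. Variable names are invented, not Fractera's.
function resolveStorage(env) {
  if (env.S3_BUCKET && env.S3_ACCESS_KEY_ID) {
    return { kind: "s3", bucket: env.S3_BUCKET };
  }
  // Built-in default: local filesystem storage, no cloud account needed.
  return { kind: "local", path: "storage/" };
}
```

With nothing configured, the built-in local storage is used; setting the external variables switches the backend without code changes.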

What is the main use case?

Production coding in the browser, from any device, including a phone. A product manager working outside the office — commuting, traveling, on vacation, even mid-workout on a treadmill — opens a new tab, builds a feature, sends it for review, and it goes live: no laptop, no local setup, no deployment pipeline. The same applies on a desktop or laptop: open a browser, code, ship.

Is there a local mode?

Yes. Local mode is available and is the primary workflow for developers working from a desktop or laptop. The development experience is identical to any other project — run the server locally, edit code, iterate. The cloud-first, no-setup scenario is aimed at product managers who need to ship without touching infrastructure.

Why mobile?

Not because you build large applications on a phone — because you can do something meaningful on one. The constraint proves the platform works anywhere. A tool that runs on a phone runs everywhere.

How does onboarding work?

You subscribe via the App Store → a server is provisioned automatically → a subdomain is assigned → you open the URL. The first registered user becomes the Architect. Pick your AI platform — Claude Code, Codex, or any other — and start building.

Why do the same AI platforms produce better results inside Fractera?

Fractera uses the same platforms and models — that's true. What's different is the boilerplate the AI works on top of. It ships with a built-in database, S3-compatible object storage, performance optimization tools, error isolation architecture, and a structure optimized for large codebases and Skills Marketplace integration. The AI inherits all of this from the start, which means it makes far better decisions out of the box.

What specifically does Fractera handle so you don't have to?

You don't need to specify database schema design, API route structure, minimum request counts, or static generation strategy — Fractera's skeleton enforces correct patterns for all of these automatically. The AI orchestrates within a well-defined architecture rather than inventing one from scratch, which is where most unexpected complexity and cost comes from.

Who is the target audience?

Fractera was built for product managers and aims to become the go-to tool for entrepreneurs. Developers use it to significantly accelerate their workflow — especially on smaller projects.




Changelog


v1.2.2 — 2026-04-27

  • Added reusable upload service (services/upload/) with built-in image crop support
  • Added CLAUDE.md instructions for AI agents to use the upload service directly — enables any AI model to build object-storage features from a plain-language prompt with no custom upload code

v1.2.1 — 2026-04-27 13:00

  • Crop format selector (16:9 / 1:1 / 9:16) moved inside the cropper — works correctly on mode switch
  • CLAUDE.md updated with full database and media storage API instructions for AI agents

v1.2.0 — 2026-04-26 23:59

Database Browser — inline SQLite table viewer and editor built into the workspace.

  • New "Database" button in the Settings menu opens a full-panel database browser
  • Left sidebar (250px, sticky) lists all application tables: users, sessions, accounts, verification_tokens
  • Right area shows all columns and rows with horizontal scroll support
  • Hover any row to reveal: pencil icon per cell, delete button at far right
  • Pencil opens an edit modal with context-aware input type:
    • roles column — multi-checkbox selector (architect / user / guest), stored as JSON array
    • is_active — single select (1 / 0)
    • provider — single select (credentials / google / github / guest)
    • locale — single select (en / ru / es / fr / de / zh)
    • All other columns — free textarea
  • Delete row with confirmation overlay (one row at a time)
  • All edits and deletes show toast feedback
  • API routes secured: table names validated against sqlite_master, column names validated against PRAGMA table_info — no SQL injection possible
  • Media database (services/media/data/media.db) is intentionally separate and not shown here
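The allowlist validation described above can be sketched roughly like this (a simplified illustration: the real code reads the table list from sqlite_master and the column list from PRAGMA table_info, rather than taking them as arguments):

```javascript
// Simplified sketch of identifier allowlisting: a table or column name
// is accepted only if it already exists in the schema, so arbitrary
// user strings can never reach the SQL text.
function assertKnownIdentifier(name, knownNames) {
  if (!knownNames.includes(name)) {
    throw new Error(`Unknown identifier: ${name}`);
  }
  return name;
}

// Example: building a query only from validated identifiers.
function buildSelect(table, knownTables) {
  return `SELECT * FROM ${assertKnownIdentifier(table, knownTables)}`;
}
```

Because identifiers cannot be bound as SQL parameters, allowlisting against the schema is the standard way to keep them out of injection range.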

v1.1.0 — 2026-04-26 23:00

Media Library — standalone media service and full asset management system.

  • New standalone HTTP service services/media/ running on port 3300, isolated from the main Next.js app
  • SQLite database for media metadata: title, description, original filename, MIME type, dimensions, duration, storage key
  • Image upload with built-in canvas-based cropper — three aspect ratio modes: 16:9 horizontal, 1:1 square, 9:16 vertical
  • Video upload with direct storage (no crop)
  • Media library panel in workspace Settings menu — list view with search, preview, copy URL, rename, delete
  • Search across title, description, original filename and file URL with relevance-based sorting
  • Inline preview popup for images and videos directly in the panel
  • Per-file edit panel (pencil icon) for setting custom title and description independently from original filename
  • Delete confirmation flow to prevent accidental removal
  • Copy URL button with clipboard toast feedback
  • Favicon and PWA icon generation from a single square source image: favicon.ico (16+32px combined), favicon-16/32.png, apple-touch-icon.png (180×180), icon-192/512.png, og-image.jpg (1200×630), manifest.json
  • Project is PWA-ready at the icon level — manifest and all required icon sizes generated automatically
  • CLAUDE.md and AGENTS.md updated with full media service API documentation for AI agents
  • All three services (app, bridge, media) start together via single npm run dev from repo root
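The relevance-based search mentioned above could, in spirit, look like the following (the field weights are invented for illustration and are not Fractera's actual values):

```javascript
// Hedged sketch of relevance-ranked media search: a title match counts
// more than a description match, which counts more than a filename
// match. Weights are illustrative only.
function searchMedia(items, query) {
  const q = query.toLowerCase();
  const score = (item) =>
    ((item.title || "").toLowerCase().includes(q) ? 4 : 0) +
    ((item.description || "").toLowerCase().includes(q) ? 2 : 0) +
    ((item.filename || "").toLowerCase().includes(q) ? 1 : 0);
  return items
    .map((item) => ({ item, s: score(item) }))
    .filter(({ s }) => s > 0)      // drop non-matching entries
    .sort((a, b) => b.s - a.s)     // highest relevance first
    .map(({ item }) => item);
}
```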

v1.0.0 — 2026-04-26 20:00

Initial public release of Fractera AI Workspace — a self-hosted, open-source platform for running multiple AI coding agents in a single unified workspace.

  • Multi-platform terminal workspace: Claude Code, Codex, Gemini CLI, Qwen Code, Kimi Code, Open Code (OpenRouter)
  • Parallel interactive terminal sessions — switch between agents without losing context
  • Single bridge server process manages all platform WebSocket connections on ports 3200–3206
  • Built-in authentication: email/password registration, guest mode, architect role (first registered user)
  • Role-based access control — architect gets coding workspace, users get standard access
  • Data export/import — full backup and restore of SQLite database and storage files as a single zip
  • Safe import merge — incoming data is merged with existing records, nothing is overwritten
  • Auto-update from upstream GitHub repository via UI button, no SSH required
  • Settings panel with environment variable editor — configure API keys, title, theme without touching files
  • Info panel with live README rendering from GitHub or local file
  • Proxy-based route protection (Next.js 16 native, no middleware.ts)
  • Dark/light/system theme switcher with persistent preference
  • Full shadcn/ui component library integrated
  • Toast notifications wired globally via root layout
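The "safe import merge" listed above can be sketched as follows (the record shape and id field are assumptions for illustration, not the actual import format):

```javascript
// Illustrative sketch of a non-destructive merge: incoming records are
// added only when no existing record shares the same id, so an import
// never overwrites current data.
function safeMerge(existing, incoming) {
  const byId = new Map(existing.map((row) => [row.id, row]));
  for (const row of incoming) {
    if (!byId.has(row.id)) byId.set(row.id, row);
  }
  return [...byId.values()];
}
```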

Built for developers who value independence.

About

Open-source, self-hosted AI coding workspace. Run Claude Code, Gemini CLI, Codex, Qwen, Kimi, and 300+ OpenRouter models in parallel interactive terminals. Keep your code private on your own server ($2 VPS) with built-in auth and SQLite. Zero cloud lock-in, no managed DB subscriptions. Your code, your server, your AI. Use your AI subscriptions.
