
Tab Agent 🚀

Your AI Assistant. Your Data. Your Control.

Made with ❤️ · Privacy First · Multi-Platform


🌟 What is Tab Agent?

Tab Agent is a privacy-first AI browser assistant that brings the power of artificial intelligence directly to your browser—without sending your data to the cloud.

Unlike traditional AI assistants that track everything you do, Tab Agent runs entirely on your machine, using your hardware, giving you complete control over your data and AI interactions.

Why Tab Agent?

The Problem:

  • 🕵️ Big Tech tracks every search, every click, every interaction
  • 🔒 Your private data is used to train AI models without consent
  • 💰 Monthly subscriptions for AI features that should be yours
  • 🌐 Dependency on cloud services that can disappear or change pricing

Our Solution:

  • 100% Local Processing - Your data never leaves your machine
  • Zero Tracking - No analytics, no telemetry, no surveillance
  • Multiple AI Options - Choose how you want to run AI
  • No Subscriptions - One-time setup, lifetime use
  • Open Distribution - Transparent, inspectable, trustworthy

🎯 Features

🤖 Three Ways to Run Tab Agent

Tab Agent gives you flexibility based on your privacy and performance needs:

1. Browser-Only Mode 🌐 (Zero Install)

Run Tab Agent entirely in your browser:

  • No installation required - Works immediately after adding the extension
  • WebGPU/WASM models - Phi-3.5, SmolLM2, Qwen3 run in browser
  • IndexedDB storage - Chat history, model cache, all local in browser
  • Chunked streaming - Efficient data handling
  • Perfect for trying Tab Agent - See how powerful web-only AI can be!
  • Complete privacy - Everything in browser, nothing leaves your machine

What You Get:

  • Chat with AI using browser models
  • Summarize web pages
  • Basic automation tasks
  • All stored locally in IndexedDB (see the storage sketch after this list)
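
To make the browser-only storage concrete, here is a minimal sketch using the standard IndexedDB API. The database name, store name, and message shape are illustrative examples, not Tab Agent's actual schema.

// Open (or create) a local database; everything stays inside the browser profile.
// 'tab-agent-demo' and 'messages' are example names, not the real schema.
function openChatDb(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const request = indexedDB.open('tab-agent-demo', 1);
    request.onupgradeneeded = () =>
      request.result.createObjectStore('messages', { autoIncrement: true });
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

// Persist a chat message locally; no network request is involved.
async function saveMessage(role: 'user' | 'assistant', text: string): Promise<void> {
  const db = await openChatDb();
  const tx = db.transaction('messages', 'readwrite');
  tx.objectStore('messages').add({ role, text, at: Date.now() });
  await new Promise<void>((resolve, reject) => {
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}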

2. Native App Mode 🖥️ (Maximum Power)

Install the native app to unlock full potential:

  • Direct data transfer - No IndexedDB limits; the native app handles storage
  • Full system resources - Access ALL your RAM, VRAM, and GPU
  • LM Studio Integration - Run any GGUF model with llama.cpp (see the example after this list):
    • Run small models faster than browser
    • Run large models that don't fit in browser
    • RAM/VRAM split for optimal performance
    • Model management through LM Studio
  • BitNet Support - Memory-efficient 1-bit models (8x less memory)
  • Advanced Agents - Specialized agents via /act endpoint:
    • Web scraping agents
    • Research agents
    • Automation agents
    • Custom tool-using agents
  • File system access - Beyond browser sandbox
  • Complete privacy - Everything stays on your machine

What You Get:

  • Everything from Browser Mode, PLUS:
  • Powerful local models (Llama 3.1 70B, Mistral, etc.)
  • Advanced agentic workflows
  • Unlimited storage (no IndexedDB limits)
  • Desktop automation capabilities
  • Professional-grade AI performance
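
As an illustration of the LM Studio path, the sketch below sends a chat request to LM Studio's local OpenAI-compatible server. The port (1234 is LM Studio's default) and the model name are assumptions you would adjust to your own setup.

// Ask a model served locally by LM Studio (OpenAI-compatible /v1/chat/completions).
// The base URL and model name are assumptions; match them to your LM Studio setup.
async function askLocalModel(prompt: string): Promise<string> {
  const response = await fetch('http://localhost:1234/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'llama-3.1-8b-instruct',                  // whichever model you loaded
      messages: [{ role: 'user', content: prompt }],
      temperature: 0.7,
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content;              // standard OpenAI-style response
}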

3. Cloud API Mode ☁️ (Optional Convenience)

Add cloud APIs when you need specific models or capabilities:

  • Direct Mode - Connect your own API keys (OpenAI, Google AI, OpenRouter); a sketch follows this list
  • Privacy-Proxied Mode (Coming Soon) - Revolutionary privacy layer:
    • 🔒 Your identity masked through our privacy proxy
    • 🔒 Sensitive data replaced with IDs using local AI
    • 🔒 Prompts reformatted before sending to cloud
    • 🔒 Responses decoded back to your original context
    • 🔒 Cloud provider never sees your real data
  • Use strategically - Cloud for specific tasks, local for everything else

What You Get:

  • Everything from Native Mode, PLUS:
  • Access to GPT-4, Claude, Gemini, etc.
  • Latest cutting-edge models
  • Specialized capabilities (vision, audio, etc.)
  • Optional privacy protection (proxied mode)
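
For direct mode, a call with your own key looks like any OpenAI-compatible request. The sketch below targets OpenRouter's public endpoint; the model id is chosen only as an example.

// Direct cloud call with a key you provide; nothing is routed through Tab Agent servers.
// The endpoint is OpenRouter's OpenAI-compatible API; the model id is an example.
async function askCloudModel(apiKey: string, prompt: string): Promise<string> {
  const response = await fetch('https://openrouter.ai/api/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`,               // your own key, stored locally
    },
    body: JSON.stringify({
      model: 'openai/gpt-4o-mini',
      messages: [{ role: 'user', content: prompt }],
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content;
}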

🎨 Intelligent Features

📝 Smart Web Assistant

  • Summarize anything - Articles, PDFs, documentation, research papers
  • Chat with pages - Ask questions about any webpage
  • Contextual search - Find information across your browsing history
  • Multi-document analysis - Compare and synthesize information

🤖 Agentic Workflows

  • Autonomous browsing - AI that can navigate and research for you
  • Task automation - Complete multi-step workflows automatically
  • Intelligent scraping - Extract structured data from websites
  • Research assistant - Gather information from multiple sources
  • Computer use agent (Coming Soon) - Desktop automation with your permission

🌐 Next-Gen Privacy & Network (Coming Soon)

  • Privacy-Proxied APIs - Use cloud models without exposing your identity
  • P2P Direct Communication - Collaborate with other users directly
  • P2P GPU Sharing - Earn by sharing resources, or access bigger models:
    • 💰 Monetize your idle GPU time
    • 🚀 Run models beyond your hardware limits
    • 🔒 Privacy-preserved distributed computing

🧠 Advanced AI Capabilities

  • RAG (Retrieval-Augmented Generation) - Build knowledge bases from your browsing (see the retrieval sketch after this list)
  • Memory-efficient models - BitNet 1.58 uses 8x less memory
  • Multi-modal support - Text, images, and documents
  • Streaming responses - Real-time AI generation
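
To illustrate the RAG idea, the sketch below embeds text snippets in the browser with Transformers.js and picks the closest match by cosine similarity. The package and model id are examples; this is not Tab Agent's internal pipeline.

import { pipeline } from '@huggingface/transformers';

// Small embedding model, downloaded once and cached locally (example model id).
const embedder = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');

async function embedText(text: string): Promise<number[]> {
  const output = await embedder(text, { pooling: 'mean', normalize: true });
  return Array.from(output.data as Float32Array);
}

// Vectors are normalized, so cosine similarity reduces to a dot product.
const dot = (a: number[], b: number[]) => a.reduce((sum, v, i) => sum + v * b[i], 0);

// Return the snippet most relevant to the question, to be passed as chat context.
async function retrieve(question: string, snippets: string[]): Promise<string> {
  const q = await embedText(question);
  const scored = await Promise.all(
    snippets.map(async s => ({ s, score: dot(q, await embedText(s)) })),
  );
  scored.sort((a, b) => b.score - a.score);
  return scored[0].s;
}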

🔌 Seamless Integration

  • Multi-browser support - Chrome, Edge, Firefox, Opera, Brave, Vivaldi
  • Multi-platform - Windows, macOS, Linux
  • Beautiful UI - Modern design with dark mode
  • Fast & responsive - Optimized for performance

🚀 Quick Start

Installation

Choose your platform:

Windows

# Download and run installer
# Visit: https://github.com/ocentra/TabAgentDist/releases/latest
# Download: NativeApp/installers/windows/install-gui.ps1
# Right-click > Run with PowerShell

macOS

# One-liner install
curl -fsSL https://raw.githubusercontent.com/ocentra/TabAgentDist/main/NativeApp/installers/macos/install-tabagent.sh | bash

# Or download and run manually
chmod +x install-tabagent.sh && ./install-tabagent.sh

Linux

# One-liner install
curl -fsSL https://raw.githubusercontent.com/ocentra/TabAgentDist/main/NativeApp/installers/linux/install-tabagent.sh | bash

# Or download and run manually
chmod +x install-tabagent.sh && ./install-tabagent.sh

What Gets Installed?

  1. Browser Extension - The Tab Agent interface in your browser
  2. Native Messaging Host - Secure bridge for system access
  3. LM Studio (Optional) - Local AI server for powerful models

Installation Locations:

  • Windows: %LOCALAPPDATA%\TabAgent
  • macOS: ~/Applications/TabAgent
  • Linux: ~/.local/share/tabagent

💡 How It Works

Three-Tier Architecture

MODE 1: Browser-Only (Zero Install)
┌─────────────────────────────────────────┐
│     Browser Extension                   │
│  ┌─────────────────────────────────┐   │
│  │ UI Layer (Your Interface)       │   │
│  │ • Chat, summarize, automate     │   │
│  └─────────────┬───────────────────┘   │
│                │                        │
│  ┌─────────────▼───────────────────┐   │
│  │ Data Layer (IndexedDB)          │   │
│  │ • Chat history, cache, settings │   │
│  └─────────────┬───────────────────┘   │
│                │                        │
│  ┌─────────────▼───────────────────┐   │
│  │ AI Layer (WebGPU/WASM)          │   │
│  │ • Phi-3.5, SmolLM2, Qwen3       │   │
│  │ • Chunked streaming             │   │
│  └─────────────────────────────────┘   │
└─────────────────────────────────────────┘


MODE 2: Native App (Maximum Power)
┌─────────────────────────────────────────┐
│     Browser Extension                   │
│  • Beautiful UI                         │
│  • User interaction                     │
└───────────┬─────────────────────────────┘
            │
            │ Native Messaging Protocol
            │ (Secure, Sandboxed)
            │
┌───────────▼─────────────────────────────┐
│    Native App (System Bridge)           │
│  ┌─────────────────────────────────┐   │
│  │ Data Storage (No IndexedDB)     │   │
│  │ • Unlimited storage             │   │
│  │ • File system access            │   │
│  └─────────────┬───────────────────┘   │
│                │                        │
│  ┌─────────────▼───────────────────┐   │
│  │ AI Backends                     │   │
│  │ ┌─────────────┐ ┌─────────────┐ │   │
│  │ │ Small Models│ │  LM Studio  │ │   │
│  │ │ (Local CPU) │ │ (RAM/VRAM)  │ │   │
│  │ │ • WebGPU    │ │ • Llama.cpp │ │   │
│  │ │ • Served by │ │ • Large     │ │   │
│  │ │   Native    │ │   models    │ │   │
│  │ └─────────────┘ └─────────────┘ │   │
│  │ ┌─────────────────────────────┐ │   │
│  │ │ BitNet (1-bit quantized)    │ │   │
│  │ │ • 8x less memory            │ │   │
│  │ └─────────────────────────────┘ │   │
│  │ ┌─────────────────────────────┐ │   │
│  │ │ Agentic Layer (/act)        │ │   │
│  │ │ • Specialized agents        │ │   │
│  │ │ • Tool-using capabilities   │ │   │
│  │ │ • LM Studio orchestrates    │ │   │
│  │ └─────────────────────────────┘ │   │
│  └─────────────────────────────────┘   │
└─────────────────────────────────────────┘


MODE 3: Cloud API (Optional)
┌─────────────────────────────────────────┐
│     Browser Extension                   │
└───────────┬─────────────────────────────┘
            │
     ┌──────┴──────┐
     │             │
┌────▼────┐   ┌───▼──────────────────────┐
│ Direct  │   │ Privacy-Proxied (Soon)   │
│ API     │   │ ┌──────────────────────┐ │
│ Calls   │   │ │ Local AI Pre-Process │ │
└────┬────┘   │ │ • Mask identity      │ │
     │        │ │ • Replace sensitive  │ │
     │        │ └──────────┬───────────┘ │
     │        │            │             │
     │        │ ┌──────────▼───────────┐ │
     │        │ │ Cloudflare Proxy     │ │
     │        │ └──────────┬───────────┘ │
     └────────┴────────────┘             │
                  │                      │
            ┌─────▼──────────────────────▼─┐
            │ Cloud Providers              │
            │ • OpenAI, Google, OpenRouter │
            └──────────────────────────────┘

Privacy-First Design

What We Track: Nothing. Zero. Nada.

  • ❌ No analytics
  • ❌ No telemetry
  • ❌ No phone-home
  • ❌ No user data collection
  • ❌ No cloud dependencies (unless you choose cloud APIs)

What You Control: Everything.

  • ✅ Choose which AI backend to use
  • ✅ Run models locally or use cloud APIs
  • ✅ All data stays on your machine (local mode)
  • ✅ Inspect the code anytime
  • ✅ Uninstall completely at any time

🎮 Usage

Getting Started

  1. Install the extension - Load in your browser

  2. Choose your AI backend:

    • Browser: Quick start, no setup needed
    • Native: Install native app for full power
    • API: Add API keys for cloud models
  3. Start using AI:

    • Summarize web pages
    • Chat with documents
    • Search your browsing history
    • Automate web tasks

Example Use Cases

📚 Research & Learning

"Summarize this research paper and explain the key findings"
"Compare these three articles and find common themes"
"Extract all statistics from this page"

💼 Productivity

"Monitor this page for changes and notify me"
"Extract contact information from this directory"
"Summarize my last 10 browsing sessions"

🤖 Automation

"Find the best price for this product across 5 websites"
"Collect all job postings from this page"
"Track news about [topic] and summarize daily"

🔒 Privacy & Security

Our Commitment

Privacy is not a feature—it's our foundation.

Every design decision prioritizes your privacy:

  1. Local-First Processing

    • Browser models run in your browser (WebGPU/WASM)
    • Native models run on your machine (LM Studio/BitNet)
    • No data sent to our servers (we don't have any!)
  2. Transparent Operation

    • Open distribution repository
    • Inspect all installer scripts
    • Review extension code
    • No hidden behavior
  3. User Control

    • You choose which AI backend to use
    • You control what data the AI sees
    • You can disconnect or uninstall anytime
    • No lock-in, no dependencies
  4. Secure Communication

    • Native messaging uses Chrome's secure protocol
    • Extension ↔ Native app communication is sandboxed
    • No network access unless you explicitly use cloud APIs

What About Cloud APIs?

Tab Agent offers three levels of cloud API usage:

Level 1: Local Only (Maximum Privacy)

  • ✅ No cloud connection
  • ✅ Browser models or LM Studio only
  • ✅ Zero data leaves your machine

Level 2: Direct Cloud API (Standard)

If you choose to use cloud APIs directly:

  • ⚠️ Your prompts are sent to those providers (OpenAI, Google AI, etc.)
  • ⚠️ Subject to their privacy policies
  • ✅ You control which APIs to use
  • ✅ Can switch back to local anytime

Level 3: Privacy-Proxied Cloud API (Coming Soon - Revolutionary!)

Get cloud power with privacy protection (a sketch of the idea follows below):

  • 🔒 Identity Masking - Your identity hidden behind our privacy proxy
  • 🔒 Data Anonymization - Sensitive info replaced with IDs using local AI
  • 🔒 Prompt Reformatting - Local model pre-processes before sending
  • 🔒 Response Decoding - Answers translated back to your original context
  • Seamless - You get the answer you need
  • Private - Cloud provider never sees your real data
  • Smart - Local AI protects you automatically

Your choice, your privacy, your control.
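
Since the proxied mode is not released yet, the following is only a sketch of the replace-then-restore pattern described above. The placeholder format and helper names are invented for illustration, and detecting sensitive values would be handled by a local model.

// Illustrative only: swap sensitive values for opaque IDs before the cloud call,
// then restore them in the response on your machine.
type Mapping = Map<string, string>;

function anonymize(prompt: string, sensitive: string[]): { masked: string; map: Mapping } {
  const map: Mapping = new Map();
  let masked = prompt;
  sensitive.forEach((value, i) => {
    const id = `<<ITEM_${i}>>`;                 // token the cloud provider cannot resolve
    map.set(id, value);
    masked = masked.split(value).join(id);
  });
  return { masked, map };
}

function deanonymize(response: string, map: Mapping): string {
  let restored = response;
  for (const [id, value] of map) restored = restored.split(id).join(value);
  return restored;
}

// The provider only ever sees the masked prompt; the mapping never leaves your machine.
const { masked, map } = anonymize(
  'Email jane@example.com about invoice 4412',
  ['jane@example.com', '4412'],
);
// ...send `masked` through the proxy, receive `reply`, then: deanonymize(reply, map)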


🌐 Future Vision: Decentralized AI Network

P2P GPU Sharing (Revolutionary Concept - Coming Soon)

Imagine a world where:

For GPU Owners:

  • 💰 Monetize idle GPU time - Earn credits or payments when your GPU sits idle
  • 🤝 Help the community - Share resources with users who need more power
  • Automated - Set it and forget it; share when available

For Users:

  • 🚀 Access bigger models - Run models beyond your hardware limits
  • 💵 Pay only for what you use - Credits or micro-payments per task
  • 🔒 Privacy-preserved - Encrypted communication, no data retention
  • Faster inference - Distributed processing across multiple GPUs

How It Works:

  1. Users opt-in to share GPU resources
  2. Secure, encrypted P2P connections established
  3. Tasks distributed to available GPUs
  4. Results returned encrypted
  5. Credits exchanged automatically

Privacy Guarantees:

  • 🔒 End-to-end encryption
  • 🔒 No central data storage
  • 🔒 GPU providers never see your prompts
  • 🔒 Only encrypted computation fragments
  • 🔒 Results decrypted only on your machine

This creates a decentralized AI network where privacy and community benefit align!


🛠️ Technical Details

Supported Platforms

Platform        Architecture            Status
Windows 10/11   x64                     ✅ Supported
macOS 10.15+    ARM64 (Apple Silicon)   ✅ Supported
macOS 10.15+    x64 (Intel)             ✅ Supported
Linux           x64                     ✅ Supported

Supported Browsers

  • ✅ Google Chrome
  • ✅ Microsoft Edge
  • ✅ Mozilla Firefox
  • ✅ Brave Browser
  • ✅ Opera
  • ✅ Vivaldi

System Requirements

Minimum (Browser Models Only):

  • 4GB RAM
  • Modern browser with WebGPU support (a quick capability check is sketched below)
  • 2GB free disk space

Recommended (Native App + Local Models):

  • 8GB+ RAM
  • 4GB+ VRAM (for GPU acceleration)
  • 10GB+ free disk space (for models)
  • LM Studio or compatible local AI server
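
Because browser-only mode relies on WebGPU (with WASM as a slower fallback), a quick capability check along these lines tells you which in-browser backend your setup can use; it assumes WebGPU type definitions such as @webgpu/types are available at compile time.

// Detect whether this browser can run WebGPU models; otherwise fall back to WASM.
async function detectBackend(): Promise<'webgpu' | 'wasm'> {
  if ('gpu' in navigator) {
    const adapter = await navigator.gpu.requestAdapter();
    if (adapter) return 'webgpu';   // GPU-accelerated in-browser inference
  }
  return 'wasm';                    // CPU fallback, slower but widely supported
}

detectBackend().then(backend => console.log(`Tab Agent would use the ${backend} backend`));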

📦 What's Included

Browser Extension

  • Modern, responsive UI with dark mode
  • Chat interface for AI interactions
  • Model management and switching
  • Settings and configuration
  • History and context management

Native Messaging Host

  • Secure bridge between browser and system (see the sketch after this list)
  • Access to full system resources (RAM, VRAM, GPU)
  • Local AI model integration
  • File system operations
  • Process management
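
From the extension side, the bridge uses the browser's standard native messaging API. Here is a minimal sketch, where the host name and message shape are placeholders rather than the real registered values.

// Connect the extension to the native messaging host over the browser's stdio bridge.
// 'com.example.tabagent' and the message fields are placeholders for illustration.
const port = chrome.runtime.connectNative('com.example.tabagent');

port.onMessage.addListener((message) => {
  console.log('Native app replied:', message);   // JSON messages, sandboxed by the browser
});

port.onDisconnect.addListener(() => {
  console.log('Native host disconnected:', chrome.runtime.lastError?.message);
});

// Ask the native side to handle a task with full system resources.
port.postMessage({ type: 'chat', prompt: 'Summarize this page', backend: 'lmstudio' });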

Optional Components

  • LM Studio - Local AI server (auto-installed if you choose)
  • BitNet Models - Memory-efficient 1-bit quantized models
  • Cloud API Connectors - OpenRouter, OpenAI, Google AI (if you want)

🔧 Configuration

AI Backend Selection

Tab Agent adapts to your needs:

Just Starting? → Use Browser Models

  • No installation needed
  • Works immediately
  • Perfect for basic tasks

Want More Power? → Install Native App

  • One-click installer
  • Access to LM Studio integration
  • Run powerful local models

Need Specific Models? → Add Cloud APIs

  • Optional API key configuration
  • Access to GPT-4, Claude, Gemini, etc.
  • Use only when needed

Model Management

  • Browser Models: Automatically cached in IndexedDB
  • Native Models: Managed through LM Studio
  • Custom Models: Add any HuggingFace ONNX model (see the sketch below)
  • Cloud Models: Configure API keys in settings
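
As a sketch of how a custom HuggingFace ONNX model can run in the browser with Transformers.js (the model id and options are examples; Tab Agent's own model manager may differ):

import { pipeline } from '@huggingface/transformers';

// Load a text-generation model in the browser; weights are cached locally after
// the first download. The model id below is an example of a small instruct model.
const generator = await pipeline('text-generation', 'HuggingFaceTB/SmolLM2-360M-Instruct', {
  device: 'webgpu',   // use 'wasm' instead if WebGPU is unavailable
  dtype: 'q4',        // quantized weights to keep memory use low
});

const output = await generator('Explain what a privacy-first AI assistant is.', {
  max_new_tokens: 128,
});
console.log(output);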

🤝 Support & Community

Getting Help

Frequently Asked Questions

Q: Is Tab Agent really free? A: Yes! The core functionality is completely free and open. Advanced enterprise features and premium services may be offered in the future, but the core will always remain accessible.

Q: Do I need to install anything? A: Not necessarily! Tab Agent works in browser-only mode. The native app is optional but unlocks more powerful features.

Q: What data do you collect? A: Nothing. Zero data collection. No analytics. No tracking. Your data is yours.

Q: Can I use my own AI models? A: Absolutely! Use LM Studio, add custom HuggingFace models, or bring your own API keys.

Q: Is this open source? A: This distribution repository is open for transparency. You can inspect all installer scripts and the browser extension code.

Q: How do I uninstall? A: Run the uninstall script in your installation directory, or simply delete the Tab Agent folder. Clean and simple.


🚀 Roadmap

Current Features (v1.0)

  • ✅ Multi-backend AI support (Browser/Native/API)
  • ✅ LM Studio integration
  • ✅ BitNet memory-efficient models
  • ✅ Web page summarization
  • ✅ Chat with pages
  • ✅ Multi-browser support
  • ✅ Cross-platform installers

Coming Soon

  • 🔜 Privacy-Proxied Cloud API - Use cloud models with identity masking and data protection
  • 🔜 Enhanced agentic workflows - More autonomous task completion
  • 🔜 Multi-modal support - Images, audio, video understanding
  • 🔜 Advanced RAG capabilities - Better knowledge base management
  • 🔜 Desktop automation - Computer Use Agent for system-level tasks
  • 🔜 P2P Direct Communication - Connect directly with other users
  • 🔜 P2P GPU Sharing (Revolutionary) - Earn credits by sharing GPU resources:
    • 💰 Get paid or earn credits for helping others with bigger AI tasks
    • 🤝 Access more powerful models by pooling community resources
    • 🔒 Secure, encrypted, privacy-preserving
    • ⚡ Faster inference through distributed computing
  • 🔜 Plugin ecosystem - Extend Tab Agent with custom functionality

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.


🙏 Credits

Built with passion by Ocentra

We believe in a future where AI empowers individuals without compromising privacy.

Technologies We Love

  • 🎨 Modern web technologies (TypeScript, Tailwind CSS)
  • 🧠 Transformers.js for browser AI
  • 🖥️ LM Studio for local inference
  • 🔬 BitNet for memory-efficient models
  • 🔐 Native messaging for secure communication


💖 Support the Project

If Tab Agent helps you take back control of your digital life, consider:

  • Star this repository - Help others discover Tab Agent
  • 🐛 Report bugs - Help us improve
  • 💬 Share feedback - Tell us what you need
  • 🌍 Spread the word - Privacy matters

Tab Agent - Your AI, Your Data, Your Control 🚀

Made with ❤️ for 🔒 Privacy & Security


📋 Quick Links

Resource           Link
📥 Download        Latest Release
📖 Documentation   Wiki
🐛 Bug Reports     Issues
💬 Community       Discussions
🌐 Website         ocentra.ca
