ryash14/SmartDocAI

Run SmartDoc on Target Machine (Completely Offline)

This guide explains how to set up and run the SmartDoc AI Q&A Assistant on a target machine with no internet access, using Docker.


What is SmartDoc?

SmartDoc is an offline, PDF-based Q&A system that uses local large language models (LLMs) such as llama3 and phi3, together with local embeddings. All processing happens on your machine: no internet connection, no external APIs.


Prerequisites

  • Docker Desktop must be installed and running.
    Download: https://www.docker.com/products/docker-desktop

  • This entire project directory must be copied from a source machine, including:

    • .ollama/ directory containing the pre-downloaded models
    • local_models/ containing the local reranker model
    • demo-rag-chroma/ (optional; auto-generated on first run if missing)

Folder Structure

Ensure the folder structure looks like this:

SmartDoc/
├── app.py
├── requirements.txt
├── Dockerfile
├── docker-compose.yml
│
├── .ollama/                         # Contains pre-downloaded LLMs
│   └── models/
│       ├── blobs/
│       └── manifests/
│
├── demo-rag-chroma/                # Vector store (auto-generated if missing)
│
├── local_models/
│   └── ms-marco-MiniLM-L-6-v2/
│       ├── config.json
│       ├── pytorch_model.bin
│       ├── special_tokens_map.json
│       ├── tokenizer_config.json
│       └── vocab.txt

Do not rename or delete the .ollama or local_models directories.
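Before building, it can help to sanity-check that the model directories actually made it onto the target machine. A minimal sketch of such a preflight check, based on the layout above (the specific file list is an assumption; adjust it if your copy differs):

```python
from pathlib import Path

# Paths SmartDoc expects, per the folder structure shown above.
# This list is illustrative; extend it to match your actual copy.
REQUIRED = [
    ".ollama/models",
    "local_models/ms-marco-MiniLM-L-6-v2/config.json",
    "local_models/ms-marco-MiniLM-L-6-v2/vocab.txt",
]

def missing_paths(root: str) -> list[str]:
    """Return the required paths that are absent under `root`."""
    base = Path(root)
    return [p for p in REQUIRED if not (base / p).exists()]

if __name__ == "__main__":
    problems = missing_paths(".")
    if problems:
        print("Missing before build:", ", ".join(problems))
    else:
        print("All required files present; safe to build.")
```

Run it from inside the SmartDoc directory; an empty result means the copy is complete enough to build.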


How to Run

Step 1: Build the Docker Image

Open a terminal inside the SmartDoc directory and run:

docker compose build

Step 2: Start the Application

docker compose up
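For orientation, a compose file for this kind of setup typically mounts the copied model directories into the containers. The sketch below is a hedged illustration, not the project's actual file: the service names, image tag, mount points, and ports are assumptions (only port 8501 is confirmed by the guide), so always defer to the docker-compose.yml shipped with the project.

```yaml
services:
  ollama:
    image: ollama/ollama            # serves the local LLMs
    volumes:
      - ./.ollama:/root/.ollama     # pre-downloaded models (assumed mount point)
    ports:
      - "11434:11434"               # Ollama's default API port
  smartdoc:
    build: .
    ports:
      - "8501:8501"                 # the UI port shown in the startup log
    volumes:
      - ./local_models:/app/local_models        # reranker (assumed path)
      - ./demo-rag-chroma:/app/demo-rag-chroma  # vector store (assumed path)
    depends_on:
      - ollama
```

The key idea is that both .ollama and local_models are bind-mounted from the host, which is why the guide insists they be copied intact.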

Wait until you see a message similar to:

smartdoc-1  |   URL: http://0.0.0.0:8501

Access the Application

Open your browser and go to:

http://localhost:8501

Usage

  1. Upload a PDF document through the interface
  2. Choose an LLM from the dropdown menu (e.g., llama3 or phi3)
  3. Type a question related to the document
  4. Receive a response grounded in the document's content
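Behind these steps, a PDF Q&A pipeline like this typically splits the uploaded document into overlapping chunks before embedding them into the vector store, so that each question can be matched against small, focused passages. A simplified illustration of that chunking step (the size and overlap values are illustrative, not SmartDoc's actual parameters):

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split `text` into chunks of up to `size` characters,
    with consecutive chunks overlapping by `overlap` characters."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap  # how far the window advances each iteration
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + size]
        if chunk:
            chunks.append(chunk)
    return chunks
```

Each chunk is embedded and stored (here, in demo-rag-chroma/); at question time the closest chunks are retrieved and passed to the chosen LLM along with your question.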

Offline Functionality

  • All models are run locally via Ollama
  • Embeddings are generated with local embedding models
  • No API keys or internet connection are required
  • Once set up, it works fully offline

Notes

  • The first launch may take a few moments as services initialize
  • If anything breaks, try restarting Docker, or run `docker compose down` followed by `docker compose up` again
