Lotus turns your computer into an AI-controlled remote workstation. Send natural-language commands from Telegram, and Lotus operates your desktop: opening files, playing music, taking screenshots, running research, and chatting with you using a private, local LLM. No cloud, no data leakage, no surprises.
| | Windows | macOS |
|---|---|---|
| Latest version | v2.2.0-STABLE | v2.0.1 |
| Installer | `LotusSetup.exe` (Inno Setup, autonomous) | `Lotus-2.0.1.dmg` (drag-to-install) |
| GUI | System-tray control panel | Native Swift menu-bar app (`Lotus.app`) |
| Background service | Hidden `pythonw.exe` via scheduled task | `launchd` user agent (`com.lotus.botservice`) |
| Source | `Windows-MCP/` | `Mac-MCP/` |
| Lead | @SatyamPote | @JayashBhandary |
| Read this for install / dev | Windows guide → | macOS guide → |
TL;DR for first-time users:
- Get a Telegram bot token from @BotFather and your Telegram user ID from @userinfobot.
- Install Ollama and pull a small model: `ollama pull qwen2.5:3b`.
- Follow the platform-specific install guide above.
- DM your bot. Try `dashboard`, `take screenshot`, or `play lo-fi`.
- What is Lotus?
- Feature Tour
- Command Reference
- Architecture
- Configuration
- Privacy & Security
- Repository Layout
- Contributors
- License
Lotus is a cross-platform AI agent that bridges your computer and Telegram. It runs as a background service, listens to messages from a small set of allowed Telegram users, interprets them, and translates them into real actions on your machine: file lookups, app automation, music playback, screen capture, deep research, and conversational AI.
The intelligence layer is fully local: Lotus integrates with Ollama to run open-weight LLMs (Llama, Qwen, Phi, etc.) on your own hardware. Nothing about your files, your queries, or your conversations leaves the machine, except of course the Telegram messages you choose to send.
The two flavors share the same philosophy and the same MCP-style tool surface, but each is implemented natively for its platform. Pick the right guide above for installation, build instructions, troubleshooting, and platform-specific architecture details.
These features are available on both Windows and macOS.
Commands are matched against a hardened priority chain (System > Files > Music > Research) before the LLM ever sees them. This guarantees that literal commands like `open report.pdf` or `volume up` execute deterministically and don't get hallucinated into something else. The AI is invoked only for genuinely ambiguous or open-ended queries.
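As an illustration of that ordering (a minimal sketch; the handler names and signatures are assumptions, not Lotus's actual internals):

```python
# Illustrative priority-chain dispatch: deterministic handlers are tried in a
# fixed order, and the LLM is consulted only if none of them claims the message.
from typing import Callable, Optional

Handler = Callable[[str], Optional[str]]

def system_handler(text: str) -> Optional[str]:
    # Literal system commands resolve here and never reach the model.
    if text.strip().lower() == "volume up":
        return "Volume raised."
    return None

def route(text: str, handlers: list[Handler], llm_fallback: Callable[[str], str]) -> str:
    for handler in handlers:          # System > Files > Music > Research
        reply = handler(text)
        if reply is not None:         # a handler claimed the command
            return reply
    return llm_fallback(text)         # only ambiguous input reaches the LLM
```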
The `research <topic>` command runs a tiered pipeline:
- Wikipedia – primary structured source, fast and citation-friendly
- DuckDuckGo Instant Answer API – fallback for current events
- Web scraping with `markdownify` – final fallback for arbitrary URLs
Results are aggregated, the LLM produces a structured summary, and you receive a professional PDF report plus inline images via Telegram.
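A rough sketch of that fallback order, assuming the public Wikipedia REST and DuckDuckGo Instant Answer endpoints (Lotus's real pipeline also aggregates sources and renders the PDF):

```python
# Tiered research fallback: Wikipedia summary -> DuckDuckGo Instant Answer ->
# raw page scrape converted to Markdown with markdownify.
import requests
from markdownify import markdownify

def research(topic: str) -> str:
    # 1. Wikipedia REST summary (structured, citation-friendly)
    wiki = requests.get(
        "https://en.wikipedia.org/api/rest_v1/page/summary/" + topic.replace(" ", "_"),
        timeout=10,
    )
    if wiki.ok and wiki.json().get("extract"):
        return wiki.json()["extract"]

    # 2. DuckDuckGo Instant Answer API (better for current events)
    ddg = requests.get(
        "https://api.duckduckgo.com/",
        params={"q": topic, "format": "json", "no_html": 1},
        timeout=10,
    )
    if ddg.ok and ddg.json().get("AbstractText"):
        return ddg.json()["AbstractText"]

    # 3. Last resort: fetch an arbitrary page and keep a Markdown excerpt
    page = requests.get("https://duckduckgo.com/html/?q=" + topic, timeout=10)
    return markdownify(page.text)[:2000]
```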
Every action emits a spoken confirmation in a clear, natural female voice. Local TTS plays through your speakers; the same audio is sent as a Telegram Voice Note for remote acknowledgement when you're away from the machine.
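The TTS backend isn't pinned down in this README; a minimal local-TTS sketch using `pyttsx3` (an assumption, not necessarily Lotus's engine) looks like this:

```python
# Rough local text-to-speech sketch: speak the confirmation aloud and also
# render it to a file that could be forwarded as the Telegram voice note.
import pyttsx3

def speak(text: str, out_path: str = "confirmation.wav") -> str:
    engine = pyttsx3.init()
    engine.say(text)                      # play through the local speakers
    engine.save_to_file(text, out_path)   # keep a copy for the remote acknowledgement
    engine.runAndWait()
    return out_path
```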
Lotus reserves a 2 GB sandbox under your user data dir for downloads, research artifacts, and screen recordings. An LRU cleanup keeps it under quota: your disk doesn't fill up if you forget about it.
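What such an mtime-based LRU sweep can look like (illustrative only; the quota constant and eviction details are assumptions):

```python
# Evict least-recently-modified files until the sandbox is back under quota.
from pathlib import Path

QUOTA_BYTES = 2 * 1024**3  # the 2 GB cap mentioned above

def enforce_quota(sandbox: Path) -> None:
    files = sorted((p for p in sandbox.rglob("*") if p.is_file()),
                   key=lambda p: p.stat().st_mtime)   # oldest first
    total = sum(p.stat().st_size for p in files)
    for p in files:
        if total <= QUOTA_BYTES:
            break
        total -= p.stat().st_size
        p.unlink()                                    # LRU-style eviction
```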
Single-instance enforcement: only one player is ever alive at a time, so queueing a new song cleanly stops the previous one.
- `play <query>` – searches and streams via `yt-dlp` (see the sketch below)
- `pause` / `resume` / `stop`
- `next` / `prev` – through the session queue
- `volume up` / `volume down` – system mixer hooks
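A sketch of the search-and-stream step, with `yt-dlp` resolving an audio URL that is handed to `mpv` (the session queue and single-instance handling are omitted):

```python
# Search YouTube for the query, resolve a direct audio stream URL with yt-dlp,
# and hand it to mpv for playback.
import subprocess
import yt_dlp

def play(query: str) -> subprocess.Popen:
    opts = {"format": "bestaudio", "noplaylist": True, "quiet": True}
    with yt_dlp.YoutubeDL(opts) as ydl:
        info = ydl.extract_info(f"ytsearch1:{query}", download=False)
        stream_url = info["entries"][0]["url"]        # first search result
    # --no-video keeps mpv audio-only; the caller stops any previous player first
    return subprocess.Popen(["mpv", "--no-video", stream_url])
```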
Default model: `qwen2.5:3b` – runs comfortably on a recent MacBook or any PC with 8 GB RAM. Want bigger? Swap to `llama3.1:8b`, `phi4`, or any model in the Ollama library; Lotus picks it up without code changes.
- `take screenshot` – instant PNG of the current desktop (see the sketch below)
- `record screen <seconds>` – captures video (ffmpeg under the hood)
- `download <youtube-url>` – pulls audio or video via `yt-dlp`
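For illustration, a full-desktop capture takes only a few lines; Pillow's `ImageGrab` (an assumption about the backend) works on both Windows and macOS:

```python
# Grab the current desktop and save it as a PNG ready to send over Telegram.
from datetime import datetime
from PIL import ImageGrab

def take_screenshot(out_dir: str = ".") -> str:
    path = f"{out_dir}/screenshot-{datetime.now():%Y%m%d-%H%M%S}.png"
    ImageGrab.grab().save(path)   # full-screen capture
    return path
```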
Every reply is wrapped in a clean monospaced frame with a header banner. File listings, dashboards, and research summaries are visually distinct and pleasant to read on mobile.
A condensed catalog. Type `help` to your bot for an in-chat version.
All commands work identically on Windows and macOS.
| Command | What it does |
|---|---|
| `find <query>` | Fuzzy search across user dirs and the storage sandbox |
| `open <filename>` | Open the file in its default application |
| `send <filename>` | Upload the file to Telegram |
| `ls` | List the current working directory |
| `cd <path>` | Change the bot's working directory |
| `tree` | Print a directory tree (depth-limited) |
| Command | What it does |
|---|---|
| `play <song name>` | Search + stream audio |
| `pause` / `resume` / `stop` | Standard playback control |
| `volume up` / `volume down` | System volume nudge |
| `next` / `prev` | Skip in the session queue |
| `now playing` | Show current track and elapsed time |
| Command | What it does |
|---|---|
| `research <topic>` | Wikipedia → DDG → scrape → PDF |
| `list research` | Most recent reports with timestamps |
| `say <text>` | Speak text via local TTS + Telegram voice note |
| `chat <prompt>` | One-shot LLM completion (Ollama) |
| Command | What it does |
|---|---|
| `dashboard` | Battery, CPU, RAM, disk, uptime, IP |
| `lock` / `sleep` / `shutdown` | Power management |
| `take screenshot` | Capture the desktop as PNG |
| `record screen <seconds>` | Capture a screen video |
Platform-specific commands (e.g. `whatsapp send` on Windows) are documented in the per-platform READMEs.
Lotus is a three-tier system on both platforms.
    ┌──────────────────────────────────────────────────────────┐
    │                         Telegram                          │  ← user
    └────────────────────────────┬─────────────────────────────┘
                                 │ long-poll updates
    ┌────────────────────────────▼─────────────────────────────┐
    │               bot_service.py (background)                 │
    │                                                           │
    │  ┌───────────────────┐    ┌──────────────────────┐        │
    │  │ telegram_bot      │    │ control_api (HTTP)   │        │
    │  │ - command parse   │    │ - GET  /api/status   │        │
    │  │ - priority chain  │    │ - GET  /api/logs     │        │
    │  └────────┬──────────┘    │ - POST /api/restart  │        │
    │           │               └──────────────────────┘        │
    │  ┌────────▼────────────────────────────────────┐          │
    │  │ MCP-style tool surface (mac_mcp / win_mcp)  │          │
    │  │ - desktop    (mouse, keyboard, screenshot)  │          │
    │  │ - filesystem (find, open, ls, cd)           │          │
    │  │ - media      (yt-dlp, ffmpeg, mpv)          │          │
    │  │ - research   (wiki, DDG, scrape, PDF)       │          │
    │  │ - tts / voice                               │          │
    │  └────────┬────────────────────────────────────┘          │
    └───────────┼───────────────────────────────────────────────┘
                │
    ┌───────────▼────────────┐      ┌─────────────────────┐
    │ Ollama daemon          │      │ Native GUI          │
    │ (local LLM, http)      │      │ Lotus.app / Tray    │
    │                        │      │ (status + control)  │
    └────────────────────────┘      └─────────────────────┘
- **Telegram bot** – `python-telegram-bot`, long-polling, gated by an allowlist of user IDs. Anything from outside the list is dropped.
- **MCP server** – `fastmcp`-based tool surface that's exposed to both the bot loop and (optionally) external MCP clients like Claude Desktop.
- **Control API** – a tiny `uvicorn` HTTP server on `localhost:40510`, used by the GUI to query status and trigger restarts. Bound to loopback only.
- **GUI** – a thin client over the control API. The bot service is authoritative; the GUI never owns state.
- **Ollama** – out-of-process local LLM server. Lotus speaks to it over HTTP at `http://127.0.0.1:11434` (see the sketch below).
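For reference, a one-shot completion against that endpoint looks roughly like this (Ollama's standard `/api/generate` route; the helper name is illustrative):

```python
# Minimal one-shot completion against the local Ollama daemon.
import requests

def ask_ollama(prompt: str, model: str = "qwen2.5:3b") -> str:
    r = requests.post(
        "http://127.0.0.1:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    r.raise_for_status()
    return r.json()["response"]   # full completion when stream=False
```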
The platform-specific process supervision and bundle layout are documented in each platform's README:
- Universal – the same client works on iOS, Android, web, and desktop.
- Free message API – no SMS / Twilio dependencies.
- Bot tokens are revocable – if a token leaks, you regenerate it via BotFather.
- End-to-end optional – Lotus uses the standard bot API, but you can layer a private channel or Telegram MTProxy if you want extra hop secrecy.
The schema is identical on both platforms. The location differs:
| Platform | Path |
|---|---|
| Windows | `%LOCALAPPDATA%\Lotus\config.json` |
| macOS | `~/Library/Application Support/Lotus/config.json` |
| Field | Type | Description |
|---|---|---|
| `name` | string | Display name (used in greetings: "Hello `<name>`") |
| `telegram_token` | string | Bot token from BotFather |
| `allowed_user_id` | string | Comma-separated list of Telegram user IDs allowed to issue commands |
| `model_name` | string | Ollama model identifier – must already be `ollama pull`'d |
| `created_at` | string | ISO-8601 timestamp written by the wizard |
Example:

    {
      "name": "Jayash",
      "telegram_token": "1234567890:ABC-defGhIjKlmNoPqRStUvWxYz1234567890",
      "allowed_user_id": "1327255784,9876543210",
      "model_name": "qwen2.5:3b",
      "created_at": "2026-05-09 00:33:21"
    }

| Variable | Default | Effect |
|---|---|---|
| `LOTUS_CONTROL_PORT` | `40510` | Port the local control API listens on |
| `ANONYMIZED_TELEMETRY` | `true` | Set to `false` to disable optional PostHog event reporting |
Platform-specific environment variables are documented in each platform's README.
    PORT=40510  # or read from the platform's port file
    curl http://127.0.0.1:$PORT/api/status       # service health + uptime
    curl http://127.0.0.1:$PORT/api/logs         # last 100 log lines
    curl http://127.0.0.1:$PORT/api/config       # current config (token redacted)
    curl -X POST http://127.0.0.1:$PORT/api/restart
    curl -X POST http://127.0.0.1:$PORT/api/stop

The native GUI on each platform uses this same surface; there is no private API.
Lotus is designed to be private by default:
- ✅ Local LLM only – Ollama runs on your machine. Prompts and conversations never touch a third-party API.
- ✅ Allowlist authentication – Telegram user IDs not in `allowed_user_id` are silently ignored. The bot does not respond, log, or acknowledge them (see the sketch after this list).
- ✅ Loopback control API – bound to `127.0.0.1` only; not exposed on any network interface.
- ✅ No outbound telemetry by default for the macOS app's installer steps. Set `ANONYMIZED_TELEMETRY=false` in `.env` to also disable the bot's optional PostHog events.
- ✅ Token storage – `config.json` is mode `0600` after the wizard writes it. The macOS Swift GUI redacts tokens in the Settings view.
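A sketch of how that allowlist gate can be wired with `python-telegram-bot` (v20-style API; the handler body and token placeholder are illustrative):

```python
# Minimal allowlist gate: only messages from IDs in ALLOWED ever match the handler.
from telegram.ext import ApplicationBuilder, MessageHandler, filters

ALLOWED = {1327255784, 9876543210}          # the IDs from allowed_user_id

async def handle(update, context):
    await update.message.reply_text("ok")   # reached only by allowlisted users

app = ApplicationBuilder().token("YOUR_BOT_TOKEN").build()   # placeholder token
app.add_handler(MessageHandler(filters.TEXT & filters.User(user_id=ALLOWED), handle))
app.run_polling()                           # anyone else is simply never answered
```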
| Concern | Mitigation |
|---|---|
| Bot token leaks | Revoke via BotFather, regenerate, rewrite config.json |
| Allowed user phone gets compromised | Remove their ID from allowed_user_id, restart |
| Local code execution by a permitted user | Lotus is a remote-control agent; trust the allowlist accordingly |
| Network sniffer on home wifi | Telegram traffic is TLS; control API is loopback |
| Malicious DMG / EXE | Verify the SHA-256 from the Release page against SHA256SUMS.txt |
    Lotus/
    ├── README.md                 ← you are here (connector)
    │
    ├── Windows-MCP/              # Windows AI agent
    │   ├── README.md             ← Windows install + dev guide
    │   └── ...                   # Inno Setup, tray app, command engine
    │
    ├── Mac-MCP/                  # macOS native menu-bar app + bot
    │   ├── README.md             ← macOS install + dev guide
    │   ├── ControlPanel/         # Swift Package - Lotus.app source
    │   ├── src/mac_mcp/          # Python MCP server + Telegram bot
    │   ├── bot_service.py        # bot service entry point
    │   ├── pyproject.toml
    │   └── SETUP.md
    │
    ├── release-notes/            # per-release curated notes (vX.Y.Z.md)
    │   ├── README.md
    │   ├── TEMPLATE.md
    │   ├── v1.0.0.md
    │   ├── v2.0.0.md
    │   └── v2.0.1.md
    │
    └── .github/
        ├── workflows/swift.yml   # macOS build & release pipeline
        └── RELEASING.md          # pipeline runbook
Lotus is the product of two leads, one on each platform:
- **Satyam Pote** (@SatyamPote) – Project creator · Windows lead. Designed and built the original Lotus agent, the Windows tray app, the priority routing engine, the multi-source research pipeline, and the Inno Setup deployment story.
- **Jayash Bhandary** (@JayashBhandary) – macOS lead · 📧 findjayash@gmail.com · 💼 LinkedIn · 📸 Instagram. Designed and built the macOS native menu-bar app (`Lotus.app`), the universal-binary build pipeline, the standalone DMG installer with bundled `uv` runtime, the writable runtime-dir architecture, and the GitHub Actions release workflow.
Pull requests and issues are welcome. Please:
- Open an issue first for anything non-trivial – saves you from building something we'd want differently.
- For UI work, include a screenshot or short screen recording.
- For new MCP tools, add a docstring describing the user-facing command, expected arguments, and what state it touches.
- Keep curated release notes in `release-notes/` up to date – they ship as the GitHub Release body.
This project is released under the MIT License – see `LICENSE` for the full text.

The bundled `uv` binary used by the macOS installer is distributed under the MIT/Apache-2.0 license by Astral. Ollama models you pull are subject to their respective upstream licenses.
Built for stability. Built for privacy. Built for both Windows and Mac.