SeeNot is a mobile screen-time management assistant unlike any you've seen before.
You tell it what you want to do — and what you want to avoid — and it watches your screen to keep you honest.
"Open Instagram, but don't let me scroll Reels." "I can use Reddit for 15 minutes — but only my programming subreddits." "Let me check YouTube only for learning purposes."
SeeNot listens, understands, and dynamically intervenes if you drift.
Note: We are now providing free AI keys for testing. Contact scrisqiu at hotmail.com for more information.
- Speak your intention — tap the floating button and say what you want to do in natural language (English, Chinese, or other languages).
- SeeNot parses it — an LLM converts your words into a structured rule: which app, what's allowed, what's off-limits, and for how long.
- Your session begins — SeeNot monitors your screen in the background using Android's Accessibility Service.
- It intervenes when needed — if you wander into territory you said you'd avoid, or onto screens unrelated to your stated intent, SeeNot nudges you back with enforced actions.
- You can correct it — if SeeNot gets a screen wrong, mark it as a false positive and add a short note. SeeNot uses that correction in later judgments for similar screens.
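As a sketch of step 2, the structured rule the LLM produces might look something like the following. The field names and values here are illustrative assumptions, not SeeNot's actual schema:

```python
import json

# Hypothetical rule an LLM could derive from the spoken intent
# "I can use Reddit for 15 minutes — but only my programming subreddits."
# Field names are assumptions for illustration, not SeeNot's real schema.
rule = {
    "app": "com.reddit.frontpage",            # target app package
    "allowed": ["programming subreddits"],     # content the user permitted
    "forbidden": ["unrelated feeds", "popular/all pages"],
    "time_limit_minutes": 15,                  # per-session cap
}

print(json.dumps(rule, indent=2))
```

A structured rule like this is what lets the screen monitor make per-screen decisions instead of simply blocking the whole app.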
- Voice-first intent input — speak naturally; the AI figures out the rest
- AI screen analysis — vision model reads your screen to detect violations in context, not just by app name
- Multiple enforcement levels: Reminder notification / automatic navigation back / home return
- Flexible time limits — per-session caps, per-content caps, or daily totals
- Multilingual — UI and intent parsing support English and Chinese; designed to extend to 20+ languages
- Local-first — all session data stays on your device
Pre-built APKs are available on the Releases page. Download and install one, then follow the in-app setup.
You'll need to:
- Grant Accessibility Service permission so SeeNot can detect the current app and screen state.
- Grant "Display over other apps" permission so SeeNot can show the floating button and intervention overlays.
- Enter an OpenAI-compatible vision model API key. DashScope Qwen is recommended for lower cost and latency.
- Optionally enter a speech-to-text API key if you want voice input. Text input works without STT.
SeeNot is best tested on a real Android device. Some accessibility and overlay behaviors may differ on emulators.
SeeNot requires an AI API key to work. We are currently providing free API access for testing — contact me at scrisqiu at hotmail.com to get one.
Alternatively, you can register your own AI key: create an account, enable billing or complete any required verification, open the provider's API key page, create a new key, and paste it into SeeNot.
- OpenAI: create a key from the OpenAI API keys page: https://platform.openai.com/api-keys
- Anthropic: sign in to Anthropic Console, then create a key from Account Settings / API Keys: https://console.anthropic.com/ and docs: https://docs.anthropic.com/en/api/getting-started
- Gemini: create and manage keys in Google AI Studio: https://aistudio.google.com/app/apikey and guide: https://ai.google.dev/gemini-api/docs/api-key
- Qwen (low-cost):
- International: use Alibaba Cloud Model Studio international, then create a key from the API Key page. Official guide: https://www.alibabacloud.com/help/en/model-studio/get-api-key
- China route: use Alibaba Cloud China. Official guide: https://www.alibabacloud.com/help/zh/model-studio/get-api-key
- GLM (low-cost):
- International: use Z.ai, then create a key from the API Keys page. Official docs: https://docs.z.ai/guides/develop/http/introduction and key page: https://z.ai/manage-apikey/apikey-list
- China route: use Zhipu BigModel Open Platform. Official docs: https://docs.bigmodel.cn/ and example guide: https://docs.bigmodel.cn/cn/guide/develop/claude/introduction
Or use any OpenAI-compatible provider that offers a vision-language model (VLM) — or even a self-hosted model.
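Because these providers all speak the OpenAI-compatible chat completions format, a screen-check request is just a chat message that mixes text with a base64-encoded screenshot. The sketch below builds such a request body; the model name, prompt wording, and helper function are assumptions for illustration — substitute whatever your chosen provider documents:

```python
import json

def build_screen_check_payload(model: str, intent: str, image_b64: str) -> dict:
    """Hypothetical helper: build an OpenAI-compatible vision request body.

    The prompt text and model name are illustrative assumptions, not
    SeeNot's actual implementation.
    """
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    # Text part: the user's stated intent for this session.
                    {"type": "text",
                     "text": f"The user's stated intent is: {intent}. "
                             "Does this screen violate it?"},
                    # Image part: the screenshot as a base64 data URL.
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
                ],
            }
        ],
    }

payload = build_screen_check_payload(
    "qwen-vl-plus", "only programming subreddits", "<base64 screenshot>")
print(json.dumps(payload, indent=2))
```

Any endpoint that accepts this message shape — hosted or self-hosted — should work, which is why the provider choice above is flexible.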
git clone https://github.com/RoderickQiu/seenot-app.git
cd seenot-app

Then open the project in Android Studio and run it on a device or emulator (API 30+).
Or build from the command line:
./gradlew :app:assembleDebug
adb install app/build/outputs/apk/debug/app-debug.apk

seenot/
├── app/                          # Android application
│   └── src/main/java/com/seenot/app/
│       ├── ai/                   # LLM intent parsing, screen analysis, STT
│       ├── domain/               # Session management, business logic
│       ├── data/                 # Room database, repositories
│       ├── service/              # Accessibility & foreground services
│       └── ui/                   # Jetpack Compose screens and overlays
└── ai-debugger/                  # CLI tool for AI prompt development
SeeNot has no hosted backend. Screenshots taken for screen analysis are sent only to the AI provider you configure, and session history stays on your device in a local database.
Mozilla Public License 2.0 — see LICENSE for details.
You can use, modify, and distribute this software. If you modify MPL-licensed files, you must share those changes under the same license. You may combine this code with code under other licenses in a larger work.