Nataris — P2P inference via Android phones, add as a custom provider in Continue #12211
Sharrmavishal started this conversation in Models + Providers
Built this over three months with a two-person team. Since Continue supports custom OpenAI-compatible providers, figured this would be useful to share.
What Nataris is
A P2P inference marketplace. Android phones run open-weight models locally (Qwen 2.5 0.5B, Llama 3.2 1B) and serve API requests via a standard OpenAI-compatible endpoint. Phone owners get paid per token. You get inference without managing any servers.
No prompt logging. No content filtering. No model training on your queries.
Continue config
Add this to your `config.json`:

```json
{
  "models": [
    {
      "title": "Nataris Fast",
      "provider": "openai",
      "model": "nataris-fast",
      "apiBase": "https://api.nataris.ai/v1",
      "apiKey": "YOUR_NATARIS_API_KEY"
    },
    {
      "title": "Nataris Balanced",
      "provider": "openai",
      "model": "nataris-balanced",
      "apiBase": "https://api.nataris.ai/v1",
      "apiKey": "YOUR_NATARIS_API_KEY"
    }
  ]
}
```

- `nataris-fast` = Qwen 2.5 0.5B (quick, ~5s)
- `nataris-balanced` = Llama 3.2 1B (better quality, ~15–20s)

One thing to note: keep streaming enabled. Device cold-starts can cause timeouts on non-streaming requests.
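Since streaming is the recommended mode, here is a minimal sketch of what consuming the stream looks like outside Continue. The endpoint is OpenAI-compatible, so the response should be standard SSE `data:` lines; the parser below is an illustration exercised against sample chunks, not an official Nataris SDK, and the request shape in the comment is the standard OpenAI one rather than anything Nataris-specific.

```python
import json

# A streaming chat completion is a POST to {apiBase}/chat/completions
# with a body like {"model": "nataris-fast", "stream": true, "messages": [...]};
# the response arrives as server-sent-event "data:" lines.

def iter_sse_tokens(lines):
    """Yield content deltas from OpenAI-style SSE 'data:' lines."""
    for raw in lines:
        line = raw.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        token = chunk["choices"][0].get("delta", {}).get("content")
        if token:
            yield token

# Exercised against sample chunks in the OpenAI streaming format:
sample = [
    'data: {"choices":[{"delta":{"content":"Hello"}}]}',
    '',
    'data: {"choices":[{"delta":{"content":" world"}}]}',
    'data: [DONE]',
]
print("".join(iter_sse_tokens(sample)))  # → Hello world
```

Because tokens arrive incrementally, a slow device cold-start only delays the first chunk instead of tripping a whole-request timeout.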
Where we are
21 provider devices on the network, 2,775 inference jobs completed, 350K+ tokens processed in closed beta. Android app just went live on Google Play.
Works well for code completions, chat, longer workflows — anything where 5–20s latency is acceptable.
$5 free credits on signup, no card needed.
API: https://api.nataris.ai/v1
Docs: https://api.nataris.ai/docs
Provider app (earn by running models on your Android): https://play.google.com/store/apps/details?id=ai.nataris.app