A notebook app that you can host on your local machine for daily use. It is built with Rust and supports semantic search natively, making knowledge management much more efficient.
The project is still new. Please feel free to raise issues or contribute.
Reach out to me at https://discord.gg/MXnzmRcDFh
- Support multi-user access. Each user can have their own private workspace.
- Support MCP server. You can use your favorite LLM client and models to perform your work.
- Support semantic search.
- Support importing webpages and searching them semantically. Great for researchers, students, and anyone who needs to read documents.
- Support Vim key mappings. More keyboard shortcuts are yet to come!
- Built with `actix-web` and async tech stacks. Blazingly fast!
- More features are yet to come. Stay tuned!
This is the easiest way to get started. It sets up the Frontend, Backend, Qdrant database, and vLLM embedder service automatically.
- **Prerequisites:**
  - Install Docker Desktop.
  - (Optional) If you have an NVIDIA GPU, ensure you have the NVIDIA Container Toolkit installed for GPU acceleration.
- **Configuration:**
  - Create a `.env` file in the root directory if you need to set environment variables (e.g., `HUGGING_FACE_HUB_TOKEN` for accessing gated models); a sample `.env` is shown after these steps.
  - The default `compose.yaml` is configured to use `vLLM` with GPU support. If you are running on CPU only, you may need to adjust the `embedder` service in `compose.yaml` to use a CPU-compatible image (like `ghcr.io/huggingface/text-embeddings-inference:cpu-1.5`) or configure vLLM for CPU (experimental).
- **Run:** Open a terminal in the project root and run:
```sh
docker compose up --build
```
- The build process might take a few minutes the first time.
- Once running, access the app at http://localhost:3000.
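For reference, a minimal `.env` only needs the variables you actually use; here is a sketch with just the Hugging Face token from the Configuration step (the value is a placeholder):

```sh
# .env — read by docker compose; only needed if you pull gated models
HUGGING_FACE_HUB_TOKEN=hf_xxxxxxxxxxxxxxxxxxxx
```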
The Docker Compose setup uses a dedicated configuration file: `backend/config.docker.json`. This file is mounted into the backend container.
If you need to customize the backend (e.g., to use an external database, change logging levels, or modify embedder settings), you can edit `backend/config.docker.json`.
Note: If you change the service names in `compose.yaml` or run services on different hosts, ensure `base_url` in this config matches your setup.
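After editing the config, restart the backend so it picks up the changes. A minimal sketch, assuming the compose service is named `backend` (check your `compose.yaml` for the actual name):

```sh
# restart only the backend service; the mounted config is re-read on startup
docker compose restart backend
```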
If you prefer to run services individually or on bare metal, follow the steps below.
This project relies on Docker to run itself as a web service. It relies on Qdrant as a vector database. Finally, it needs an embedding service to generate the vectors, which can be vLLM or any OpenAI-Compatible API.
- Have `Docker` installed.
- Have a `Qdrant` instance hosted locally.
- Have access to an OpenAI-Compatible embedding service.
Docker isolates the service from the rest of your computer/server. Even if the service itself messes things up, you will still keep your computer/server clean.
This is why we chose to use Docker to ship this project.
You may refer to Docker's official tutorial for installation: https://docs.docker.com/get-started/get-docker/
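Once installed, you can verify that Docker works with two standard commands:

```sh
docker --version              # print the installed Docker version
docker run --rm hello-world   # run Docker's test image; prints a greeting on success
```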
Qdrant is required for OpenNote to process the notes you put in. Qdrant itself also runs in Docker. If you are on a Linux machine, you can usually boot it up with the following command:
```sh
docker run -p 6333:6333 -p 6334:6334 \
-v "$(pwd)/qdrant_storage:/qdrant/storage:z" \
    qdrant/qdrant
```

This will start Qdrant in Docker and create a data folder in the directory where you started the service. For example, if you are at `/home/some_user`, the data folder will be at `/home/some_user/qdrant_storage`, so choose a suitable location for your data.
For detailed instructions on setting up Qdrant, please check out their official documentation: https://qdrant.tech/documentation/quickstart/
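Once the container is up, you can confirm that Qdrant is reachable through its REST API on port 6333 (`/collections` is part of Qdrant's standard API and returns an empty list on a fresh instance):

```sh
# list existing collections; a fresh instance returns no entries
curl http://localhost:6333/collections
```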
Any provider offering OpenAI-Compatible embedding API services works with this project. In addition, the following providers are supported: openai, cloudflare, cohere, deepinfra, gemini, jina, mistral, mixedbread, nomic, together, voyageai. You may, of course, simply use OpenAI's own endpoints.
You may also host a vLLM instance locally, which is totally free. Below is a script that I use to run a vLLM instance. Notice that it still makes use of Docker:
```sh
#! /bin/sh

# Notes:
# - HUGGING_FACE_HUB_TOKEN may not be required for ungated models.
# - Set HF_ENDPOINT=https://hf-mirror.com if you are in Mainland China.
# - Adjust --gpu-memory-utilization according to your needs.

echo "Removing existing vLLM container..."
docker rm -f vllm

echo "Starting vLLM container..."
docker run -d --runtime nvidia --gpus all \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --name vllm \
    --env "HUGGING_FACE_HUB_TOKEN=<Your huggingface token>" \
    --env "HF_ENDPOINT=https://hf-mirror.com" \
    -p 8000:8000 \
    --ipc=host \
    vllm/vllm-openai:latest \
    --api-key <setup an api key> \
    --model sentence-transformers/all-MiniLM-L6-v2 \
    --dtype=half \
    --gpu-memory-utilization=0.99
```

You may also refer to their official documentation for a more in-depth setup tutorial: https://docs.vllm.ai/en/latest/deployment/docker/
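Once the container is running, you can sanity-check the endpoint with a standard OpenAI-style embeddings request (the route and payload follow the OpenAI API convention; use the key you passed via `--api-key`):

```sh
curl http://localhost:8000/v1/embeddings \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <your api key>" \
  -d '{
        "input": "hello opennote",
        "model": "sentence-transformers/all-MiniLM-L6-v2",
        "encoding_format": "float"
      }'
# for this model the response contains one 384-dimensional embedding vector
```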
In `backend/config.prod.json` (if you don't see it, create it and paste in the JSON below):
```jsonc
{
    "logging": {
        "format": "json",
        "level": "info"
    },
    "server": {
        "host": "0.0.0.0",
        "port": 8080,
        "workers": 4
    },
    "identities_storage": {
        "path": "./data/identities_storage.json"
    },
    "metadata_storage": {
        "path": "./data/metadata_storage.json"
    },
    "backups_storage": {
        "path": "./data/backups_storage.json"
    },
    "database": { // Configure Qdrant
        "index": "notes", // You may just leave it, or put a cooler name here
        "base_url": "http://192.168.0.116:6336", // The gRPC API endpoint of your Qdrant instance
        "api_key": "meilimasterkey" // Ignore this. API keys are not supported yet.
    },
    "embedder": { // Configure the embedding service
        "provider": "", // Leave it empty if you use a base_url. Supported: openai, cloudflare, cohere, deepinfra, gemini, jina, mistral, mixedbread, nomic, together, voyageai
        "base_url": "http://192.168.0.101:8000/v1/embeddings", // The embedding service's API endpoint. Leave it empty if you use a provider.
        "model": "sentence-transformers/all-MiniLM-L6-v2", // The model you want to use. If you use a service provider, like OpenAI, refer to their official documents for available models.
        "vectorization_batch_size": 100, // Increase this number if your OpenNote is too slow.
        "encoding_format": "float", // Leave it as float
        "dimensions": 384, // Refer to your service provider's document for your model's dimensionality.
        "api_key": "" // API key
    }
}
```

Below is a JSON snippet that you can paste into your client to use your OpenNote as an MCP server:
```json
{
    "mcpServers": {
        "tPapVztNjfFjJxJUTXkVH": {
            "name": "opennote",
            "description": "",
            "baseUrl": "http://localhost:8086/mcp",
            "command": "",
            "args": [],
            "env": {},
            "isActive": true,
            "type": "streamableHttp",
            "headers": {
                "Authorization": "Bearer <your-username>"
            }
        }
    }
}
```

More tools are on the way!
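If you want to check that the MCP endpoint is reachable before wiring up a client, you can send a JSON-RPC `initialize` request by hand (the request shape follows the MCP Streamable HTTP spec; the URL and bearer token mirror the client config above):

```sh
curl -s http://localhost:8086/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -H "Authorization: Bearer <your-username>" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"curl","version":"0.0.0"}}}'
# a working server replies with its capabilities and server info
```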
The project comes with a `build_and_deploy.sh` script at the root. You need Docker installed for it to deploy the notebook for you. The script uses Docker multi-stage builds to compile both the Frontend (Flutter) and the Backend (Rust), so you do not need to install the Flutter or Rust toolchains on your host machine.
If you would like to specify a different place to store data, change `DATA_DIR="/data/notes"` in the script to wherever you want.
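Running the script is then a one-liner from the project root (the `chmod` is only needed once, if the file is not already executable):

```sh
chmod +x build_and_deploy.sh   # one-time: make the script executable
./build_and_deploy.sh          # build the images and deploy the notebook
```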
Every issue raised is a push for me to make the project more usable for you and other users. You are more than welcome to raise an issue in the Issues tab of this project.
Any contribution is welcome.
If you would like to add more features or fix bugs for this project, you may need to first be able to compile and test it locally.
Compiling and running the Rust backend is rather simple. The backend is located at `./backend`; just navigate to that directory and run `cargo run` there.
For the Dart frontend, the project uses fvm to manage the Flutter SDK and Dart. You may need to install fvm first. Then just navigate to `./frontend` and run `fvm flutter run -d chrome` to compile and run it.
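For convenience, here are the two dev loops from the paragraphs above in one place (run each in its own terminal, starting from the project root):

```sh
# Terminal 1 — backend (Rust): compile and start the server
cd backend && cargo run

# Terminal 2 — frontend (Flutter via fvm): build and open the web app in Chrome
cd frontend && fvm flutter run -d chrome
```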
This project is released under the MIT License. The purpose of this project is to explore the possibilities of semantic search, AI, and Rust in the notebook use case.
