A gRPC micro-service that returns ranked user and project recommendations using vector similarity search against a Qdrant vector database.
This service is part of the FindMe micro-services architecture. It is a read-only service — it does not write or modify any vectors. It queries the Qdrant collections maintained by the embedding service to find the most semantically similar users or projects for a given query.
The service exposes a single RecommendationService with two RPCs and is called directly by the back-end service when a user requests recommendations.
Project recommendations for a user (`ProjectRecommendation`):

- Retrieves the user's `profile` vector from the `users` Qdrant collection.
- Queries the `projects` collection using that vector against the `description` named vector.
- Filters out projects owned by the requesting user and projects with `status: false`.
- Returns the top 15 results as a map of `project_id → similarity_score`.
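In production this ranking happens inside a single Qdrant query, but the logic can be illustrated client-side. A minimal pure-Python sketch, for illustration only — the toy vectors and project data below are made up, and real similarity search is done by Qdrant, not in application code:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def recommend_projects(user_id, user_vector, projects, limit=15):
    """Rank projects by similarity to the user's profile vector,
    skipping the user's own projects and inactive ones."""
    scored = {
        pid: cosine(user_vector, p["description"])
        for pid, p in projects.items()
        if p["user_id"] != user_id and p["status"]  # owner + availability filters
    }
    # Keep only the top `limit` results, highest score first
    top = sorted(scored.items(), key=lambda kv: kv[1], reverse=True)[:limit]
    return dict(top)

# Toy data: project_id -> {description vector, owner, availability}
projects = {
    "p1": {"description": [1.0, 0.0], "user_id": "u1", "status": True},
    "p2": {"description": [0.9, 0.1], "user_id": "u2", "status": True},
    "p3": {"description": [0.0, 1.0], "user_id": "u2", "status": False},
}
print(recommend_projects("u1", [1.0, 0.0], projects))  # only p2: p1 is owned, p3 inactive
```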
User recommendations for a project (`UserRecommendation`):

- Retrieves the project's `description` vector from the `projects` Qdrant collection, along with its `user_id` payload.
- Queries the `users` collection using that vector against the `profile` named vector.
- Filters out the project owner and users with `status: false` (unavailable).
- Returns the top 15 results as a map of `user_id → similarity_score`.
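Expressed against Qdrant's REST API, the `UserRecommendation` query corresponds roughly to a search request body like the one built below. This is a sketch under assumptions — the service may use the Qdrant Python client rather than raw HTTP, and the example vector is a placeholder:

```python
import json

def user_recommendation_body(project_vector, owner_id, limit=15):
    """Build a Qdrant `/collections/users/points/search` request body:
    search against the `profile` named vector, excluding the project
    owner and users that are unavailable."""
    return {
        "vector": {"name": "profile", "vector": project_vector},
        "filter": {
            "must": [{"key": "status", "match": {"value": True}}],
            "must_not": [{"key": "user_id", "match": {"value": owner_id}}],
        },
        "limit": limit,
        "with_payload": False,
    }

body = user_recommendation_body([0.1, 0.2, 0.3], owner_id="u42")
print(json.dumps(body, indent=2))
```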
```
Backend (Go)
     │ gRPC
     ▼
Recommendation Service (Python)
     │ HTTP
     ▼
Qdrant (reads users + projects collections)
```
| Method | Request | Response | Description |
|---|---|---|---|
| `ProjectRecommendation` | `RecommendationRequest` | `RecommendationResponse` | Returns ranked projects for a given user ID |
| `UserRecommendation` | `RecommendationRequest` | `RecommendationResponse` | Returns ranked users for a given project ID |
`RecommendationRequest`:

```proto
message RecommendationRequest {
  string id = 1; // user_id or project_id
}
```

`RecommendationResponse`:

```proto
message RecommendationResponse {
  bool success = 1;
  map<string, float> res = 2; // id -> similarity score
}
```

gRPC server reflection is enabled, so you can inspect the API with grpcurl or Postman.
Both RPCs apply the following filters at query time to ensure quality results:
| Filter | ProjectRecommendation | UserRecommendation |
|---|---|---|
| Exclude owner | Projects where `user_id == requesting_user_id` are excluded | The project owner (`user_id` from payload) is excluded |
| Availability | Only projects with `status: true` are returned | Only users with `status: true` are returned |
| Limit | Top 15 results | Top 15 results |
| Concern | Technology |
|---|---|
| Language | Python 3.12 |
| gRPC Framework | grpcio 1.76.0 |
| Vector Database | Qdrant (shared with embedding service) |
| Containerization | Docker + Docker Compose |
```
.
├── db/
│   └── db.py            # Qdrant client init and collection setup
├── generated/           # Auto-generated gRPC code (do not edit)
│   ├── rec_pb2.py
│   ├── rec_pb2_grpc.py
│   └── rec_pb2.pyi
├── proto/
│   └── rec.proto        # Protobuf service definition
├── services/
│   └── rec.py           # RecommendationService implementation
├── docker-compose.yml
├── Dockerfile
├── generate.sh          # Proto code generation script
├── main.py              # Server entrypoint
└── requirements.txt
```
- Docker & Docker Compose
- The `findme-shared-network` Docker network must exist
- The Qdrant instance from the embedding service must be running and reachable; this service shares that Qdrant instance and does not run its own
Create a `.env` file in the project root:

```env
QDRANT_HOST=qdrant
QDRANT_PORT=6333
```

`QDRANT_HOST` should point to the same Qdrant instance used by the embedding service.
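Inside the service, these variables are presumably read at startup along these lines (a sketch; the actual handling in `db/db.py` may differ, and the defaults here are assumptions matching the compose setup):

```python
import os

# Defaults assume the docker-compose setup; override via .env / environment
QDRANT_HOST = os.getenv("QDRANT_HOST", "qdrant")
QDRANT_PORT = int(os.getenv("QDRANT_PORT", "6333"))
QDRANT_URL = f"http://{QDRANT_HOST}:{QDRANT_PORT}"
print(QDRANT_URL)
```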
```sh
# Create the shared network (only needed once)
docker network create findme-shared-network

# Start the recommendation service
docker compose up -d --build
```

The gRPC server will be available at `[::]:8050` on the shared network.
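Once up, a quick way to confirm the server is accepting connections from another container on the shared network is a plain TCP check. A hedged sketch — `port_open` is a hypothetical helper, not part of this repo, and the hostname depends on your compose service name:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hostname is an assumption based on a typical compose setup):
# port_open("recommendation-service", 8050)
```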
```sh
pip install -r requirements.txt

# Generate proto files
chmod +x generate.sh && ./generate.sh

# Set env vars and run
QDRANT_HOST=localhost \
QDRANT_PORT=6333 \
python main.py
```

If you modify `proto/rec.proto`:
```sh
chmod +x generate.sh
./generate.sh
```

This regenerates the files in `generated/` and automatically fixes the relative import path in `rec_pb2_grpc.py`.
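The import fix addresses a well-known quirk of `grpc_tools.protoc` output: the generated `rec_pb2_grpc.py` imports `rec_pb2` as a top-level module, which breaks when the files live in a `generated/` package. A sketch of an equivalent post-processing step in Python — the actual `generate.sh` may use `sed` instead:

```python
import re

def fix_relative_import(src: str) -> str:
    """Rewrite `import rec_pb2 as ...` to a package-relative import,
    the usual post-processing step for grpc_tools output."""
    return re.sub(r"^import rec_pb2 as", "from . import rec_pb2 as", src, flags=re.M)

stub = "import rec_pb2 as rec__pb2"
print(fix_relative_import(stub))  # → from . import rec_pb2 as rec__pb2
```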
This service is intentionally lightweight — it has no Qdrant or Ollama of its own. It depends on the Qdrant instance managed by the embedding service being on the same Docker network. Make sure the embedding service is deployed and its Qdrant has data before this service starts receiving requests.
See LICENSE.