# mcp-plex

mcp-plex turns your Plex library into a searchable vector database that LLM agents can query. It ingests Plex metadata into Qdrant and exposes search and recommendation tools through the Model Context Protocol.

## Features

- Load Plex libraries into a Qdrant vector database.
- Hybrid dense & sparse vector search for media items.
- Recommend similar media from a reference item.
- GPU-enabled data loading and embeddings.
## Installation

- Install `uv`.
- Sync project dependencies (including dev tools) with:

  ```bash
  uv sync --extra dev
  ```
## Loading Data

Load sample data into Qdrant:

```bash
uv run load-data --sample-dir sample-data
```

Run continuously with a delay between runs:

```bash
uv run load-data --continuous --delay 600
```
The loader exposes CLI flags (and mirrored environment variables) that control how it retries IMDb lookups and how aggressively it backs off when hitting HTTP 429 responses. Use the CLI options during ad-hoc runs, or set the environment variables when deploying via Docker Compose or other orchestrators.
| CLI flag | Environment variable | Default | How it influences retries |
|---|---|---|---|
| `--imdb-cache` | `IMDB_CACHE` | `imdb_cache.json` | Stores successful IMDb responses on disk so repeated runs can skip network requests. Cached hits never enter the retry pipeline. |
| `--imdb-max-retries` | `IMDB_MAX_RETRIES` | `3` | Number of retry attempts when IMDb returns rate-limit responses. The loader makes `max_retries + 1` total attempts before deferring an ID to the retry queue. |
| `--imdb-backoff` | `IMDB_BACKOFF` | `1.0` | Initial delay in seconds before the next retry. The loader doubles the delay after each failed attempt (1s → 2s → 4s …), slowing down repeated bursts. |
| `--imdb-requests-per-window` | `IMDB_REQUESTS_PER_WINDOW` | disabled | Enables a throttle that only allows the configured number of IMDb requests within the window defined below. Leave unset (`None`) to run without throttling. |
| `--imdb-window-seconds` | `IMDB_WINDOW_SECONDS` | `1.0` | Duration of the sliding window used by the throttle. Combined with the option above, it smooths out request spikes when you need to stay under strict quotas. |
| `--imdb-queue` | `IMDB_QUEUE` | `imdb_queue.json` | Path where the retry queue is persisted between runs. IDs that still fail after all retries are appended here and reloaded first on the next execution. |
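The retry semantics in the table above can be sketched roughly as follows. This is an illustration of the documented behavior, not the loader's actual code; `fetch_with_retries` and its parameters are hypothetical names.

```python
import time


def fetch_with_retries(fetch, imdb_id, max_retries=3, backoff=1.0, retry_queue=None):
    """Attempt a lookup up to max_retries + 1 times, doubling the delay
    after each rate-limited attempt; defer the ID to the retry queue if
    every attempt fails. A sketch of the documented semantics only."""
    delay = backoff
    for attempt in range(max_retries + 1):
        result = fetch(imdb_id)
        if result is not None:        # success: cached on disk, skips retries next run
            return result
        if attempt < max_retries:     # retries remain: back off before trying again
            time.sleep(delay)
            delay *= 2                # 1s -> 2s -> 4s ...
    if retry_queue is not None:
        retry_queue.append(imdb_id)   # persisted and reloaded first on the next run
    return None
```

With the defaults (`max_retries=3`, `backoff=1.0`) a persistently rate-limited ID is attempted four times, waiting 1s, 2s, then 4s between attempts, before landing in the queue.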
The defaults balance steady throughput with respect for IMDb's limits. For most home libraries the standard settings work without modification. Persist the retry queue to a custom path whenever you run multiple loader containers or want the queue to live on shared storage (for example, mounting `/data/imdb` in Docker), so unfinished retries survive container rebuilds or host migrations.
```bash
uv run load-data \
  --imdb-max-retries 5 \
  --imdb-backoff 0.5 \
  --imdb-queue /data/imdb/aggressive_queue.json
```
Setting `--imdb-max-retries 5` and `--imdb-backoff 0.5` (or the mirrored `IMDB_MAX_RETRIES` and `IMDB_BACKOFF` variables) halves the initial delay and allows two additional retry attempts before falling back to the queue. With `IMDB_REQUESTS_PER_WINDOW` left unset, the loader does not throttle outbound requests, so it recovers faster from short spikes at the cost of being more likely to trigger hard IMDb rate limits.
```bash
export IMDB_MAX_RETRIES=2
export IMDB_BACKOFF=2.0
export IMDB_REQUESTS_PER_WINDOW=10
export IMDB_WINDOW_SECONDS=60
export IMDB_QUEUE=/data/imdb/conservative_queue.json
uv run load-data
```
These settings reduce the retry count, double the initial delay, and apply a sliding-window throttle that allows only 10 requests per minute. They are useful when sharing an API key across several workers. Persisting the queue to a shared path ensures pending IDs continue to retry gradually even if the container stops.
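A sliding-window throttle like the one configured by `IMDB_REQUESTS_PER_WINDOW` and `IMDB_WINDOW_SECONDS` can be sketched as below. The class and method names are hypothetical; this is not the loader's implementation.

```python
import time
from collections import deque


class SlidingWindowThrottle:
    """Allow at most `limit` requests within a sliding `window` of seconds.

    Illustrative sketch of sliding-window throttling; the injectable
    `clock` makes the behavior easy to test deterministically."""

    def __init__(self, limit: int, window: float, clock=time.monotonic):
        self.limit = limit
        self.window = window
        self.clock = clock
        self.timestamps: deque[float] = deque()

    def wait_time(self) -> float:
        """Seconds to wait before the next request is allowed (0.0 if allowed now)."""
        now = self.clock()
        # Drop timestamps that have aged out of the window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.limit:
            self.timestamps.append(now)   # request admitted; record it
            return 0.0
        # Window is full: wait until the oldest request ages out.
        return self.window - (now - self.timestamps[0])
```

With `limit=10` and `window=60.0` this mirrors the conservative example above: the eleventh request inside any 60-second span is delayed rather than sent.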
## MCP Server

Start the FastMCP server over stdio (default):

```bash
uv run mcp-server
```

Expose the server over SSE on port 8000:

```bash
uv run mcp-server --transport sse --bind 0.0.0.0 --port 8000 --mount /mcp
```

Expose the server over streamable HTTP when your MCP client expects a plain streamed response body instead of SSE events. This mode always requires explicit bind, port, and mount values, provided on the command line or through environment variables:

```bash
uv run mcp-server --transport streamable-http --bind 0.0.0.0 --port 8800 --mount /mcp
```
Set `MCP_TRANSPORT=streamable-http` along with `MCP_BIND`, `MCP_PORT`, and `MCP_MOUNT` to configure the same behavior via environment variables. Use SSE for browser-based connectors or any client that natively supports Server-Sent Events and wants automatic reconnection. Choose streamable HTTP for clients that expect a single streaming HTTP response (for example, CLI tools or proxies that terminate SSE).
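The environment-variable equivalent of the streamable HTTP example above might look like this (values mirror that example):

```bash
export MCP_TRANSPORT=streamable-http
export MCP_BIND=0.0.0.0
export MCP_PORT=8800
export MCP_MOUNT=/mcp
uv run mcp-server
```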
Provide `--recommend-user <username>` (or set `PLEX_RECOMMEND_USER`) when the server should hide items already watched by a specific Plex account from recommendations. Pair the flag with `--recommend-history-limit <count>` / `PLEX_RECOMMEND_HISTORY_LIMIT` to constrain how many watch-history entries the server inspects (defaults to 500), so large libraries avoid excessive Plex API calls.
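For example, hiding one account's watched items while raising the history cap (the username is hypothetical):

```bash
uv run mcp-server --recommend-user alice --recommend-history-limit 1000
```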
The runtime also reads the `MCP_TRANSPORT`, `MCP_HOST`, `MCP_PORT`, and `MCP_MOUNT` environment variables. When set, those values override any conflicting CLI flags so Docker Compose or other orchestrators can control the exposed MCP endpoint without editing the container command.
Additional environment variables tune how the server searches and serves results:

- `CACHE_SIZE` sets the maximum number of distinct query responses cached in memory. Increase it when the deployment handles many simultaneous users or long-running sessions that repeat complex queries.
- `USE_RERANKER` toggles cross-encoder reranking. Set it to `0` to disable the reranker entirely when you want lower latency or do not have GPU capacity.
- `PLEX_PLAYER_ALIASES` provides alternate player names so commands stay consistent with custom Plex clients. Supply the value as JSON or as Python tuple syntax to align with the server CLI parser. Alias values may reference Plex display names as well as machine or client identifiers, and the server will resolve the appropriate player in either direction.
- `PLEX_RECOMMEND_USER` names a Plex user whose watch history should be excluded from similarity recommendations. The server caches that user's rating keys and filters them from results so the caller sees only unseen titles.
- `PLEX_RECOMMEND_HISTORY_LIMIT` caps how many watch-history records the server fetches for the configured user when filtering recommendations. Increase it if results still include previously watched items; decrease it to reduce Plex API load.
Examples:

```bash
# Disable cross-encoder reranking to prioritize latency.
USE_RERANKER=0 uv run mcp-server

# Expand the in-memory cache to store up to 2,000 results.
CACHE_SIZE=2000 uv run mcp-server

# Map model-friendly player names to Plex devices by display name.
PLEX_PLAYER_ALIASES='{"living_room":"Plex for Roku"}' uv run mcp-server

# Alias keys can also point at Plex machine or client identifiers.
PLEX_PLAYER_ALIASES='{"movie_room":"6B4C9A5E-E333-4DB3-A8E7-49C8F5933EB1"}' \
  uv run mcp-server

# Tuple syntax is also accepted for aliases.
PLEX_PLAYER_ALIASES="[(\"living_room\", \"Plex for Roku\")]" uv run mcp-server
```
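Accepting either JSON or Python tuple syntax for `PLEX_PLAYER_ALIASES` could be parsed roughly like this. The function name is hypothetical; this is a sketch of the idea, not the server's actual parser.

```python
import ast
import json


def parse_player_aliases(raw: str) -> dict[str, str]:
    """Parse an alias mapping supplied as JSON or Python tuple syntax."""
    try:
        # JSON form, e.g. '{"living_room": "Plex for Roku"}'
        value = json.loads(raw)
    except json.JSONDecodeError:
        # Python literal form, e.g. '[("living_room", "Plex for Roku")]'
        value = ast.literal_eval(raw)
    return dict(value)
```

Both example values from the block above parse to the same `{"living_room": "Plex for Roku"}` mapping.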
## Embedding Models

Both the loader and server default to `BAAI/bge-small-en-v1.5` for dense embeddings and `Qdrant/bm42-all-minilm-l6-v2-attentions` for sparse embeddings. Override these by setting the `DENSE_MODEL`/`SPARSE_MODEL` environment variables or using the `--dense-model`/`--sparse-model` CLI options:

```bash
uv run load-data --dense-model my-dense --sparse-model my-sparse
uv run mcp-server --dense-model my-dense --sparse-model my-sparse
```
Cross-encoder reranking defaults to `cross-encoder/ms-marco-MiniLM-L-6-v2`. Set `RERANKER_MODEL` or pass `--reranker-model` to point at a different model:

```bash
uv run mcp-server --reranker-model sentence-transformers/ms-marco-mini
```
## Docker

A Dockerfile builds a GPU-enabled image based on `nvidia/cuda:12.4.1-cudnn-devel-ubuntu22.04`, using `uv` for dependency management. Build and run the loader:

```bash
docker build -t mcp-plex .
docker run --rm --gpus all mcp-plex --sample-dir /data
```

Use the `--continuous` and `--delay` flags with `docker run` to keep the loader running in a loop.
Docker builds now install dependencies using `docker/pyproject.deps.toml`, a manifest that mirrors the application's runtime requirements. Keep the `[project]` metadata, including the `version`, identical to the root `pyproject.toml` so `uv sync` can reuse the shared `uv.lock` without validation errors. The `Dockerfile` copies that manifest and `uv.lock` first, runs `uv sync --no-dev --frozen --manifest-path pyproject.deps.toml`, and only then copies the rest of the source tree. This keeps the heavy dependency layer cached even when the application version changes.
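That layering could look roughly like the following Dockerfile fragment (an abbreviated sketch based on the description above, not the project's actual `Dockerfile`):

```dockerfile
# Copy only the dependency manifest and lockfile first so this layer
# is cached independently of application code changes.
COPY docker/pyproject.deps.toml pyproject.deps.toml
COPY uv.lock uv.lock
RUN uv sync --no-dev --frozen --manifest-path pyproject.deps.toml

# Copying the rest of the source afterwards keeps the heavy dependency
# layer cached when only application code or the version changes.
COPY . .
```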
When cutting a release:

- Update dependencies as needed and refresh `uv.lock` with `uv lock`.
- Build the image twice to confirm the dependency layer cache hit:

  ```bash
  docker build -t mcp-plex:release-test-1 .
  docker build -t mcp-plex:release-test-2 .
  ```

  The second build should reuse the `uv sync` layer as long as `docker/pyproject.deps.toml` and `uv.lock` are unchanged.
- Tag and push the final release image once satisfied with the cache behavior.
## Docker Compose

The included `docker-compose.yml` launches both Qdrant and the MCP server.

- Set `PLEX_URL`, `PLEX_TOKEN`, and `TMDB_API_KEY` in your environment (or a `.env` file).
- Start the services:

  ```bash
  docker compose up --build
  ```

- (Optional) Load sample data into Qdrant:

  ```bash
  docker compose run --rm loader load-data --sample-dir sample-data
  ```

The server will connect to the `qdrant` service at `http://qdrant:6333` and expose an SSE endpoint at `http://localhost:8000/mcp`.
Connection settings can be provided via environment variables:

- `QDRANT_URL` – full URL or SQLite path.
- `QDRANT_HOST`/`QDRANT_PORT` – HTTP host and port.
- `QDRANT_GRPC_PORT` – gRPC port.
- `QDRANT_HTTPS` – set to `1` to enable HTTPS.
- `QDRANT_PREFER_GRPC` – set to `1` to prefer gRPC.
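For example, preferring gRPC against a local Qdrant instance (values are illustrative; 6334 is Qdrant's default gRPC port):

```bash
export QDRANT_HOST=localhost
export QDRANT_GRPC_PORT=6334
export QDRANT_PREFER_GRPC=1
uv run mcp-server
```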
## Development

Run linting and tests through `uv`:

```bash
uv run ruff check .
uv run pytest
```
## Contributing

Please read `AGENTS.md` for commit message conventions and PR requirements.

## License

Distributed under the MIT License. See `LICENSE` for more information.