A .NET 10 CLI for managing vector database collections — create embeddings, query by similarity, and migrate data between vector stores.
Built for teams that need to validate semantic search, RAG pipelines, or AI retrieval workflows before committing to a full product build.
```
Text / Documents ──► vecctl ──► InMemory or Qdrant
                                (embed, search,
                                 migrate, verify)
```
Teams exploring AI-powered search, knowledge retrieval, or RAG workflows face a cold-start problem: you can't evaluate whether semantic search works for your data without first building an embedding pipeline, a vector store integration, a search interface, and a migration path. That's weeks of backend work just to answer one question — does this retrieval approach actually work?
vecctl compresses that validation cycle into minutes. Embed text, search by meaning, migrate from a local prototype to Qdrant, and verify system health — all from the command line, with zero infrastructure required to start.
```console
$ vecctl collection create docs --dimension 1536 --metric Cosine
Collection 'docs' created.

$ vecctl embed docs --text "The quick brown fox jumps over the lazy dog"
Stored record 'a3f2c1d0-...' in 'docs'.

$ vecctl search docs --query "fast animal leaping" --top 3
| ID           | Score  | Text                                        |
|--------------+--------+---------------------------------------------|
| a3f2c1d0-... | 0.8741 | The quick brown fox jumps over the lazy dog |

$ vecctl collection list
| Name | Dimension | Metric | Vectors |
|------+-----------+--------+---------|
| docs | 1536      | Cosine | 1       |
```
```mermaid
graph TD
    CLI["Vecctl.Cli<br/>(Commands, Formatters, Program.cs)"]
    APP["Vecctl.Application<br/>(CollectionService, EmbeddingService,<br/>SearchService, MigrationService)"]
    INFRA["Vecctl.Infrastructure<br/>(QdrantVectorStore, InMemoryVectorStore,<br/>OpenAiEmbeddingProvider, MockEmbeddingProvider,<br/>FileConfigStore)"]
    DOM["Vecctl.Domain<br/>(IVectorStore, IEmbeddingProvider, IConfigStore,<br/>Models, Exceptions, Constants)"]
    CLI --> APP
    CLI --> INFRA
    APP --> DOM
    INFRA --> DOM
```
Layer rules: Domain has zero NuGet dependencies — pure C# interfaces, records, and enums. Application depends only on Domain. Infrastructure implements Domain interfaces and owns all external I/O. CLI is the composition root.
Design patterns in use: Command pattern for CLI actions, factory pattern for store/provider selection, adapter pattern for Qdrant and OpenAI integrations, and interface-first design throughout for testability and extensibility.
- **Full embedding workflow** — Embed single text strings or batch-process entire files line by line. Uses OpenAI's `text-embedding-3-small` for real semantic embeddings, or a deterministic mock provider for zero-cost local testing.
- **Similarity search** — Query collections by meaning with configurable top-k and minimum similarity threshold. Results render as tables or JSON.
- **Prototype-to-production migration** — Move vector collections from InMemory to Qdrant in a single command. Validates the workflow before you commit to persistent infrastructure.
- **Health verification** — Check vector store connectivity and embedding provider status before demos or test runs.
- **135 unit tests** — Full suite runs without any external dependencies. No API key, no Qdrant instance required.
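The mock provider and similarity search described above can be sketched as follows. This is an illustrative Python stand-in, not the project's C# implementation; the function names, the hash-seeded mock scheme, and the brute-force scan are assumptions about how such a provider and store typically work.

```python
import hashlib
import math
import random


def mock_embed(text: str, dimension: int = 8) -> list[float]:
    """Deterministic pseudo-embedding: the same text always yields the same vector.

    Seeds a PRNG from a hash of the text, so no API key or network call is needed.
    The vectors are NOT semantically meaningful (a known limitation of mocks).
    """
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(dimension)]


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def search(store: dict[str, list[float]], query: str,
           top: int = 3, threshold: float = 0.0) -> list[tuple[str, float]]:
    """O(n) linear scan: score every vector, filter by threshold, keep top-k."""
    q = mock_embed(query)
    scored = [(doc_id, cosine(q, vec)) for doc_id, vec in store.items()]
    scored = [(doc_id, s) for doc_id, s in scored if s >= threshold]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top]
```

The determinism is what makes the full test suite runnable offline: identical inputs always produce identical vectors, so assertions on scores are stable across runs.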
- **Semantic search validation** — A team wants to know if AI search works over their help-center content, product docs, or internal knowledge base. vecctl lets them embed real text, run natural-language queries, and evaluate retrieval quality — all before writing application code.
- **RAG pipeline prototyping** — An engineering team building a retrieval-augmented generation system needs to test the ingestion and retrieval stages independently. vecctl handles both with a clean command interface.
- **Vector store migration testing** — Start with InMemory for fast iteration, then migrate to Qdrant when the workflow is proven. The migration command handles the transfer and the status command verifies health on both ends.
- **Stakeholder proof-of-concept** — Demonstrate working semantic search to stakeholders in under five minutes using the mock provider and a sample corpus, then swap in OpenAI for a production-grade demo.
```bash
dotnet tool install --global vecctl
vecctl init
```

Or build from source:

```bash
git clone https://github.com/viktor-veresh-dev/vecctl.git
cd vecctl
dotnet build
dotnet run --project src/Vecctl.Cli -- init
```

Or run with Docker Compose (Qdrant included):

```bash
docker compose up qdrant -d
vecctl init
docker compose run vecctl collection list
```

The init wizard prompts for store type (InMemory or Qdrant), embedding provider (mock or openai), and model configuration. Config is saved to `~/.vecctl/config.json`.
```bash
vecctl init                                                      # configure
vecctl collection create docs --dimension 1536 --metric Cosine   # create collection
vecctl embed docs --text "Password reset emails expire after 15 minutes."
vecctl search docs --query "How long is the reset link valid?" --top 3
vecctl migrate docs docs-qdrant --to Qdrant --target-url http://localhost:6333
vecctl status                                                    # verify health
```

That flow tells the core story: prototype locally → prove retrieval works → migrate to persistent storage → verify.
| Command | Description |
|---|---|
| `vecctl init` | Interactive configuration wizard |
| `vecctl collection create <name>` | Create a collection with dimension and metric |
| `vecctl collection list` | List all collections |
| `vecctl collection info <name>` | Inspect collection stats |
| `vecctl collection delete <name>` | Delete a collection |
| `vecctl embed <collection>` | Embed text (`--text`) or file (`--file`, one line per document) |
| `vecctl search <collection>` | Similarity search (`--query`, `--top`, `--threshold`) |
| `vecctl migrate <source> <target>` | Copy vectors between stores (`--to Qdrant\|InMemory`) |
| `vecctl status` | Health check for store and embedding provider |

Global options: `--output table\|json` and `--verbose`.
| Component | Options | Notes |
|---|---|---|
| Vector store | `InMemory`, `Qdrant` | InMemory for prototyping, Qdrant for persistence |
| Embedding provider | `openai`, `mock` | Mock generates deterministic vectors — no API key needed |
Adding a custom vector store means implementing IVectorStore (9 methods), adding a value to the VectorStoreType enum, and registering it in VectorStoreFactory. No other changes required — all services consume the interface.
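The extension path above can be sketched in miniature. This is a Python stand-in for the C# types, not the actual interfaces: the method names, signatures, and the `Redis` backend are hypothetical (the real `IVectorStore` has 9 methods, only two are mimicked here).

```python
from abc import ABC, abstractmethod
from enum import Enum


class VectorStoreType(Enum):
    IN_MEMORY = "InMemory"
    QDRANT = "Qdrant"
    REDIS = "Redis"  # hypothetical new backend being added


class VectorStore(ABC):
    """Stand-in for IVectorStore; the real interface defines 9 methods."""

    @abstractmethod
    def upsert(self, collection: str, record_id: str, vector: list[float]) -> None: ...

    @abstractmethod
    def search(self, collection: str, query: list[float], top: int) -> list[tuple[str, float]]: ...


class RedisVectorStore(VectorStore):
    """Hypothetical adapter: implement every interface method against the backend."""

    def __init__(self) -> None:
        self._data: dict[str, dict[str, list[float]]] = {}

    def upsert(self, collection: str, record_id: str, vector: list[float]) -> None:
        self._data.setdefault(collection, {})[record_id] = vector

    def search(self, collection: str, query: list[float], top: int) -> list[tuple[str, float]]:
        return []  # a real adapter would delegate scoring to the backend


# Factory: map the enum value to a constructor. Services only ever see VectorStore,
# so registering the new type here is the final step.
_FACTORIES = {VectorStoreType.REDIS: RedisVectorStore}


def create_store(kind: VectorStoreType) -> VectorStore:
    return _FACTORIES[kind]()
```

Because every service consumes the abstract type, nothing outside the enum and the factory registration needs to change — which is the property the paragraph above claims for the C# design.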
Config file: `~/.vecctl/config.json`

| Field | Values | Description |
|---|---|---|
| `StoreType` | `Qdrant`, `InMemory` | Vector store backend |
| `StoreConnectionString` | URL | Qdrant base URL (ignored for InMemory) |
| `EmbeddingProvider` | `openai`, `mock` | Mock requires no API key |
| `EmbeddingApiKey` | string | OpenAI secret key (ignored for mock) |
| `EmbeddingModel` | string | OpenAI model name |
| `DefaultDimension` | integer | Must match the model's output dimension |
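A plausible `~/.vecctl/config.json` combining the fields above (the values are illustrative examples drawn from elsewhere in this README, not documented defaults; the API key is a placeholder):

```json
{
  "StoreType": "Qdrant",
  "StoreConnectionString": "http://localhost:6333",
  "EmbeddingProvider": "openai",
  "EmbeddingApiKey": "sk-your-key-here",
  "EmbeddingModel": "text-embedding-3-small",
  "DefaultDimension": 1536
}
```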
```console
$ dotnet test

Passed! — Failed: 0, Passed: 135, Skipped: 0
```
The entire suite runs against InMemoryVectorStore and MockEmbeddingProvider — no external services, no API keys, no Docker required.
| Status | Item |
|---|---|
| Known | InMemory store uses O(n) linear scan — no ANN indexing |
| Known | Qdrant adapter does not support API key authentication |
| Known | Migration does not preserve source collection settings |
| Known | Batch embedding is sequential — no parallelism |
| Known | Mock embeddings are deterministic but not semantically meaningful |
| Planned | Qdrant API key authentication |
| Planned | Live integration tests against Dockerized Qdrant |
| Planned | Parallel batch embedding |
| Planned | Structured Serilog logging |
| Planned | Richer ingestion sources beyond plain text files |
| Planned | Progress bars for batch operations |
.NET 10 · C# · Qdrant · OpenAI Embeddings API · Docker · xUnit
Copyright 2026 Viktor Veresh
This product is part of the AI Ops Platform project. https://github.com/viktor-veresh-dev
Licensed under the Apache License, Version 2.0.