ocstack is an experimental AI-powered assistant designed to simplify and automate common tasks in OpenStack environments running on OpenShift. This tool serves as a proof of concept (PoC) to explore the capabilities of local large language models (LLMs) through the Ollama framework, using intelligent agents to interact with complex cloud-native platforms.
Managing OpenStack on Kubernetes (specifically OpenShift) often involves a steep learning curve, complex CLI interactions, and context switching across multiple tools. ocstack leverages LLM-powered agents to:
- Provide conversational interaction with your OpenStack environment
- Guide users through operational tasks (e.g., deployment checks, status reports)
- Reduce cognitive load for new users or operators
- Enable experimentation with local LLMs in secure, air-gapped environments
By integrating LLM agents into your cloud workflows, ocstack demonstrates the potential of intelligent automation in infrastructure management—where agents can understand context, make suggestions, and execute commands with minimal human input.
Unlike traditional scripts or hardcoded automation, agents can:
- Interpret natural language and convert it into actionable commands
- Adapt to varying environments and scenarios
- Offer explanations or troubleshooting steps when something goes wrong
In this PoC, agents act as a bridge between human operators and the OpenStack/OpenShift ecosystem—empowering users to get help, ask questions, or automate repetitive tasks in real-time.
Demo: https://asciinema.org/a/722296
Once set up, you can interact with your OpenStack deployment using natural language prompts. Ask for node status, deployment logs, or even guidance on common workflows—all through a conversational interface.
Note: This project is in active development and subject to significant changes as new capabilities are tested and added.
Install the necessary tools and prepare your OpenShift (CRC) environment:
```bash
$ cd ~
$ git clone https://github.com/openstack-k8s-operators/install_yamls.git
$ cd install_yamls/devsetup
$ PULL_SECRET=~/pull-secret CPUS=6 MEMORY=20480 make download_tools crc
```

Follow the install_yamls documentation to deploy an OpenStack environment on OpenShift.
You can now run ocstack using a local Ollama server for offline LLM support, instead of relying on remote APIs:
```bash
curl -fsSL https://ollama.com/install.sh | sh
ollama pull qwen3:latest
ollama serve &
```

Alternatively, depending on the environment, start it via systemd and follow the logs with journalctl:

```bash
systemctl start ollama
journalctl -u ollama -f
```

This starts a local REST API compatible with OpenAI's v1/chat/completions at
http://127.0.0.1:11434.
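Before starting ocstack, you can sanity-check the endpoint directly. Here is a minimal Go sketch, assuming Ollama is serving the `qwen3:latest` model pulled above; the request body follows the standard v1/chat/completions schema:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Minimal v1/chat/completions request body.
	body, _ := json.Marshal(map[string]any{
		"model": "qwen3:latest",
		"messages": []map[string]string{
			{"role": "user", "content": "Say hello in one word."},
		},
	})

	resp, err := http.Post("http://127.0.0.1:11434/v1/chat/completions",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Print the raw JSON response; a real client would decode it.
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```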
Once the OpenStack on OpenShift environment is ready and Ollama is serving a model,
start the ocstack assistant:

```bash
$ export KUBECONFIG=$HOME/.crc/machines/crc/kubeconfig; make build && make run
```

Note: You can point to any OpenShift environment by updating the
`KUBECONFIG` path to your desired cluster configuration.
OCStack executes tools locally by default and can also connect to MCP (Model Context Protocol) endpoints, as described in the MCP sections below.
The tool system is designed to be extensible, allowing you to create specialized tools for specific tasks. Currently, a basic set of tools is provided, but you can easily add custom tools to enhance functionality.
To add a new tool, follow these three steps:
First, create a JSON definition file in the `tools/local` directory. Use
descriptive names that reflect the tool's purpose. For example, for
OpenStack-specific tools, create `tools/local/openstack.json`:

```json
[
{
"type": "function",
"function": {
"name": "get_endpoint_list",
"description": "Get the OpenStack endpoint list",
"parameters": {
"type": "object",
"properties": {
"namespace": {
"type": "string",
"description": "The namespace where the OpenStack client is deployed"
}
},
"required": ["namespace"]
}
}
}
]
```

Tool definitions from custom files are automatically merged with the default
`tools.json`, making them available to the LLM.
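To picture that merge, here is a minimal loader sketch in Go. The file paths (`tools/tools.json`, `tools/local/*.json`) and the `Tool` struct are assumptions for illustration; ocstack's actual loader may differ:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// Tool mirrors one entry of the JSON tool schema shown above.
type Tool struct {
	Type     string          `json:"type"`
	Function json.RawMessage `json:"function"`
}

// loadTools reads the default tools.json and appends every
// definition found under tools/local/*.json (assumed paths).
func loadTools() ([]Tool, error) {
	var all []Tool

	paths, _ := filepath.Glob("tools/local/*.json")
	for _, p := range append([]string{"tools/tools.json"}, paths...) {
		data, err := os.ReadFile(p)
		if err != nil {
			return nil, err
		}
		var batch []Tool
		if err := json.Unmarshal(data, &batch); err != nil {
			return nil, fmt.Errorf("%s: %w", p, err)
		}
		all = append(all, batch...)
	}
	return all, nil
}

func main() {
	tools, err := loadTools()
	if err != nil {
		panic(err)
	}
	fmt.Printf("loaded %d tool definitions\n", len(tools))
}
```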
Next, add your tool's implementation to the `tools/utils.go` module. This is
where you define the actual functions that will be executed when the LLM calls
your tool; see the sketch below.
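As an illustration, a hypothetical implementation for the `get_endpoint_list` tool defined above might shell out to `oc`. The function name, pod name, and command here are assumptions, not ocstack's actual code:

```go
package tools

import (
	"fmt"
	"os/exec"
)

// GetEndpointList is a hypothetical implementation backing the
// get_endpoint_list tool definition shown earlier. It runs
// `openstack endpoint list` inside the openstackclient pod of
// the given namespace via the oc CLI.
func GetEndpointList(namespace string) (string, error) {
	cmd := exec.Command("oc", "-n", namespace, "rsh", "openstackclient",
		"openstack", "endpoint", "list")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("endpoint list failed: %w: %s", err, out)
	}
	return string(out), nil
}
```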
Finally, update the `GenerateChat` function to handle calls to your new tool.
This connects the LLM's tool invocation to your implementation, as in the
sketch below.
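A hedged sketch of that wiring, assuming the hypothetical `GetEndpointList` above; ocstack's actual `GenerateChat` plumbing may look different:

```go
package tools

import "fmt"

// handleToolCall routes a tool invocation returned by the LLM to its
// Go implementation. In ocstack this dispatch lives inside
// GenerateChat; the shape shown here is illustrative.
func handleToolCall(name string, args map[string]any) (string, error) {
	switch name {
	case "get_endpoint_list":
		ns, _ := args["namespace"].(string)
		return GetEndpointList(ns) // hypothetical helper from the sketch above
	default:
		return "", fmt.Errorf("unknown tool: %s", name)
	}
}
```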
- Default tools: Defined in the base `tools.json` file
- Custom tools: Organized by category in `tools/local/` (e.g., `openstack.json`, `kubernetes.json`)
- Implementation: All tool logic resides in `tools/utils.go`
- Integration: Tool handlers are registered in the `GenerateChat` function
ocstack includes basic support for models served via ramalama.ai, which provides a local runtime for LLMs using llama.cpp-compatible APIs. This allows the assistant to run fully offline and self-hosted, which is ideal for development and experimentation.
Note: While chat interaction works, function/tool calling is not yet supported with the
`LLAMACPP` provider in ocstack.
Install and start a model using the instructions provided by ramalama.ai:

```bash
ramalama serve llama3
```

This will expose an API compatible with OpenAI's v1/chat/completions,
typically on http://localhost:8080.
Set the environment variable so ocstack can locate your local ramalama
server:
```bash
export LLAMA_HOST=http://localhost:8080
```

In your `main.go`, update the provider selection to use `LLAMACPP`:

```diff
- client, err := llm.GetProvider(llm.OLLAMAPROVIDER)
+ client, err := llm.GetProvider(llm.LLAMACPP)
```

Note: Editing the source is still required because this simple PoC does not yet have a CLI flag for selecting the provider.
```bash
$ export KUBECONFIG=$HOME/.crc/machines/crc/kubeconfig; make build && make run
```

OCStack provides convenient Makefile targets for building, running, and managing the MCP server:

- `make build` - Build the ocstack binary
- `make run` - Run ocstack (requires build first)
- `make clean` - Clean build artifacts
- `make test` - Run tests
- `make fmt` - Format Go code
- `make lint` - Run linters (requires golangci-lint)

- `make mcp-server` - Start the OpenStack MCP server (includes dependency installation)
- `make mcp-server-deps` - Install MCP server dependencies only
- `make mcp-server-stop` - Stop the running MCP server
```bash
# Start MCP server in one terminal
make mcp-server

# In another terminal, build and run ocstack
export KUBECONFIG=$(pwd)/kubeconfig
make build && make run

# Connect to MCP and start using tools
Q :> /mcp connect http http://localhost:8080/mcp
Q :> What is the deployed OpenStack version in the 'openstack' namespace?
```

OCStack supports both local tools and MCP tools with a hybrid approach where MCP tools take priority when available.
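That priority order is easy to sketch. The interface and function below are hypothetical, meant only to show the fallback behavior, not ocstack's real types:

```go
package chat

import "fmt"

// ToolRunner abstracts anything that can execute a named tool,
// whether a connected MCP endpoint or the local tool set.
type ToolRunner interface {
	Has(name string) bool
	Run(name string, args map[string]any) (string, error)
}

// hybridRun prefers the MCP endpoint when it is connected and
// exposes the requested tool, falling back to local execution.
func hybridRun(mcp, local ToolRunner, name string, args map[string]any) (string, error) {
	if mcp != nil && mcp.Has(name) {
		return mcp.Run(name, args)
	}
	if local.Has(name) {
		return local.Run(name, args)
	}
	return "", fmt.Errorf("no tool named %q", name)
}
```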
OCStack includes a complete OpenStack MCP server example in `examples/openstack-mcp-server/`.
```bash
# Start the MCP server (requires Python 3.9+)
make mcp-server

# In another terminal, start ocstack
export KUBECONFIG=$(pwd)/kubeconfig  # Set your OpenShift config
make build && make run

# Connect to the MCP server
Q :> /mcp connect http http://localhost:8080/mcp

# List available tools
Q :> /mcp tools

# Use OpenStack tools via MCP
Q :> What is the deployed OpenStack version?
Q :> Check the status of Nova service
```

If you prefer manual setup:
```bash
# Navigate to the MCP server directory
cd examples/openstack-mcp-server

# Create virtual environment and install dependencies
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

# Start the server
python server.py
```

The MCP server provides these OpenStack management tools:
| Tool | Description | Parameters |
|---|---|---|
| `hello` | Test function | `name` (string) |
| `oc` | Run OpenShift CLI commands | `command` (string) |
| `get_openstack_control_plane` | Get control plane status | `namespace` (optional) |
| `check_openstack_svc` | Check service status | `service` (required), `namespace` (optional) |
| `needs_minor_update` | Check if update needed | `namespace` (optional) |
| `get_deployed_version` | Get current version | `namespace` (optional) |
| `get_available_version` | Get available version | `namespace` (optional) |
Note: The local tools include one additional tool (`trigger_minor_update`) that's only available locally.
- `/mcp connect http http://localhost:8080/mcp` - Connect to HTTP MCP server
- `/mcp disconnect` - Disconnect and fall back to local tools
- `/mcp tools` - List all available tools (MCP + local)
The MCP server can be configured via environment variables:
```bash
export KUBECONFIG=/path/to/your/kubeconfig   # Required for OpenStack tools
export MCP_HOST=localhost                    # Server host (default: localhost)
export MCP_PORT=8080                         # Server port (default: 8080)
export DEFAULT_NAMESPACE=openstack           # Default namespace (default: openstack)
```

- "client not connected": Ensure the MCP server is running (`make mcp-server`)
- Connection timeouts: Check if the server is accessible at http://localhost:8080/health
- Tool execution hangs: Verify KUBECONFIG is set and the OpenShift cluster is accessible
- Tool timeouts: Check network connectivity to the OpenShift cluster
- Permission errors: Verify your OpenShift user has proper RBAC permissions