Blazingly-fast, rock-solid, local application development with Kubernetes!
Explore the docs » · Try it yourself · Report Bug · Request Feature
A short manual on where and how to start. You can find detailed information here (installation) and here (usage).
We offer platform-specific installation options:
Linux/macOS via script/cURL
curl -sSL https://raw.githubusercontent.com/gefyrahq/gefyra/main/install.sh | sh -
macOS via Homebrew
brew tap gefyrahq/gefyra
brew install gefyra
Windows (Manual)
Download the latest binary for Windows here.
Working with Docker Desktop? We offer an extension to operate Gefyra through a UI on Docker Desktop.
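Whichever installation route you choose, a quick sanity check confirms the CLI is available (the exact output varies by release):
# print the installed Gefyra version
gefyra version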
Make sure Gefyra is installed on your cluster (gefyra up). Some details of the installation depend on your Kubernetes platform.
Check out our docs for more details.
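If you want to verify the cluster-side components after gefyra up, a plain kubectl query is enough. This sketch assumes a default installation into a namespace called gefyra; adjust it if your setup differs:
# list Gefyra's cluster-side pods (namespace name assumes a default install)
kubectl -n gefyra get pods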
Bridge a local container into an existing cluster. For a detailed guide please check out this article.
- Run a locally available image with Gefyra:
gefyra run -i <image_name> -N <container_name> -n default
- Create a bridge:
gefyra bridge -N <container_name> -n <k8s_namespace> --target deployment/<k8s_deployment>/<k8s_deployment_container>
Explanation of the placeholders:
- container_name: the name of the container you created in the previous step
- k8s_namespace: the namespace your target workload runs in
- k8s_deployment: the name of your target deployment
- k8s_deployment_container: the name of the container within k8s_deployment
- bridge_name: the name for the bridge being created
All available bridge flags are listed here.
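As a quick illustration, here are both commands with the placeholders filled in for a hypothetical my-api deployment (all names below are made up for this example):
# run the local image and attach it to the target namespace
gefyra run -i my-api:dev -N my-api-local -n team-a
# redirect traffic from the in-cluster container to the locally running one
gefyra bridge -N my-api-local -n team-a --target deployment/my-api/api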
Gefyra gives Kubernetes ("cloud-native") developers a completely new way of writing and testing their applications.
Gone are the days of custom docker-compose setups, Vagrant boxes, custom scripts, or other workarounds for developing (micro)services for Kubernetes.
Gefyra allows you to:
- run services locally on a developer machine
- operate feature branches in a production-like Kubernetes environment with all adjacent services
- write code in the IDE you already love, be fast, be confident
- leverage all the neat development features, such as debuggers, code hot-reloading, and environment variable overrides
- run high-level integration tests against all dependent services
- keep peace of mind when pushing new code to the integration environment
Gefyra builds on top of the following popular open-source technologies:
Docker is currently used to manage the local container-based development setup, including the host, networking, and container management procedures.
Wireguard is used to establish the connection tunnel between the two ends. It securely encrypts the UDP-based traffic and makes it possible to create a site-to-site network for Gefyra. That way, the development setup becomes part of the cluster and containers running locally are actually able to reach cluster-based resources, such as databases, other (micro)services and so on.
CoreDNS provides local DNS functionality. It allows resolving resources running within the Kubernetes cluster.
Nginx is used for all kinds of proxying and reverse-proxying traffic, including the interception of containers already running in the cluster.
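To see what this combination means in practice: once a container has been started locally with gefyra run (as in the quickstart below), cluster DNS names resolve from inside it. A rough sketch, reusing the mypyserver and hello-nginx names from the example below and assuming the workload lives in the default namespace:
# the short service name resolves thanks to CoreDNS and the Wireguard tunnel
docker exec -it mypyserver wget -O- hello-nginx
# the fully qualified form works as well
docker exec -it mypyserver wget -O- hello-nginx.default.svc.cluster.local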
You can easily try Gefyra yourself by following this small example.
- Follow the installation instructions for your preferred platform.
- Create a local Kubernetes cluster with k3d:
k3d < v5:
k3d cluster create mycluster --agents 1 -p 8080:80@agent[0] -p 31820:31820/UDP@agent[0]
k3d >= v5:
k3d cluster create mycluster --agents 1 -p 8080:80@agent:0 -p 31820:31820/UDP@agent:0
This creates a Kubernetes cluster that binds ports 8080 and 31820 to localhost. The kubectl context is immediately set to this cluster.
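To double-check, two standard kubectl commands (nothing Gefyra-specific) show the freshly created context and node:
# confirm kubectl now points at the new cluster
kubectl config current-context
# the k3d node(s) should show up as Ready
kubectl get nodes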
- Apply some workload, for example from the testing directory of this repo:
kubectl apply -f testing/workloads/hello.yaml
Check out this workload running at: http://hello.127.0.0.1.nip.io:8080/
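Before moving on, you can confirm that the demo workload (the hello-nginxdemo deployment and the hello-nginx service from the manifest) is up:
# check the demo deployment and service in the default namespace
kubectl get deployments,services -n default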
- Set up Gefyra with gefyra up
- Run a local Docker image with Gefyra in order to make it part of the cluster.
a) Build your Docker image with a local tag, for example from the testing directory:
cd testing/images/ && docker build -f Dockerfile.local . -t pyserver
b) Execute Gefyra's run command:
gefyra run -i pyserver -N mypyserver -n default
c) Exec into the running container and look around. You will find that the container runs within your Kubernetes cluster.
docker exec -it mypyserver bash
wget -O- hello-nginx will print out the website of the cluster service hello-nginx from within the cluster.
- Create a bridge in order to intercept the traffic to the cluster application with the one running locally:
gefyra bridge -N mypyserver -n default --target deployment/hello-nginxdemo/hello-nginx --port 80:8000
Check out that the locally running server now comes up at: http://hello.127.0.0.1.nip.io:8080/
- List all running bridges:
gefyra list --bridges
- Unbridge the local container and reset the cluster to its original state:
gefyra unbridge -N mypybridge
Check out the initial response from: http://hello.127.0.0.1.nip.io:8080/
- Remove Gefyra's components from the cluster with gefyra down
- Remove the locally running Kubernetes cluster with k3d cluster delete mycluster
Check out Gefyra's CLI or Guides.
"Gefyra" is the Greek word for "Bridge" and fits nicely with Kubernetes' nautical theme.
Distributed under the Apache License 2.0. See LICENSE for more information.
If you encounter issues, please create a new issue on GitHub or talk to us on the Unikube Slack channel. When reporting a bug please include the following information:
- the Gefyra version or Git commit that you're running (gefyra version),
- a description of the bug and logs from the relevant gefyra command (if applicable),
- steps to reproduce the issue,
- the expected behavior.
If you're reporting a security vulnerability, please follow the process for reporting security issues.
Gefyra is based on well-crafted open source software. Special credits go to the teams of https://www.linuxserver.io/ and https://git.zx2c4.com/wireguard-go/about/. Please be sure to check out their awesome work.
Gefyra was heavily inspired by the free part of Telepresence2.
Doge is excited about that.