mono

Monorepo for HausOps

Local Dev Environment

We use Dapr to manage service-to-service communication. To run multiple HausOps services locally and allow them to communicate with each other, you have to run them through Dapr on Podman or Docker.

This guide assumes Podman, but Docker can be used instead; if you choose Docker, adjust the commands accordingly.

Getting started

  1. Install dapr-cli.

  2. Install Podman. For example, on macOS, you can install it with brew install podman.

  3. Start Podman:

    podman machine init
    podman machine start
  4. Initialize Dapr components:

    dapr init --container-runtime podman
  5. Run all services locally as configured in hausops/mono/dapr.yaml using Dapr Multi-App Run (a sketch of the file's general shape follows this list):

    # cd hausops/mono
    dapr run -f .
    
    # to see services running via dapr
    dapr list
  6. To work on a service (temporary measure; let's figure out something better):

    • Comment it out from dapr.yaml.
    • Run the service using dapr run:
    # Example
    
    # HTTP
    # cd apps/dashboard-api
    dapr run --app-id dashboard-api -- make run
    
    # gRPC
    # cd services/user-svc
    dapr run --app-id user-svc --app-protocol grpc -- make dev

    This way, you don't have to start/stop all services running via Multi-App Run when you need to start/stop the service under development.
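
For reference, a Dapr Multi-App Run file generally looks like the sketch below. This is only an illustration of the format, using the app IDs and commands from the steps above; the actual configuration lives in dapr.yaml at the repository root and may differ.

    # Illustrative sketch of a Dapr Multi-App Run file (not the repo's actual dapr.yaml)
    version: 1
    apps:
      - appID: dashboard-api
        appDirPath: ./apps/dashboard-api
        command: ["make", "run"]
      - appID: user-svc
        appDirPath: ./services/user-svc
        appProtocol: grpc
        command: ["make", "dev"]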

Clean up

To clean up your local environment:

# This command is the reverse of `dapr init --container-runtime podman`.
# It will remove Dapr containers from Podman as well.
dapr uninstall --container-runtime podman --all

podman machine stop
podman machine rm

Tracing

Tracing is available out of the box with Dapr. Go to http://localhost:9411 to see traces via Zipkin.

Architecture

The local development environment uses Dapr to manage service-to-service communication. Services communicate with their local Dapr sidecars, which handle service discovery, network resilience, observability, access lists, and other service-mesh functionality.

The services and their sidecars run as processes directly on the host machine, so they communicate over the host machine's network. Similarly, the Dapr sidecars reach the infrastructure components running in containers through the container ports that Podman publishes on the host.

When a service runs, its Dapr sidecar binds to a random high port chosen by Dapr, so there is no sidecar port to manage manually. The port is exposed to the service through the DAPR_GRPC_PORT environment variable, which the service uses to communicate with other services. All HausOps service invocations are done through gRPC.
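
As a concrete illustration, assuming a Go service using the standard grpc-go library, an outbound call to another HausOps service might look like the sketch below. The target app ID user-svc and the generated client mentioned in the comments are placeholders, not code from this repository.

    package main

    import (
        "context"
        "fmt"
        "os"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        "google.golang.org/grpc/metadata"
    )

    func main() {
        // Dapr injects the sidecar's gRPC port; the service never hard-codes it.
        daprPort := os.Getenv("DAPR_GRPC_PORT")

        // Dial the local sidecar rather than the target service directly.
        conn, err := grpc.Dial(
            fmt.Sprintf("localhost:%s", daprPort),
            grpc.WithTransportCredentials(insecure.NewCredentials()),
        )
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // The sidecar proxies the gRPC call to the app named in the dapr-app-id metadata.
        ctx := metadata.AppendToOutgoingContext(context.Background(), "dapr-app-id", "user-svc")

        // A generated gRPC client for the target service would then be used on conn, e.g.:
        //   client := userpb.NewUserServiceClient(conn)
        //   resp, err := client.GetUser(ctx, &userpb.GetUserRequest{Id: "..."})
        _ = ctx
    }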

Each service should be started with the --app-port option. The app port can be any available port; because services reach each other through their Dapr sidecars, no other service ever needs to know its value. Dapr exposes the chosen value to the service as the APP_PORT environment variable, so there is no need to manually keep the app's listening port and the --app-port value in sync. If the --app-port option is not specified, APP_PORT will be unset.
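
For example, assuming a Go service, startup code might bind its gRPC listener to APP_PORT roughly as sketched below; the fallback port and the registration call in the comments are illustrative only.

    package main

    import (
        "fmt"
        "log"
        "net"
        "os"

        "google.golang.org/grpc"
    )

    func main() {
        // Dapr sets APP_PORT to the value passed via --app-port, so the app
        // and the sidecar agree on the port without manual coordination.
        port := os.Getenv("APP_PORT")
        if port == "" {
            port = "50051" // illustrative fallback for running without Dapr
        }

        lis, err := net.Listen("tcp", fmt.Sprintf(":%s", port))
        if err != nil {
            log.Fatalf("listen: %v", err)
        }

        srv := grpc.NewServer()
        // Register the service's gRPC handlers here, e.g.:
        //   userpb.RegisterUserServiceServer(srv, &userServer{})

        log.Printf("listening on %s", lis.Addr())
        if err := srv.Serve(lis); err != nil {
            log.Fatalf("serve: %v", err)
        }
    }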

This architecture provides service discovery and portability, allowing services to run consistently across environments, from local development to production. The primary value is consistency, ensuring that differences are isolated in the infrastructure layer rather than leaking into the applications.

Design notes

We chose to run services as processes directly on the local machine rather than in a local Kubernetes cluster for two primary reasons:

  1. Developing services in a local Kubernetes cluster requires a solution like Okteto to sync files from the local machine to a pod in the cluster, which adds complexity to the setup. Running services as host processes keeps the workflow similar to running an individual service on localhost:[port] directly on the machine.
  2. Running services in a local Kubernetes cluster is more resource-intensive (CPU, and therefore battery), even with a lightweight solution like Rancher Desktop (using k3s). In our experiments, it added roughly 20-30% single-core CPU load over the baseline on an Apple M1.

However, this approach has some downsides:

  1. Reproducibility: the services rely on the setup of the host machine's environment, which could differ across developers.
  2. Services are accessible via their APP_PORT and are all exposed directly on the host machine's network.

One possible alternative would be to run the services as containers in a private Podman network. However, this approach would add complexity: a dev container image to choose and maintain (which might not be a bad thing), a different network topology, and volume mounts from the host machine into the containers to manage.
