t0tl/macorchestrator
Mac Xcode Server Bootstrap

This repo provisions a fresh Apple silicon Mac for Xcode-based server work:

  • pinned Xcode installation via xcodes
  • iOS runtime download via xcodebuild -downloadPlatform iOS
  • local build tooling for simulator-targeted app builds
  • optional local simulator creation and app install/launch on GUI-capable hosts

Remote desktop, VNC, noVNC, SSH tunnel, background agents, and fleet orchestration are out of scope.

Layout

  • bin/bootstrap-host: one-time host bootstrap for Homebrew, tool install, optional user creation, and filesystem prep
  • bin/bootstrap-user: one-time user bootstrap for Xcode install, selection, licensing, first launch, and runtime download
  • bin/hostctl: local CLI for status, build, and optional simulator commands
  • src/macsimworker: local controller, config loader, build logic, and simulator helpers
  • config/worker.env.example: configuration template

Supported Modes

  • Headless build server:
    • run bootstrap-host
    • run bootstrap-user
    • use hostctl build
  • GUI-capable local simulator host:
    • same bootstrap flow
    • optionally configure simulator settings in worker.env
    • use hostctl recreate-sim, install, launch, and reset

Bootstrap Flow

  1. Clone this repo to a stable path on the Mac.

  2. Copy config/worker.env.example to config/worker.env and fill in the values you need.

  3. If you want the bootstrap to create a dedicated local user, set both SERVICE_USER and SERVICE_USER_PASSWORD.

  4. Run the host bootstrap as your normal login user:

    ./bin/bootstrap-host
  5. Run the user bootstrap as the chosen local user:

    ./bin/bootstrap-user

    If SERVICE_USER is set, the Unix user actually running this command must be that user; setting SERVICE_USER=... in the environment does not switch accounts. If xcodes fails with a DecodingError mentioning salt, you can bypass xcodes login entirely by setting XCODE_XIP_PATH to a manually downloaded Xcode .xip, or XCODE_APP_PATH to an already installed Xcode app bundle.

  6. Use the local CLI as needed:

    ./bin/hostctl status
    ./bin/hostctl build
    ./bin/hostctl recreate-sim
    ./bin/hostctl install
    ./bin/hostctl launch
    ./bin/hostctl reset
    ./bin/hostctl logs

Secrets And Non-Repo Inputs

  • SERVICE_USER_PASSWORD: optional; only needed if bootstrap-host should create a local user
  • XCODES_USERNAME / XCODES_PASSWORD: optional; xcodes can also prompt interactively

Environment variables override matching keys from config/worker.env, so secrets can stay out of the repo.
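The precedence rule can be sketched as a small loader: file values are read first, then any matching variable already set in the process environment wins. This is a simplified illustration, not the repo's actual config loader:

```python
import os

def load_worker_env(path):
    """Parse simple KEY=value lines, letting process env override file values."""
    config = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            config[key.strip()] = value.strip()
    # Environment variables take precedence over file values.
    for key in list(config):
        if key in os.environ:
            config[key] = os.environ[key]
    return config
```

With this rule, a secret such as XCODES_PASSWORD can live only in the shell environment while non-sensitive defaults stay in the file.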

Configuration Model

The config contract is staged:

  • bootstrap-host only needs machine/bootstrap values such as WORKER_ROOT, optional SERVICE_USER, and optional SERVICE_USER_PASSWORD. It prompts for sudo only when a machine-level step requires it.
  • if Apple’s Command Line Tools are older than the target Xcode line, bootstrap-host tries to install the matching CLT update from Software Update before installing xcodes.
  • if FileVault is still enabled, bootstrap-host warns and continues. That is fine for local Xcode provisioning, but unattended reboots still require FileVault to be disabled.
  • bootstrap-user only needs Xcode install values such as XCODE_VERSION and XCODES_DIRECTORY
  • hostctl build requires build-path values such as REPO_PATH, XCODE_WORKSPACE or XCODE_PROJECT, XCODE_SCHEME, and CONFIGURATION
  • hostctl recreate-sim, install, launch, and reset require IOS_RUNTIME_VERSION, SIM_DEVICE_TYPE, and SIM_NAME
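As an illustration of the staged contract, a minimal config/worker.env for a headless build server might look like this (all paths, the scheme name, and version numbers are placeholders, not defaults shipped by the repo):

```shell
# Machine/bootstrap values (bootstrap-host)
WORKER_ROOT=/opt/worker

# Xcode install values (bootstrap-user)
XCODE_VERSION=16.2
XCODES_DIRECTORY=/Applications

# Build-path values (hostctl build)
REPO_PATH=/opt/worker/checkouts/myapp
XCODE_WORKSPACE=MyApp.xcworkspace
XCODE_SCHEME=MyApp
CONFIGURATION=Debug
```

A GUI-capable simulator host would additionally set IOS_RUNTIME_VERSION, SIM_DEVICE_TYPE, and SIM_NAME.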

Verification

Run the unit tests:

python3 -m unittest discover -s tests -v

Run shell syntax checks:

bash -n bin/bootstrap-host bin/bootstrap-user bin/hostctl

Orchestration Flow

flowchart TB
  U["Developer or external caller"] -->|"POST /api/v1/jobs"| C["Controller API (/api/v1/jobs)"]
  C -->|"validate + store"| DB[("Orchestrator SQLite")]
  C -->|"returns job_id"| U

  loop["Scheduler loop"]
  loop -->|"sync inventory"| SP["Scheduler.sync_providers"]
  SP --> MS["MacStadiumProvider.sync_nodes"]
  SP --> SW["ScalewayProvider.sync_nodes"]
  SP --> LP["LocalProvider.sync_nodes"]
  MS -->|"upsert nodes / provider_instances"| DB
  SW -->|"upsert nodes / provider_instances"| DB
  LP -->|"upsert nodes from config/manual"| DB

  loop -->|"pick READY + matching node"| SD["Scheduler.dispatch_job"]
  SD -->|"create"| LC[("Lease")]
  SD -->|"assign payload + lease_id"| NA["Node Agent / Host / macsimworker"]
  NA -->|"build/sim/interactive run"| Job["Worker job execution"]
  Job -->|"progress / completion"| HC["Agent heartbeat + status"]
  HC -->|"POST /api/v1/leases/{id}/heartbeat"| C
  C -->|"store heartbeat/update"| DB

  C -->|"on terminal status"| LC2["Lease complete/fail"]
  LC2 -->|"release node / scale-down candidate checks"| DB

  U -->|"GET /api/v1/jobs/{job_id}"| C
  C -->|"job + lease + node status"| U
  U -->|"GET /api/v1/nodes"| C
  C -->|"pool and provider state"| U

The diagram shows:

  • Controller API is the source of truth for jobs and lease state.
  • Providers are adapters that normalize node inventory into the same node table.
  • Nodes remain authoritative for execution, while the scheduler arbitrates placement and lifecycle.
  • Lease heartbeat is the mechanism used for liveness and recovery.
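For example, a caller follows the two job endpoints from the diagram: POST /api/v1/jobs to submit and GET /api/v1/jobs/{job_id} to poll. This sketch only assembles the requests; the controller URL is a placeholder and the payload fields are illustrative, not the controller's actual job schema:

```python
import json
import urllib.request

BASE = "http://controller.example:8080"  # placeholder controller URL

def submit_job_request(payload):
    """Build the POST /api/v1/jobs request; the caller opens it once a controller is reachable."""
    return urllib.request.Request(
        f"{BASE}/api/v1/jobs",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def job_status_url(job_id):
    """URL for GET /api/v1/jobs/{job_id}, which returns job + lease + node status."""
    return f"{BASE}/api/v1/jobs/{job_id}"
```

A caller would open the request with urllib.request.urlopen and read the returned job_id before polling the status URL.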

Scaleway Provider

Scaleway support is configured with a provider block of type scaleway in config/orchestrator.example.json.

Minimal managed-mode fields:

  • api_base_url (default: https://api.scaleway.com)
  • api_token (or SCW_SECRET_KEY)
  • project_id (or SCW_PROJECT_ID, fallback SCW_DEFAULT_PROJECT_ID)
  • zone (required for managed mode)
  • instance_prefix
  • commercial_type
  • image_id
  • boot_volume_gb
  • auto_bootstrap_agent (set false if you prefer static user_data)
  • bootstrap_template (default bin/scaleway-agent-bootstrap.sh)
  • bootstrap_repo_dir, bootstrap_repo_url, bootstrap_repo_ref
  • controller_url and agent_secret
  • autoscaling bounds: min_nodes, max_nodes, scale_up_step, scale_down_step, scale_threshold_up, scale_threshold_down, scale_cooldown_seconds, scale_idle_grace_seconds
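The autoscaling bounds interact roughly as follows: pending work per ready node is compared against the up/down thresholds, and any step is clamped to min_nodes/max_nodes. This is an illustrative decision function using the config key names above, not the provider's actual scaling code; cooldown and idle-grace handling are deliberately omitted:

```python
def scale_decision(pending_jobs, ready_nodes, cfg):
    """Return nodes to add (positive) or remove (negative), clamped to the pool bounds."""
    pressure = pending_jobs / max(ready_nodes, 1)  # pending work per ready node
    if pressure > cfg["scale_threshold_up"] and ready_nodes < cfg["max_nodes"]:
        return min(cfg["scale_up_step"], cfg["max_nodes"] - ready_nodes)
    if pressure < cfg["scale_threshold_down"] and ready_nodes > cfg["min_nodes"]:
        return -min(cfg["scale_down_step"], ready_nodes - cfg["min_nodes"])
    return 0
```

In the real scheduler, scale_cooldown_seconds and scale_idle_grace_seconds would additionally suppress decisions made too soon after the previous one or against recently idle nodes.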

Example bootstrap variables for Scaleway-managed macOS images:

export MAC_NODE_PROVIDER=scaleway
export MAC_NODE_PROVIDER_INSTANCE_ID=<orchestrator-node-id>
export MAC_NODE_ID=<same-orchestrator-node-id>
export MAC_NODE_REGION=fr-par
export MAC_NODE_ZONE=fr-par-1
export MAC_NODE_SECRET=<shared agent secret>
export ORCHESTRATOR_URL=http://<controller-ip>:8080

When auto_bootstrap_agent is true, provisioning injects a complete node-agent.generated.json and starts the agent automatically via the template. The bootstrap script:

  • writes generated node config,
  • sets required orchestration env values in the worker env file,
  • installs a launchd agent (PLIST_LABEL) so the orchestrator agent starts automatically on boot/restart, and
  • keeps a background fallback via nohup if launchd setup is unavailable.

The start command used is:

./bin/mac-orchestrator agent --config <generated-node-config>

If you ever need to start manually, use the same PYTHONPATH export to avoid module resolution issues:

export PYTHONPATH=/path/to/orchestrator/src
cd /path/to/orchestrator
./bin/mac-orchestrator agent --config config/node-agent.generated.json

To make this work, your Scaleway image must expose port 9001, and your controller URL should be reachable from the instance.

The node registration payload sent by each instance includes MAC_NODE_PROVIDER_INSTANCE_ID, MAC_NODE_REGION, and MAC_NODE_ZONE.
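Assembling that payload from the MAC_NODE_* variables exported above can be sketched like this (an illustration only; the agent's actual payload field names may differ):

```python
import os

def registration_payload():
    """Collect the node identity fields from the MAC_NODE_* environment variables."""
    return {
        "provider_instance_id": os.environ["MAC_NODE_PROVIDER_INSTANCE_ID"],
        "region": os.environ["MAC_NODE_REGION"],
        "zone": os.environ["MAC_NODE_ZONE"],
    }
```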

Important: controller_url must be reachable from the mac instance itself. A default of 127.0.0.1 is only valid for on-host node/agent setups.

Important: the controller reads these values from its process environment, not from plain VAR=value lines in a sourced file. If you keep them in config/orchestrator.env, export them before launch:

set -a
source config/orchestrator.env
set +a
./bin/mac-orchestrator controller --config config/orchestrator.example.json

Scaleway credentials used by the provider are:

  • SCW_SECRET_KEY (required). SCW_TOKEN is accepted as a compatibility alias.
  • SCW_DEFAULT_PROJECT_ID (or SCW_PROJECT_ID) (required)
  • SCW_DEFAULT_ZONE (optional if zone is set in provider config)

SCW_ACCESS_KEY and SCW_API_TOKEN are not used by this provider.
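The credential fallbacks above can be sketched as a resolution function (illustrative; the provider's real resolution may differ in details such as error types):

```python
import os

def resolve_scaleway_credentials(provider_cfg):
    """Resolve secret key, project id, and zone with the documented fallbacks."""
    secret = os.environ.get("SCW_SECRET_KEY") or os.environ.get("SCW_TOKEN")
    if not secret:
        raise RuntimeError("SCW_SECRET_KEY (or SCW_TOKEN) is required")
    project = os.environ.get("SCW_PROJECT_ID") or os.environ.get("SCW_DEFAULT_PROJECT_ID")
    if not project:
        raise RuntimeError("SCW_PROJECT_ID (or SCW_DEFAULT_PROJECT_ID) is required")
    # Zone from the provider config wins; SCW_DEFAULT_ZONE is only a fallback.
    zone = provider_cfg.get("zone") or os.environ.get("SCW_DEFAULT_ZONE")
    return {"secret_key": secret, "project_id": project, "zone": zone}
```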

To discover the live controller node mapping from the CLI, run:

./bin/orchestrator-discovery --output table

It prints controller_ip, server_id (the Scaleway instance ID), and hostname for each registered node.

Cerebras API Chat App

This repository includes a small CLI app that talks to a Cerebras chat-completions API.

Setup

  1. Export your API key:

    export CEREBRAS_API_KEY=<your_key>
  2. Run a one-shot prompt:

    python3 -m cerebras_app.cli "Hello from Cerebras"
  3. Or start interactive mode:

    python3 -m cerebras_app.cli --interactive
  4. Optional configuration:

    • --model (default llama3.1-8b)
    • --api-base (default https://api.cerebras.ai)
    • --endpoint (default /v1/chat/completions)
    • --system-prompt
    • --max-tokens
    • --temperature

The entry point is also available as cerebras-chat if installed as a package.

Type Checking

This project is validated with ty, Astral's Python type checker.

Install options:

python3 -m pip install ty
# OR
uv add --dev ty

Run type checks:

# project root
ty check src/
# or with uv-managed env
uv run ty check

If dependencies are in a virtual environment, activate it before running ty check so type discovery works.
