This repo provisions a fresh Apple silicon Mac for Xcode-based server work:

- pinned Xcode installation via `xcodes`
- iOS runtime download via `xcodebuild -downloadPlatform iOS`
- local build tooling for simulator-targeted app builds
- optional local simulator creation and app install/launch on GUI-capable hosts

Remote desktop, VNC, noVNC, SSH tunnels, background agents, and fleet orchestration are out of scope.
- `bin/bootstrap-host`: one-time host bootstrap for Homebrew, tool install, optional user creation, and filesystem prep
- `bin/bootstrap-user`: one-time user bootstrap for Xcode install, selection, licensing, first launch, and runtime download
- `bin/hostctl`: local CLI for status, build, and optional simulator commands
- `src/macsimworker`: local controller, config loader, build logic, and simulator helpers
- `config/worker.env.example`: configuration template
- Headless build server:
  - run `bootstrap-host`
  - run `bootstrap-user`
  - use `hostctl build`
- GUI-capable local simulator host:
  - same bootstrap flow
  - optionally configure simulator settings in `worker.env`
  - use `hostctl recreate-sim`, `install`, `launch`, and `reset`
- Clone this repo to a stable path on the Mac.
- Copy `config/worker.env.example` to `config/worker.env` and fill in the values you need.
- If you want the bootstrap to create a dedicated local user, set both `SERVICE_USER` and `SERVICE_USER_PASSWORD`.
- Run the host bootstrap as your normal login user:

  ```
  ./bin/bootstrap-host
  ```

- Run the user bootstrap as the chosen local user:

  ```
  ./bin/bootstrap-user
  ```

  If `SERVICE_USER` is set, this must be the actual Unix user running the command. Setting `SERVICE_USER=...` in the environment does not switch accounts. If `xcodes` still fails with a `DecodingError` mentioning `salt`, you can bypass `xcodes` login entirely by setting `XCODE_XIP_PATH` to a manually downloaded Xcode `.xip`, or `XCODE_APP_PATH` to an already installed Xcode app bundle.
- Use the local CLI as needed:

  ```
  ./bin/hostctl status
  ./bin/hostctl build
  ./bin/hostctl recreate-sim
  ./bin/hostctl install
  ./bin/hostctl launch
  ./bin/hostctl reset
  ./bin/hostctl logs
  ```
- `SERVICE_USER_PASSWORD`: optional; only needed if `bootstrap-host` should create a local user
- `XCODES_USERNAME` / `XCODES_PASSWORD`: optional; `xcodes` can also prompt interactively
Environment variables override matching keys from `config/worker.env`, so secrets can stay out of the repo.
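That precedence can be sketched as follows; `load_worker_env` is a hypothetical helper, not the repo's actual loader in `src/macsimworker`:

```python
# Illustrative sketch: values exported in the process environment win over
# the same keys parsed from config/worker.env.
import os

def load_worker_env(text, environ=os.environ):
    """Parse simple KEY=value lines, letting the environment override."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    # Environment variables override matching keys from the file.
    for key in list(config):
        if key in environ:
            config[key] = environ[key]
    return config

example = "XCODE_VERSION=15.4\nSIM_NAME=worker-sim\n"
print(load_worker_env(example, environ={"XCODE_VERSION": "16.0"}))
# {'XCODE_VERSION': '16.0', 'SIM_NAME': 'worker-sim'}
```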
The config contract is staged:

- `bootstrap-host` only needs machine/bootstrap values such as `WORKER_ROOT`, optional `SERVICE_USER`, and optional `SERVICE_USER_PASSWORD`. It will prompt for `sudo` only when a machine-level step requires it.
  - if Apple's Command Line Tools are older than the target Xcode line, `bootstrap-host` will try to install the matching CLT update from Software Update before it installs `xcodes`
  - if FileVault remains enabled, `bootstrap-host` now warns and continues. That is fine for local Xcode provisioning, but unattended reboot behavior still requires FileVault to be disabled.
- `bootstrap-user` only needs Xcode install values such as `XCODE_VERSION` and `XCODES_DIRECTORY`
- `hostctl build` requires build-path values such as `REPO_PATH`, `XCODE_WORKSPACE` or `XCODE_PROJECT`, `XCODE_SCHEME`, and `CONFIGURATION`
- `hostctl recreate-sim`, `install`, `launch`, and `reset` require `IOS_RUNTIME_VERSION`, `SIM_DEVICE_TYPE`, and `SIM_NAME`
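A minimal sketch of that staged check (the stage labels and helper name are illustrative, and the `XCODE_WORKSPACE`-or-`XCODE_PROJECT` either/or is omitted for brevity):

```python
# Hypothetical per-stage required-key table; only the key names come from
# the config contract above, the validation helper is illustrative.
REQUIRED_KEYS = {
    "bootstrap-host": ["WORKER_ROOT"],
    "bootstrap-user": ["XCODE_VERSION", "XCODES_DIRECTORY"],
    "hostctl build": ["REPO_PATH", "XCODE_SCHEME", "CONFIGURATION"],
    "hostctl sim": ["IOS_RUNTIME_VERSION", "SIM_DEVICE_TYPE", "SIM_NAME"],
}

def missing_keys(stage, config):
    """Return the required keys a given stage still lacks."""
    return [k for k in REQUIRED_KEYS[stage] if not config.get(k)]

print(missing_keys("hostctl build", {"REPO_PATH": "/srv/app"}))
# ['XCODE_SCHEME', 'CONFIGURATION']
```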
Run the unit tests:

```
python3 -m unittest discover -s tests -v
```

Run shell syntax checks:

```
bash -n bin/bootstrap-host bin/bootstrap-user bin/hostctl
```

```mermaid
flowchart TB
U["Developer or external caller"] -->|"POST /api/v1/jobs"| C["Controller API (/api/v1/jobs)"]
C -->|validate + store| DB[(Orchestrator SQLite)]
C -->|returns job_id| U
loop["Scheduler loop"]
loop -->|sync inventory| SP["Scheduler.sync_providers"]
SP --> MS["MacStadiumProvider.sync_nodes"]
SP --> SW["ScalewayProvider.sync_nodes"]
SP --> LP["LocalProvider.sync_nodes"]
MS -->|upsert nodes / provider_instances| DB
SW -->|upsert nodes / provider_instances| DB
LP -->|upsert nodes from config/manual| DB
loop -->|pick READY + matching node| SD["Scheduler.dispatch_job"]
SD -->|create| LC[("Lease")]
SD -->|assign payload + lease_id| NA["Node Agent / Host / macsimworker"]
NA -->|build/sim/interactive run| Job["Worker job execution"]
Job -->|progress / completion| HC["Agent heartbeat + status"]
HC -->|"POST /api/v1/leases/{id}/heartbeat"| C
C -->|store heartbeat/update| DB
C -->|on terminal status| LC2["Lease complete/fail"]
LC2 -->|release node / scale-down candidate checks| DB
U -->|"GET /api/v1/jobs/{job_id}"| C
C -->|job + lease + node status| U
U -->|"GET /api/v1/nodes"| C
C -->|pool and provider state| U
```
The diagram shows:
- Controller API is the source of truth for jobs and lease state.
- Providers are adapters that normalize node inventory into the same node table.
- Nodes remain authoritative for execution, while the scheduler arbitrates placement and lifecycle.
- Lease heartbeat is the mechanism used for liveness and recovery.
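Heartbeat-based liveness typically reduces to a timeout comparison. A sketch under assumed names and an assumed timeout value (the controller's actual policy may differ):

```python
# Illustrative liveness rule: a lease is considered live while its most
# recent heartbeat is within the timeout window. The timeout value and
# function name are assumptions, not taken from the orchestrator code.
import time

HEARTBEAT_TIMEOUT_S = 90  # assumed value

def lease_is_live(last_heartbeat_ts, now=None, timeout=HEARTBEAT_TIMEOUT_S):
    """Return True while the lease's last heartbeat is recent enough."""
    now = time.time() if now is None else now
    return (now - last_heartbeat_ts) <= timeout

# A lease heartbeated 30s ago is live; one silent for 5 minutes is not.
print(lease_is_live(1000.0, now=1030.0))  # True
print(lease_is_live(1000.0, now=1300.0))  # False
```

On timeout the controller would mark the lease failed and release the node for rescheduling, which matches the recovery role the list above assigns to heartbeats.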
Scaleway support is configured with a provider block of type `scaleway` in `config/orchestrator.example.json`.
Minimal managed-mode fields:

- `api_base_url` (default: `https://api.scaleway.com`)
- `api_token` (or `SCW_SECRET_KEY`)
- `project_id` (or `SCW_PROJECT_ID`, fallback `SCW_DEFAULT_PROJECT_ID`)
- `zone` (required for managed mode)
- `instance_prefix`
- `commercial_type`
- `image_id`
- `boot_volume_gb`
- `auto_bootstrap_agent` (set false if you prefer static `user_data`)
- `bootstrap_template` (default `bin/scaleway-agent-bootstrap.sh`)
- `bootstrap_repo_dir`, `bootstrap_repo_url`, `bootstrap_repo_ref`
- `controller_url` and `agent_secret`
- autoscaling bounds: `min_nodes`, `max_nodes`, `scale_up_step`, `scale_down_step`, `scale_threshold_up`, `scale_threshold_down`, `scale_cooldown_seconds`, `scale_idle_grace_seconds`
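The autoscaling bounds above can be combined into a simple scale step. A hedged Python sketch, assuming a load ratio of pending jobs to ready nodes; the decision rule itself is an assumption, only the knob names come from the config:

```python
# Illustrative autoscaler step: scale up when load exceeds the upper
# threshold, scale down when it falls below the lower one, always clamped
# to [min_nodes, max_nodes] and gated by the cooldown.
def scale_decision(pending, ready, cfg, cooldown_elapsed):
    """Return how many nodes to add (positive) or remove (negative)."""
    if not cooldown_elapsed:
        return 0
    load = pending / max(ready, 1)
    if load > cfg["scale_threshold_up"]:
        return min(cfg["scale_up_step"], cfg["max_nodes"] - ready)
    if load < cfg["scale_threshold_down"] and ready > cfg["min_nodes"]:
        return -min(cfg["scale_down_step"], ready - cfg["min_nodes"])
    return 0

cfg = {"min_nodes": 1, "max_nodes": 5, "scale_up_step": 2,
       "scale_down_step": 1, "scale_threshold_up": 1.5,
       "scale_threshold_down": 0.25}
print(scale_decision(pending=6, ready=2, cfg=cfg, cooldown_elapsed=True))   # 2
print(scale_decision(pending=0, ready=3, cfg=cfg, cooldown_elapsed=True))   # -1
```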
Example bootstrap variables for Scaleway-managed macOS images:

```
export MAC_NODE_PROVIDER=scaleway
export MAC_NODE_PROVIDER_INSTANCE_ID=<orchestrator-node-id>
export MAC_NODE_ID=<same-orchestrator-node-id>
export MAC_NODE_REGION=fr-par
export MAC_NODE_ZONE=fr-par-1
export MAC_NODE_SECRET=<shared agent secret>
export ORCHESTRATOR_URL=http://<controller-ip>:8080
```

When `auto_bootstrap_agent` is true, provisioning injects a complete `node-agent.generated.json` and starts
the agent automatically via the template. The bootstrap script:

- writes the generated node config,
- sets required orchestration env values in the worker env file,
- installs a launchd agent (`PLIST_LABEL`) so the orchestrator agent starts automatically on boot/restart, and
- keeps a background fallback via `nohup` if launchd setup is unavailable.
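The launchd step might look like the following `plistlib` sketch; the label, paths, and key choices are placeholder assumptions, not the repo's actual `PLIST_LABEL` value or template contents:

```python
# Hypothetical launchd agent definition for the orchestrator agent.
# RunAtLoad starts it at login/boot; KeepAlive restarts it if it exits.
import plistlib

def agent_plist(label, program_args):
    return {
        "Label": label,
        "ProgramArguments": program_args,
        "RunAtLoad": True,
        "KeepAlive": True,
    }

plist = agent_plist(
    "com.example.mac-orchestrator.agent",  # placeholder label
    ["/opt/orchestrator/bin/mac-orchestrator", "agent",
     "--config", "/opt/orchestrator/config/node-agent.generated.json"],
)
print(plistlib.dumps(plist).decode())
```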
The start command used is:

```
./bin/mac-orchestrator agent --config <generated-node-config>
```

If you ever need to start manually, use the same `PYTHONPATH` export to avoid module resolution issues:

```
export PYTHONPATH=/path/to/orchestrator/src
cd /path/to/orchestrator
./bin/mac-orchestrator agent --config config/node-agent.generated.json
```

To make this work, your Scaleway image must expose port 9001, and your controller URL should be reachable from the instance.
The node registration payload sent by each instance carries `MAC_NODE_PROVIDER_INSTANCE_ID`, `MAC_NODE_REGION`, and `MAC_NODE_ZONE`.
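As an illustration, that payload could be assembled like this; the JSON field names are assumptions, only the `MAC_NODE_*` variable names come from the docs:

```python
# Hypothetical payload builder: maps the exported MAC_NODE_* variables
# onto registration fields. Field names are illustrative.
def registration_payload(env):
    return {
        "provider_instance_id": env["MAC_NODE_PROVIDER_INSTANCE_ID"],
        "region": env["MAC_NODE_REGION"],
        "zone": env["MAC_NODE_ZONE"],
    }

env = {"MAC_NODE_PROVIDER_INSTANCE_ID": "node-123",
       "MAC_NODE_REGION": "fr-par", "MAC_NODE_ZONE": "fr-par-1"}
print(registration_payload(env))
```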
Important: `controller_url` must be reachable from the Mac instance itself. A default of `127.0.0.1` is only valid for on-host node/agent setups.
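A quick way to catch a loopback `controller_url` before handing it to a remote instance (an illustrative helper, not part of the repo):

```python
# Flag controller URLs that only resolve on the controller host itself.
from urllib.parse import urlparse

def is_loopback(url):
    host = urlparse(url).hostname
    return host in ("127.0.0.1", "localhost", "::1")

print(is_loopback("http://127.0.0.1:8080"))      # True: on-host only
print(is_loopback("http://203.0.113.10:8080"))   # False: fleet-reachable
```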
Important: the controller reads these values from the process environment, not only from plain `VAR=value` lines. If you keep them in `config/orchestrator.env`, export them before launch:

```
set -a
source config/orchestrator.env
set +a
./bin/mac-orchestrator controller --config config/orchestrator.example.json
```

Scaleway credentials used by the provider are:

- `SCW_SECRET_KEY` (required). `SCW_TOKEN` is accepted as a compatibility alias.
- `SCW_DEFAULT_PROJECT_ID` (or `SCW_PROJECT_ID`) (required)
- `SCW_DEFAULT_ZONE` (optional if `zone` is set in provider config)
`SCW_ACCESS_KEY` and `SCW_API_TOKEN` are not used by this provider.
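The fallback order above can be sketched as follows (the helper name is hypothetical):

```python
# Illustrative credential resolution mirroring the documented precedence:
# provider config first, then the SCW_* environment variables.
def resolve_scaleway_creds(provider_cfg, env):
    token = (provider_cfg.get("api_token")
             or env.get("SCW_SECRET_KEY")
             or env.get("SCW_TOKEN"))        # compatibility alias
    project = (provider_cfg.get("project_id")
               or env.get("SCW_PROJECT_ID")
               or env.get("SCW_DEFAULT_PROJECT_ID"))
    if not token or not project:
        raise ValueError("missing Scaleway credentials")
    return token, project

print(resolve_scaleway_creds({}, {"SCW_TOKEN": "tkn",
                                  "SCW_DEFAULT_PROJECT_ID": "proj"}))
# ('tkn', 'proj')
```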
To discover the live controller node mapping from the CLI side, run:

```
./bin/orchestrator-discovery --output table
```

It prints `controller_ip`, `server_id` (the Scaleway instance ID), and `hostname` for each registered node.
This repository now includes a small CLI app that talks to a Cerebras chat-completions API.
- Export your API key:

  ```
  export CEREBRAS_API_KEY=<your_key>
  ```

- Run a one-shot prompt:

  ```
  python3 -m cerebras_app.cli "Hello from Cerebras"
  ```

- Or start interactive mode:

  ```
  python3 -m cerebras_app.cli --interactive
  ```

- Optional configuration:
  - `--model` (default `llama3.1-8b`)
  - `--api-base` (default `https://api.cerebras.ai`)
  - `--endpoint` (default `/v1/chat/completions`)
  - `--system-prompt`
  - `--max-tokens`
  - `--temperature`
The entry point is also available as `cerebras-chat` if installed as a package.
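For reference, the request body for an OpenAI-style chat-completions endpoint mirrors the flags above. A hedged sketch, not the actual `cerebras_app` implementation:

```python
# Illustrative chat-completions request body; only the flag names and
# defaults come from the CLI docs above.
import json

def chat_request(prompt, model="llama3.1-8b", system_prompt=None,
                 max_tokens=None, temperature=None):
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": prompt})
    body = {"model": model, "messages": messages}
    if max_tokens is not None:
        body["max_tokens"] = max_tokens
    if temperature is not None:
        body["temperature"] = temperature
    return body

print(json.dumps(chat_request("Hello from Cerebras"), indent=2))
```

This body would be POSTed to `--api-base` + `--endpoint` with the `CEREBRAS_API_KEY` as a bearer token.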
This project is validated with `ty`, Astral's Python type checker.

Install options:

```
python3 -m pip install ty
# OR
uv add --dev ty
```

Run type checks:

```
# project root
ty check src/

# or with uv-managed env
uv run ty check
```

If dependencies are in a virtual environment, activate it before running `ty check` so type discovery works.