Run local compute on a user's Mac. Coordinate everything from one control plane.
HiveCompute keeps transcription work close to the user's hardware while the hosted coordinator handles
dispatch, retries, job tracking, and fleet visibility. The local worker obeys user preferences. The
coordinator decides what should run and when.
The local app and central coordinator stay in one repo because they are one protocol with one control plane.
Local worker app
The worker runs on a Mac, pulls jobs over outbound-only connections, and checks user preferences before doing any work.
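A minimal sketch of that outbound pull loop, with the lease, run, and ack steps injected as plain callables. The function names and shapes here are assumptions for illustration, not the real worker API:

```python
def pull_loop(lease_job, run_job, ack_job, prefs_allow, max_iterations):
    """Outbound-only worker loop: the worker initiates every request,
    so no inbound ports need to be open on the user's Mac.

    lease_job() -> job or None when the queue is empty
    run_job(job) -> result
    ack_job(job, result) -> reports completion so the lease is released
    prefs_allow() -> True only when user preferences permit work
    """
    for _ in range(max_iterations):
        if not prefs_allow():
            continue  # a real worker would back off and re-check later
        job = lease_job()  # outbound call to the coordinator
        if job is None:
            continue  # queue empty; poll again on the next tick
        result = run_job(job)
        ack_job(job, result)
```

Because the worker only ever dials out, the preference check sits in front of the lease call: a job is never even claimed while the user's rules say no.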
Coordinator API
Jobs are queued, leased, monitored, and retried from one hosted service instead of having peers coordinate
directly.
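One way to picture those lease, retry, and stale-work semantics is a small in-memory model. Class and field names are hypothetical, and the hosted service would persist this state rather than keep it in memory:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    priority: int                                   # only field used for queue ordering
    id: str = field(compare=False)
    attempts: int = field(default=0, compare=False)
    leased_until: float = field(default=0.0, compare=False)

class Coordinator:
    """Illustrative lease/retry semantics: a leased job that is never
    acked gets reaped back into the queue, then dead-lettered after
    too many attempts."""

    def __init__(self, lease_seconds=60, max_attempts=3):
        self.queue = []          # priority heap of waiting jobs
        self.leased = {}         # job id -> Job currently held by a worker
        self.dead = []           # jobs that exhausted their retries
        self.lease_seconds = lease_seconds
        self.max_attempts = max_attempts

    def submit(self, job):
        heapq.heappush(self.queue, job)

    def lease(self, now):
        if not self.queue:
            return None
        job = heapq.heappop(self.queue)
        job.attempts += 1
        job.leased_until = now + self.lease_seconds
        self.leased[job.id] = job
        return job

    def ack(self, job_id):
        self.leased.pop(job_id, None)  # worker finished; release the lease

    def reap_stale(self, now):
        """Return expired leases to the queue, or dead-letter them."""
        for job_id, job in list(self.leased.items()):
            if job.leased_until <= now:
                del self.leased[job_id]
                if job.attempts >= self.max_attempts:
                    self.dead.append(job)
                else:
                    heapq.heappush(self.queue, job)
```

The point of the lease-plus-reap design is that a worker that vanishes mid-job never strands the work: the coordinator notices the expired lease and requeues it without any peer-to-peer coordination.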
Hosted manager
Fleet and partner operations live behind a manager UI protected with passkeys instead of a shared bootstrap secret.
Flow
Dispatch without losing local control
What the user controls
When work is allowed to run.
Whether AC power is required.
How idle time and thermal checks should behave.
Which model backend the local worker should prefer.
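Those four knobs can be pictured as a single gate the worker consults before leasing anything. Every field name below is illustrative, not the actual preferences schema:

```python
from dataclasses import dataclass

@dataclass
class Preferences:
    """Knobs the user owns; names are assumptions for this sketch."""
    allowed_hours: tuple = (22, 6)          # run overnight: 22:00 up to 06:00
    require_ac_power: bool = True
    require_thermal_nominal: bool = True    # refuse work when the Mac is hot
    preferred_backend: str = "whisper.cpp"  # which model backend to prefer

def work_allowed(prefs, hour, on_ac_power, thermal_nominal):
    """True only when every user preference permits running a job now."""
    start, end = prefs.allowed_hours
    if start > end:  # window wraps past midnight
        in_window = hour >= start or hour < end
    else:
        in_window = start <= hour < end
    if not in_window:
        return False
    if prefs.require_ac_power and not on_ac_power:
        return False
    if prefs.require_thermal_nominal and not thermal_nominal:
        return False
    return True
```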
What the coordinator controls
Job intake and queue ordering.
Shard planning, retries, and stale-work reaping.
Partner tracking, billing, and fleet visibility.
Hosted manager access and operational review.
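As one hedged illustration of the shard-planning side, a coordinator for transcription work might split a long recording into overlapping chunks so each worker transcribes a bounded piece. The chunk length, overlap, and stitching strategy here are assumptions, not the real planner:

```python
def plan_shards(duration_s, shard_s=300.0, overlap_s=5.0):
    """Split a recording of duration_s seconds into (start, end) shards.

    A few seconds of overlap between neighbors gives the coordinator
    material to stitch transcripts cleanly at shard boundaries.
    """
    shards = []
    start = 0.0
    while start < duration_s:
        end = min(start + shard_s, duration_s)
        shards.append((start, end))
        if end >= duration_s:
            break
        start = end - overlap_s  # back up so neighbors overlap
    return shards
```

Bounded shards also make retries cheap: when a lease expires, only one small chunk is requeued, not the whole recording.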
Current posture
The local worker is built and running today. The public site and passkey-protected manager live on the same Fly deployment.
This first version gives you a clean public story for users plus a secure manager entry point for operators.
Next steps are tightening worker enrollment and moving machine secrets out of local YAML.