Deployment Overview and Topology
This page describes a practical deployment model for running AgentHub, primarily as a single service.
Architecture at a Glance
AgentHub deployment is intentionally simple:
- One backend process (Rust)
- One embedded web UI served by that same process
- One SQLite database for persisted state
- Browser clients connected with HTTP + SSE
Optional scale-out deployments can add remote Agent Nodes for execution and mailbox delivery while keeping AgentHub as the main control plane.
Runtime Components
Prepare these components before rollout:
- AgentHub executable (or `cargo run` for development)
- A valid `config.toml`
- Writable runtime home (default under `~/.agenthub/`)
- Explicit `safe_paths` for all allowed repositories/workdirs
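The checklist above can be captured in a minimal config sketch. Only `safe_paths` and the `[server]` table are named on this page; every other key below (listen address, exact placement of `safe_paths` in the file) is an illustrative assumption — check your actual `config.toml` reference for the real schema.

```toml
# Illustrative minimal config.toml — key names and placement are assumptions.
[server]
listen = "127.0.0.1:8080"  # assumed key; bind to an internal address

# Explicitly allow-list every repository/workdir agents may touch.
safe_paths = [
    "/home/agenthub/repos/project-a",
    "/home/agenthub/repos/project-b",
]
```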
Deployment Modes
Local development mode
Use this for daily personal work:
```shell
cd web
npm install
npm run build
cd ..
cargo run -- -c /path/to/config.toml
```
Team internal mode
Use this for shared environments:
- Build and run a fixed binary
- Run under a dedicated OS user (not root)
- Manage process lifecycle with a supervisor (for example systemd)
- Keep AgentHub behind an internal reverse proxy or VPN boundary
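For the supervisor step, a systemd unit along these lines is typical. This is a sketch, assuming the binary is installed at `/usr/local/bin/agenthub` and a dedicated `agenthub` OS user exists — adjust paths and user to your install.

```ini
# /etc/systemd/system/agenthub.service — illustrative sketch, not a shipped unit.
[Unit]
Description=AgentHub control plane
After=network-online.target
Wants=network-online.target

[Service]
User=agenthub
Group=agenthub
ExecStart=/usr/local/bin/agenthub -c /etc/agenthub/config.toml
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now agenthub` and follow logs with `journalctl -u agenthub -f`.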
Distributed node mode
Use this when execution must span multiple machines:
- Keep one AgentHub instance as the main control plane
- Register remote Agent Nodes from the `Agents` page
- Use encrypted gRPC between AgentHub and nodes
- Configure a node-specific default worktree root when remote filesystems differ
Distributed node prerequisites
Every participating node still runs the same `agenthub` binary. The difference
is which node acts as the main control plane and which nodes are registered as
remote execution targets.
Recommended remote-node baseline:
```toml
[server]
role = "node"  # main | node
node_id = "node-east"

[internal_grpc]
enabled = true
listen = "0.0.0.0:50051"

[internal_grpc.security]
mode = "tls"  # tls | mtls | disabled
cert_dir = "~/.agenthub/internal-grpc"

[internal_grpc.auth]
issuer = "agenthub"
audience = "agenthub-internal"
# optional: persisted to cert_dir/auth_secret.txt if omitted
shared_secret = "replace-me-for-production"

[internal_grpc.bootstrap]
# optional: persisted to cert_dir/bootstrap_token.txt if omitted
token = "replace-me-for-bootstrap"
```
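The `replace-me` placeholder secrets above should be replaced with strong random values before production use. One straightforward way to generate them, assuming `openssl` is available on the host:

```shell
# Generate a 32-byte hex value for internal_grpc.auth.shared_secret
openssl rand -hex 32

# Generate a separate, independent value for internal_grpc.bootstrap.token
openssl rand -hex 32
```

Generate the two values independently; reusing the auth secret as the bootstrap token widens the blast radius if either leaks.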
Operational notes:
- The main control-plane instance should keep the default `server.role = "main"` (or omit `server.role` entirely) so it continues to serve the public web/UI and API surface.
- `server.role = "node"` turns the process into a node-only runtime. In this mode AgentHub serves internal gRPC only and does not boot the public web/UI HTTP surface.
- `server.node_id` is required when `server.role = "node"` and must match the node id registered on the main control plane.
- `internal_grpc.enabled` must be `true` on the main control plane if you want to create or control remote-target agents, or if operators/scripts will use `agenthub actor ...` against the authority node.
- `tls` is the default recommended starting point. `mtls` is available when you also want client-certificate verification.
- Keep `internal_grpc.auth.shared_secret` explicitly in `config.toml` when local actor CLI commands use the loopback control plane. The server can persist a generated secret under `cert_dir/auth_secret.txt`, but the CLI client only reads the config file when minting its token.
- The node registry stores routing metadata only (`grpc_target`, `tls_server_name`, `default_worktree_root`). It does not store node bootstrap secrets.
- Remote-target agent creation fails fast when internal gRPC peer config is not available, so treat this as a deployment precondition rather than a runtime toggle.
Recommended rollout order
- Bring up the main AgentHub control plane with `internal_grpc.enabled = true`.
- Bring up each remote AgentHub node with the same internal gRPC auth/security policy plus a unique `server.node_id`.
- Verify the remote node exposes an `https://` internal gRPC endpoint that is reachable from the main control plane.
- Log into the main AgentHub UI as root and register the remote node from the `Agents` page.
- Set `Default worktree root` if the remote node should derive blank `create_worktree` workdirs automatically.
- Create a remote-target agent and confirm the agent card shows `node:<id>`.
- Start the agent and verify output/events are visible from the main control plane.
Recommended Network Shape
- Reverse proxy terminates TLS and forwards to AgentHub
- AgentHub listens on internal/private address where possible
- Users access a single stable URL (for login, UI, and API)
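As one concrete shape, an nginx front end can terminate TLS and forward to an internal AgentHub listener. The hostname, upstream port, and certificate paths below are assumptions for illustration. Note that SSE streaming needs response buffering disabled and a long read timeout, or event streams will stall or be cut off.

```nginx
# Illustrative nginx site config — hostname, port, and cert paths are assumptions.
server {
    listen 443 ssl;
    server_name agenthub.internal.example.com;

    ssl_certificate     /etc/nginx/tls/agenthub.crt;
    ssl_certificate_key /etc/nginx/tls/agenthub.key;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;

        # SSE: flush events to the client immediately and keep connections open.
        proxy_buffering off;
        proxy_read_timeout 1h;
    }
}
```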
Basic Startup Commands
If you build a release binary:
```shell
./agenthub -c /path/to/config.toml
```
If you run from source:
```shell
cargo run -- -c /path/to/config.toml
```
Post-Deploy Smoke Checklist
- Open AgentHub UI and verify login works.
- Create one agent with a safe test path.
- Start a short task and confirm status reaches a terminal state.
- Refresh browser and verify session history still exists.
- Confirm a path outside `safe_paths` is rejected.
- If distributed node mode is enabled, register one remote node and verify a remote-target agent can start and stream output back through the main control plane.