Cohesix is an open-source, high-assurance control-plane operating system built on the formally verified seL4 microkernel. It is designed to keep the trusted computing base intentionally small while enabling deterministic orchestration of edge GPU systems and auditable MLOps. Cohesix is "infrastructure for AGI".
For host tool usage, interdependencies, and policy/mount details, see HOST_TOOLS.md.
| Role | Capabilities | Namespace |
|------|--------------|-----------|
| Queen | Hive-wide orchestrator driven by cohsh: spawn/kill workers, bind/mount namespaces, inspect logs, request GPU leases across many worker instances | Full /, /queen, /shard/*/worker/* (canonical), legacy /worker/* when enabled, /log, /gpu/* (when installed), plus /policy + /actions and /audit + /replay when enabled |
| WorkerHeartbeat | Minimal worker that emits heartbeat telemetry and confirms console/attach paths; many instances may run concurrently under the Queen | /proc/boot, /shard/<label>/worker/<id>/telemetry, /log/queen.log (RO); legacy /worker/<id>/telemetry when enabled |
| WorkerGpu | GPU-centric worker that reads ticket/lease state and reports telemetry for host-provided GPU nodes; treated as another worker type under the Queen | WorkerHeartbeat view + /gpu/<id>/* |
Exactly one Queen exists per hive, but many worker instances (across worker-heart, worker-gpu, and future types) can be orchestrated simultaneously. The queen session attached via cohsh is the canonical path for operators and automation to exercise these roles.
- Canonical telemetry path: /shard/<label>/worker/<id>/telemetry.
- label is derived from sha256(worker_id): the top shard_bits bits of the first digest byte, formatted as two hex digits.
- sharding.legacy_worker_alias = true enables legacy /worker/<id>/telemetry aliases that resolve to the canonical shard path.
- Manifests must not reference legacy /worker/* in mounts or policy rules; coh-rtc rejects them deterministically.
- sharding.enabled requires secure9p.walk_depth >= 5 (canonical path depth). Example compiler error: sharding.enabled requires secure9p.walk_depth >= 5.
- sharding.legacy_worker_alias requires secure9p.walk_depth >= 3. Example compiler error: sharding.legacy_worker_alias requires secure9p.walk_depth >= 3.
- The ticket is bound to the role, worker identity (subject), and mount table.
- The ticket is presented at attach; NineDoor verifies the MAC and initialises session state.

Attachments always arrive via NineDoor: the queen mounts the full namespace, worker-heartbeat mounts only its telemetry and boot views, and worker-gpu attaches to the /gpu/<id>/ subtrees exposed to its ticket. Ticket values (when present) select the role-specific namespace, and NineDoor aborts attaches on ticket mismatch, timeouts, or unsupported roles, leaving cohsh detached with an explicit error.
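The label derivation can be sketched as follows. This is an illustrative reimplementation, not the Cohesix source: the `shard_label` helper name is hypothetical, and it takes the first SHA-256 digest byte directly so the sketch stays dependency-free.

```rust
/// Derive the shard label from the first byte of sha256(worker_id):
/// keep the top `shard_bits` bits of that byte and format the result
/// as two hex digits. Hypothetical helper; illustration only.
fn shard_label(first_digest_byte: u8, shard_bits: u32) -> String {
    assert!((1..=8).contains(&shard_bits), "shard_bits must be 1..=8");
    format!("{:02x}", first_digest_byte >> (8 - shard_bits))
}

fn main() {
    // With shard_bits = 4 the top nibble of the digest byte selects
    // the shard: 0xab -> 0x0a -> "0a".
    println!("{}", shard_label(0xab, 4));
}
```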
Role orchestration is file-oriented: control actions are append-only writes to control files that the queen
drives through cohsh or host tools. There is no ad-hoc RPC path.
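A minimal sketch of what an append-only control write looks like from the host side. The `op`/`worker` command fields are assumptions for illustration, not the documented Cohesix command schema, and a local temp file stands in for /queen/ctl.

```rust
use std::fs::OpenOptions;
use std::io::Write;

/// Append one JSONL command line to a control file. Control files are
/// append-only, so the file is opened in append mode and never seeked
/// or truncated. Command shape is illustrative.
fn append_ctl_command(path: &str, op: &str, worker: &str) -> std::io::Result<()> {
    let mut f = OpenOptions::new().create(true).append(true).open(path)?;
    writeln!(f, r#"{{"op":"{op}","worker":"{worker}"}}"#)
}

fn main() -> std::io::Result<()> {
    // A real queen session would target /queen/ctl inside the mounted
    // namespace; here a temp file stands in for it.
    let path = std::env::temp_dir().join("queen_ctl_demo.jsonl");
    let path = path.to_str().unwrap();
    let _ = std::fs::remove_file(path);
    append_ctl_command(path, "spawn", "worker-7")?;
    append_ctl_command(path, "kill", "worker-7")?;
    Ok(())
}
```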
Policy gating and approvals
- When policy gating is enabled, the /policy and /actions namespaces appear.
- /policy/rules is the manifest-derived snapshot of gate targets.
- /actions/queue is the approval/denial log. Each approval includes id, target, and decision.
- When a gated control file is written (e.g. /queen/ctl), the write is denied with ERR ECHO reason=policy ... EPERM until a matching approval is queued.
- Decisions are also recorded for audit when /audit is enabled.
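The gate check can be sketched as below. Field names (id, target, decision) follow the approval log described above; the types and the exact matching rule (same target, decision equal to "approve") are assumptions for illustration.

```rust
/// One entry from /actions/queue. Field names follow the doc;
/// types and decision strings are assumptions.
struct Approval {
    id: u64,
    target: String,
    decision: String, // "approve" or "deny"
}

/// A write to a gated control file is admitted only when a matching
/// approval has been queued; otherwise the caller sees
/// ERR ECHO reason=policy ... EPERM. Illustrative matching rule.
fn gate_allows(queue: &[Approval], target: &str) -> bool {
    queue.iter().any(|a| a.target == target && a.decision == "approve")
}
```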
Control files and observability

- /queen/schedule/ctl (append-only JSONL commands).
- /queen/lease/ctl (append-only JSONL commands).
- /queen/export/ctl (append-only JSONL commands).
- /policy/ctl (apply/rollback JSONL commands).
- /proc/schedule/summary, /proc/schedule/queue.
- /proc/lease/summary, /proc/lease/active, /proc/lease/preemptions.
- /policy/preflight/* and /proc/pressure/policy.

These paths are manifest-gated and bounded. If a namespace is missing, check the manifest settings and whether the host-side publishers (for /gpu or /host) are running.
The scheduler partitions tasks into bands (system, control, worker) with budgeted quanta; queen/control tasks reside in the higher band. Control flows are file-oriented (e.g. appends to /queen/ctl) instead of the deprecated RPC/virtual-console sketches; cohsh always runs outside the Cohesix instance (QEMU during development, UEFI hardware in deployment) and speaks the NineDoor transport.
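The band scheme above can be sketched as follows. The band ordering matches the doc (queen/control above workers); the quantum values and the highest-band-wins pick rule are illustrative assumptions, not Cohesix's actual budgets.

```rust
/// Scheduling bands, lowest to highest; queen/control work preempts
/// workers. Quantum values are illustrative only.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum Band { Worker, Control, System }

fn quantum_ticks(band: Band) -> u32 {
    match band {
        Band::System => 4,
        Band::Control => 2,
        Band::Worker => 1,
    }
}

/// Pick the next runnable task: the highest band wins; the per-band
/// quantum bounds how long it runs before the scheduler rotates.
fn pick_next<'a>(runnable: &'a [(Band, &'a str)]) -> Option<(&'a str, u32)> {
    runnable
        .iter()
        .max_by_key(|(band, _)| *band)
        .map(|(band, name)| (*name, quantum_ticks(*band)))
}
```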
Scheduling contexts originate in root-task: initial SCs are held by root, carved out for NineDoor and per-worker threads, and reclaimed on revocation without altering seL4 SC semantics or time accounting.
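The carve-out-and-reclaim lifecycle can be modeled as simple budget accounting. This is a toy model: real scheduling contexts are seL4 kernel objects managed via kernel invocations, and the `RootScPool` type here is purely hypothetical.

```rust
use std::collections::HashMap;

/// Toy model of SC carve-out: the root task owns the total budget,
/// lends slices to named threads (NineDoor, per-worker threads), and
/// reclaims them on revocation. Illustrative accounting only.
struct RootScPool {
    free_ticks: u32,
    lent: HashMap<String, u32>,
}

impl RootScPool {
    fn new(total: u32) -> Self {
        Self { free_ticks: total, lent: HashMap::new() }
    }

    /// Carve `ticks` out of the root budget for `who`; fails if the
    /// pool cannot cover the request.
    fn carve(&mut self, who: &str, ticks: u32) -> bool {
        if self.free_ticks < ticks {
            return false;
        }
        self.free_ticks -= ticks;
        *self.lent.entry(who.to_string()).or_insert(0) += ticks;
        true
    }

    /// Reclaim everything lent to `who`, e.g. after ticket revocation.
    fn reclaim(&mut self, who: &str) {
        if let Some(t) = self.lent.remove(who) {
            self.free_ticks += t;
        }
    }
}
```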
```rust
pub struct Budget {
    pub ticks: Option<u32>, // scheduler quanta
    pub ops: Option<u32>,   // NineDoor operations
    pub ttl_s: Option<u32>, // wall-clock lifetime
}
```
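A sketch of how the ops budget might be charged per request. The struct is repeated so the block compiles standalone; treating `None` as "no ops budget configured" and returning false as the depletion signal are assumptions, not the Cohesix implementation.

```rust
/// Repeats the doc's Budget struct so this sketch is self-contained.
pub struct Budget {
    pub ticks: Option<u32>, // scheduler quanta
    pub ops: Option<u32>,   // NineDoor operations
    pub ttl_s: Option<u32>, // wall-clock lifetime
}

/// Charge one NineDoor operation against the budget. Returning false
/// means the ops budget is depleted and the ticket should be revoked.
/// `None` is treated as "no ops budget" (an assumption).
fn charge_op(budget: &mut Budget) -> bool {
    match budget.ops {
        None => true,
        Some(0) => false,
        Some(n) => {
            budget.ops = Some(n - 1);
            true
        }
    }
}
```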
- NineDoor decrements ops budgets per successful request; when depleted it signals the root task for revocation.
- The root task issues Revoke(ticket_id) to NineDoor.
- NineDoor answers Rerror(Closed) on further operations and appends the revocation reason to /log/queen.log.

Cross-refs: see SECURE9P.md for namespace enforcement, USERLAND_AND_CLI.md for attach semantics, ARCHITECTURE.md for console and control path semantics, and HOST_TOOLS.md for operator-facing workflows.
- Yield path: /shard/<label>/worker/<id>/yield (legacy /worker/<id>/yield when enabled).