The most common architecture for AI platforms is the monolith. One large application that does everything: chat, code generation, document analysis, project management. It starts simple and ends in a tangled mess of competing concerns, where a bug in the sales module takes down the code editor.
ODIN takes a fundamentally different approach. Instead of one system that does everything, ODIN is six specialized hubs — each with a clear boundary, a defined contract, and a single area of expertise — connected by a shared memory layer called BrainDB and orchestrated by an intent-aware Router.
Why Hubs, Not a Monolith
The monolith problem is well-understood in traditional software engineering: tight coupling makes systems fragile, hard to extend, and impossible to reason about. The same problem applies to AI platforms, but with an additional dimension: AI systems need to maintain context across specialized domains without leaking responsibilities between them.
A legal analysis module should not silently modify sales commitments. A code generation tool should not override governance constraints. Each domain has its own rules, its own risk profile, and its own audit requirements. Mixing them in a single system means the lowest common denominator wins.
Hubs solve this by enforcing boundaries. Each hub has:
- A defined input/output contract: Every hub accepts structured intent and returns structured output with artifacts, memory writes, risk flags, and audit events
- Isolated responsibility: Hubs do not leak responsibilities or silently override each other
- Escalation protocols: If a hub is uncertain, it escalates rather than guesses
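The contract described above can be sketched as a small data structure. This is a hypothetical illustration, not ODIN's actual schema: the names `Intent`, `HubOutput`, and the specific fields are assumptions based on the article's description (structured intent in; artifacts, memory writes, risk flags, and audit events out; escalation instead of guessing).

```python
from dataclasses import dataclass, field
from typing import Any

# Hypothetical sketch of a hub's input/output contract.
# Field and type names are illustrative, not ODIN's real API.

@dataclass
class Intent:
    action: str                                  # e.g. "draft_contract"
    context: dict[str, Any] = field(default_factory=dict)

@dataclass
class HubOutput:
    artifacts: list[dict]        # generated documents, code, etc.
    memory_writes: list[dict]    # proposed writes to BrainDB
    risk_flags: list[str]        # e.g. ["compliance_uncertain"]
    audit_events: list[dict]     # governance trail entries
    escalated: bool = False      # True when the hub is uncertain

def handle(intent: Intent) -> HubOutput:
    # A hub that cannot confidently act escalates rather than guesses.
    if intent.action not in {"draft_contract", "review_clause"}:
        return HubOutput([], [], ["unknown_intent"],
                         [{"event": "escalated"}], escalated=True)
    return HubOutput(
        artifacts=[{"type": "draft", "action": intent.action}],
        memory_writes=[],
        risk_flags=[],
        audit_events=[{"event": "handled", "action": intent.action}],
    )
```

Because every hub returns the same shape, the Router and audit layer can treat all six uniformly while each hub keeps its own internal logic.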
The Six Hubs
Academy Hub (Port 3001)
The training and onboarding engine. Role-based learning paths, governance-aware exercises, and progress tracking. Academy does not just teach people how to use AI — it teaches them how to use AI within a governance framework.
Compass Hub (Port 3002)
The decision integrity engine. Captures every significant decision with rationale, alternatives, constraints, and stakeholder approval. Scores decisions on dependency impact and reversibility. Identifies organizational bottlenecks in decision flow.
Assistant Engine — LUNA (Port 3003)
The universal interface. Voice and chat access to the entire ODIN ecosystem. LUNA captures intent, classifies it via the Router, and dispatches to the appropriate hub with relevant context. Runs on local infrastructure (Whisper + Ollama) with cloud fallback for complex tasks.
Legal Hub (Port 3004)
Contract analysis, compliance validation, and legal guardrails. The Legal Hub can veto promises made by other hubs — if the Sales Engine generates a claim that violates compliance constraints, Legal flags it before it reaches the prospect.
Sales Engine (Port 3005)
Context-aware sales material generation. Pulls from BrainDB for organizational knowledge, validates through Legal for compliance, and produces auditable artifacts with full provenance tracking.
Coding Hub (Port 3008)
AI-assisted code generation, architecture decision records, and pull request management. Integrated with the Work Order system so every code change traces back to a defined objective.
The Router: Intent Classification
When a user interacts with ODIN — typically through LUNA — their request hits the Router (Port 3010). The Router does not just pattern-match keywords. It classifies intent and determines which hub or combination of hubs should handle the request.
A request like "draft a contract for the Acme proposal" gets routed to the Legal Hub with context from BrainDB about the Acme prospect. A request like "is our authentication middleware following best practices?" gets routed to the Coding Hub with relevant codebase context.
Multi-hub requests are also supported. "Prepare for the Acme meeting tomorrow" might trigger the Sales Engine for meeting materials and the Legal Hub for contract review, with results aggregated and presented through LUNA.
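To make the routing behavior concrete, here is a toy sketch in which keyword scoring stands in for the real classifier (the article is clear that the actual Router does more than pattern-match keywords). The hub names are from the article; the vocabulary sets and function are illustrative assumptions.

```python
# Toy intent router: keyword overlap stands in for real intent
# classification. Hub names come from the article; the keyword
# sets are invented for illustration.

HUB_KEYWORDS = {
    "legal":  {"contract", "compliance", "clause"},
    "coding": {"code", "middleware", "architecture", "practices"},
    "sales":  {"proposal", "meeting", "prospect"},
}

def route(request: str) -> list[str]:
    """Return every hub whose vocabulary matches the request."""
    words = set(request.lower().replace("?", "").split())
    hubs = [hub for hub, kw in HUB_KEYWORDS.items() if words & kw]
    return hubs or ["assistant"]   # fall back to LUNA when nothing matches
```

Note that a request like "draft a contract for the Acme proposal" matches more than one hub, which is exactly the multi-hub case: the Router dispatches to each matched hub and the results are aggregated through LUNA.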
BrainDB: The Shared Memory Layer
BrainDB (Port 3007) is what makes the hub architecture work as a unified system rather than six disconnected tools. It is the organizational memory layer — a governed knowledge store where every hub can read and write context.
Key principles:
- Governed writes: Every write to BrainDB includes rationale (why this was written), ownership (who can change it), and dependencies (what relies on it)
- Namespaced storage: Knowledge is organized into namespaces (brain/hubs/academy/config, brain/projects/acme/context, brain/decisions/auth-approach) so hubs access only relevant context
- Provenance tracking: Every piece of knowledge has a complete history — who wrote it, when, what triggered the write, and what has changed since
This means when the Sales Engine generates a proposal, it draws on the same organizational knowledge that the Legal Hub uses for compliance checks and the Academy Hub uses for training materials. There is one source of truth, not six copies that drift apart.
Hub Rules
The hub architecture is governed by explicit rules:
- Hubs do not leak responsibilities — the Coding Hub does not make legal judgments
- Hubs do not silently override other hubs — if the Sales Engine and Legal Hub disagree, the disagreement surfaces as a conflict, not a silent override
- If uncertain, hubs escalate — guessing is not an acceptable failure mode
- Legal Hub can veto Sales promises — compliance trumps convenience
- Sentinel Hub can gate risky dependencies — supply chain security is a first-class concern
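Two of these rules, the Legal veto and escalation-over-guessing, can be sketched as a conflict-resolution step. The function name and verdict values are assumptions; only the rules themselves come from the article.

```python
# Hypothetical conflict check between a Sales claim and a Legal
# review. The veto and escalation rules are from the article;
# the function signature and verdict strings are assumptions.

def resolve(sales_claim: str, legal_verdict: str) -> dict:
    """Legal can veto a Sales promise; disagreement surfaces, never hides."""
    if legal_verdict == "veto":
        # Compliance trumps convenience: the claim is blocked, visibly.
        return {"status": "blocked", "claim": sales_claim,
                "reason": "legal_veto", "escalated": True}
    if legal_verdict == "uncertain":
        # Guessing is not an acceptable failure mode: escalate.
        return {"status": "escalated", "claim": sales_claim,
                "escalated": True}
    return {"status": "approved", "claim": sales_claim, "escalated": False}
```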
Cross-Hub Collaboration
The most powerful interactions happen when hubs collaborate:
- A work order in the Coding Hub triggers Compass to capture the architectural decision and Audit to record the governance trail
- A sales proposal from the Sales Engine gets validated by Legal and enriched with context from BrainDB
- A training completion in Academy informs Compass about team capability for future decision assignments
- An escalation from any hub triggers governance workflows that span multiple hubs with full audit trails
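One way to picture these collaborations is as event fan-out: one hub emits an event, and a subscription table determines which governed follow-ups fire in other hubs. The table entries mirror the examples above; the mechanism itself is an illustrative assumption, not ODIN's documented implementation.

```python
# Hypothetical event fan-out for cross-hub collaboration. The
# event/action pairs mirror the article's examples; the routing
# table mechanism is an assumption.

SUBSCRIPTIONS = {
    "coding.work_order_created": ["compass.capture_decision",
                                  "audit.record_trail"],
    "sales.proposal_drafted":    ["legal.validate",
                                  "braindb.enrich_context"],
    "academy.training_completed": ["compass.update_capability"],
}

def fan_out(event: str) -> list[str]:
    """Return the cross-hub actions a single event triggers."""
    # Unrecognized events trigger the governance escalation path
    # rather than being dropped silently.
    return SUBSCRIPTIONS.get(event, ["governance.escalate"])
```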
The Result
Six hubs, one brain. Each hub does one thing exceptionally well. BrainDB ensures they share context without sharing responsibility. The Router ensures intent reaches the right hub. And the audit trail ensures nothing happens in the dark.
This is not a modular monolith with artificial boundaries. It is a genuine service architecture where each component can evolve, scale, and be governed independently — while participating in a unified organizational intelligence layer. For a deeper exploration of how these specialized components create compounding intelligence, read the beehive effect: scaling organizational intelligence. And to understand the governance architecture underpinning this system, see AI governance without the bureaucracy.
The full hub ecosystem is available in every ODIN deployment. Schedule a demo to see how six specialized hubs work as one system.