

ODIN (Omni-Domain Intelligence Network) is an intelligence system developed by Odin Labs.


Beyond Chat: How OdinClaw Turns AI Into Enterprise Infrastructure

OdinClaw is not a chat interface. It is an agentic AI gateway with MCP servers, a dual-harness security architecture, and a REST API for AI orchestration — all OpenAI-compatible.

Mitchell Tieleman
Co-Founder & CTO
21 March 2026 · 6 min read

Most developers encounter AI through a chat box. They type a message, the model responds, and that is the full extent of the integration. This works for demos. It does not work for enterprise software.

Enterprise AI needs to persist memory across sessions. It needs to validate decisions against governance policies. It needs to generate proposals, pitch decks, and compliance reports on demand. It needs security controls that are enforced at the architecture level — not configured and hoped for.

OdinClaw is the infrastructure layer we built to make this possible.

What OdinClaw Is

OdinClaw is an agentic AI gateway, fully compatible with the OpenAI API. The integration path is two lines: change your base_url and your api_key. Your existing application keeps working, and it immediately gains access to web search, code execution, URL reading, file analysis, and image generation as built-in tools.
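The two-line change applies to any OpenAI SDK (pass the gateway's `base_url` and your `oc_live_` key to the client constructor). As a dependency-free illustration of what goes over the wire, here is a minimal sketch assuming the endpoint mirrors the OpenAI chat completions path — the model name and key are placeholders:

```python
import json
import urllib.request

BASE_URL = "https://api.claw.odin-labs.ai/v1"

def build_chat_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat completion request against the OdinClaw gateway."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# With the official OpenAI SDK, the equivalent is just:
#   client = OpenAI(base_url=BASE_URL, api_key="oc_live_...")
req = build_chat_request("oc_live_example", "claude-sonnet-4-6", "Hello")
```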

But the OpenAI compatibility is just the entry point. What OdinClaw actually provides is a set of capabilities that go considerably further than model access.

Four MCP Servers, Ready to Install

The Model Context Protocol (MCP) is how AI clients like Claude Desktop, Cursor, and VS Code receive new capabilities. An MCP server is a package you install that exposes tools the AI can call. We have published four:

Brain MCP (@odinlabs-ai/brain-mcp-server) gives the model persistent memory. It exposes four tools: memory query, memory write, memory search, and namespace list. When you install Brain MCP, your AI can write facts to a governed knowledge store and retrieve them in future sessions. This is the difference between an AI that forgets everything when the conversation ends and one that accumulates organizational context over time.

Governance MCP (@odinlabs-ai/governance-mcp-server) brings enterprise decision controls. Four tools: policy validate, compliance check, audit query, decision log. Before the AI commits to a recommendation, it can verify that recommendation against your governance policies. Every decision gets logged. Every audit query returns structured, queryable results.

Sales MCP (@odinlabs-ai/sales-mcp-server) handles the documents that move deals forward. KYC enrichment, proposal generation, pitch deck creation, and offer generation — all as tool calls. The AI does not just describe what a proposal might contain; it generates the document.

Academy MCP (@odinlabs-ai/academy-mcp-server) enables intelligent learning workflows. Curriculum query, lesson content retrieval, exercise generation, and answer assessment. These tools let you build AI-powered training experiences that understand your actual course content.

All four are published on GitHub Packages under @odinlabs-ai. Install via npm, point your MCP-compatible client at the server, and the tools are available.
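Client wiring differs per MCP client, but most (Claude Desktop, Cursor) read an `mcpServers` map that tells the client how to launch each server. The config shape below is an assumption based on common MCP clients, not OdinLabs documentation, and launching from GitHub Packages may additionally require an `.npmrc` registry entry:

```python
import json

def mcp_server_entry(package: str) -> dict:
    """Build an mcpServers entry that launches an npm-published MCP server via npx."""
    return {"command": "npx", "args": ["-y", package]}

config = {
    "mcpServers": {
        "brain": mcp_server_entry("@odinlabs-ai/brain-mcp-server"),
        "governance": mcp_server_entry("@odinlabs-ai/governance-mcp-server"),
    }
}
print(json.dumps(config, indent=2))
```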

Security That Runs Before the Model

Security in most AI integrations is an afterthought. You add a content filter after you realize you need one, configure rate limits after your first abuse incident, and add logging after your first compliance inquiry.

OdinClaw inverts this. Security runs before the model sees any request.

Dual verification means every incoming request passes through two independent harnesses. The gateway checks API keys, rate limits, and request structure. The governance layer validates the request against configured policies. Both must pass.

The classification engine runs 14 detection patterns across 4 severity levels. It evaluates content for sensitive information before it leaves your system — before it reaches the model, before it reaches any external API. The classification happens at the gateway, not as a post-processing step.
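The actual 14 patterns are internal to the gateway, but the shape of a pre-model classification pass is easy to sketch. The two patterns and severity labels below are purely illustrative, not OdinClaw's real rules:

```python
import re

# Illustrative patterns only; OdinClaw's engine runs 14 patterns
# across 4 severity levels, at the gateway, before any model call.
PATTERNS = [
    ("api_key", re.compile(r"\boc_live_[A-Za-z0-9]+\b"), "critical"),
    ("email", re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "medium"),
]

def classify(text: str) -> list[tuple[str, str]]:
    """Return (pattern name, severity) for every pattern that matches the text."""
    return [(name, sev) for name, rx, sev in PATTERNS if rx.search(text)]

hits = classify("contact alice@example.com with key oc_live_abc123")
```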

The kill switch is exactly what it sounds like. If a specific capability — LLM access, code generation, web fetch — needs to be disabled for a specific API key or target, you disable it. Instantly. The change is logged to the audit chain.

The audit chain uses SHA-256 hashing to create tamper-evident logs of every interaction. This is not a database table that can be quietly edited. The chain structure means any modification to a past record breaks the chain's integrity, making tampering detectable.
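The hash-chain idea can be shown in a few lines. This is a minimal sketch of the mechanism, not OdinClaw's actual log format: each entry's hash covers both its record and the previous hash, so editing any past record invalidates every later link.

```python
import hashlib
import json

def record_hash(prev_hash: str, record: dict) -> str:
    """SHA-256 over canonical JSON of the record plus the previous link."""
    payload = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain: list, record: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "hash": record_hash(prev, record)})

def verify(chain: list) -> bool:
    """Recompute every link; any edit to a past record breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        if entry["hash"] != record_hash(prev, entry["record"]):
            return False
        prev = entry["hash"]
    return True

chain: list = []
append(chain, {"action": "llm_call", "key": "oc_live_example"})
append(chain, {"action": "kill_switch", "capability": "web_fetch"})
ok_before = verify(chain)                      # chain is intact
chain[0]["record"]["key"] = "oc_live_forged"   # quiet edit to a past record
ok_after = verify(chain)                       # tampering is now detectable
```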

This security architecture lives in openclaw-integration/src/security/. It is not a marketing claim — it is running code.

Agent Protocol: Orchestration via REST

The five agentic tools (web search, code execution, URL reader, file upload, image generation) are available to any model automatically. But sometimes you need to invoke a specific capability programmatically, from application code, not from a model conversation.

Agent Protocol exposes five service capabilities at /ap/v1/*:

  • code-review — structured analysis of pull requests and code changes
  • document-generation — proposal, report, and content creation
  • knowledge-query — semantic search across organizational memory
  • training-content — curriculum and lesson material retrieval
  • compliance-check — policy validation against governance rules

The interface is a standard REST POST:

POST /ap/v1/agent/tasks
{
  "capability": "code-review",
  "target": "https://github.com/org/repo/pull/42",
  "options": {
    "model": "claude-sonnet-4-6",
    "include_suggestions": true
  }
}

No SDK required. No special client library. If you can make an HTTP request, you can call the Agent Protocol.
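A task creation call needs nothing beyond an HTTP client. Here is a stdlib sketch of the request above; it assumes the Agent Protocol lives on the same host as the main API and uses the Bearer auth scheme from the Getting Started section, and it omits response handling:

```python
import json
import urllib.request

def create_task(api_key: str, capability: str, target: str, **options) -> urllib.request.Request:
    """Build a POST /ap/v1/agent/tasks request for the Agent Protocol."""
    body = {"capability": capability, "target": target, "options": options}
    return urllib.request.Request(
        "https://api.claw.odin-labs.ai/ap/v1/agent/tasks",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = create_task(
    "oc_live_example",
    "code-review",
    "https://github.com/org/repo/pull/42",
    model="claude-sonnet-4-6",
    include_suggestions=True,
)
```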

B2B Self-Serve Onboarding

The self-serve path is intentional. We are not interested in enterprise sales cycles that require three calls, a legal review, and a 90-day evaluation period before you can write a line of code.

Sign up at app.claw.odin-labs.ai. Your API key is ready immediately. The free tier includes 100K tokens — enough to build a working integration and understand whether OdinClaw fits your needs. No credit card required.

When you are ready to scale, the upgrade path is straightforward: Starter at €9/month for 1M tokens, Pro at €29/month for 5M tokens, Scale at €99/month for 25M tokens. All plans include all models and all agentic tools. Token overages bill at transparent per-token rates.
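Leaving overage rates aside (the article does not list them), the included quotas can be compared per token. The figures below derive only from the plan prices quoted above:

```python
# Effective euros per million included tokens, from the quoted plans.
plans = {"Starter": (9, 1), "Pro": (29, 5), "Scale": (99, 25)}  # (EUR/month, M tokens)

effective = {name: price / mtok for name, (price, mtok) in plans.items()}
# Starter: 9.00 EUR/M tokens, Pro: 5.80, Scale: 3.96
```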

What This Enables

The combination of OpenAI compatibility, MCP servers, dual-harness security, and Agent Protocol REST endpoints means you can build things that are not possible with a standard model API:

An application that remembers what it learned in previous sessions, checks its decisions against governance policies, and logs every action to a tamper-evident audit chain. A sales workflow that enriches prospect data, generates a customized pitch deck, and routes the result through a compliance check — all in a single agentic loop. A training system that queries existing curriculum, generates exercises calibrated to the learner's progress, and assesses answers against structured rubrics.

These are not hypothetical use cases. They are what the four MCP servers are built for. The governance and audit capabilities behind OdinClaw are the same ones described in our AI governance framework for enterprises. For how this infrastructure deploys on your own servers, see how to deploy AI on your own infrastructure.

Getting Started

The API base URL is api.claw.odin-labs.ai/v1. Your API key starts with oc_live_. Every major OpenAI SDK works without modification.

For MCP integration, install the relevant server package via npm and configure your client. For Agent Protocol, make HTTP POST requests to /ap/v1/agent/tasks with your API key in the Authorization header.

The documentation is available at app.claw.odin-labs.ai. The free tier starts immediately.


Questions about OdinClaw for your specific use case? Get in touch — we respond to technical inquiries directly.

Tags: OdinClaw, MCP, AI Gateway, Enterprise, Security, Agent Protocol