Agentic Layer
Welcome to the Agentic Layer documentation.
The Challenge
Organizations have successfully set up initial process automations with Generative AI. But now they face a new set of challenges: backend integrations are growing in complexity, long-term maintainability is uncertain, and compliance and security requirements keep increasing.
It is clear that agentic automation — where AI agents act autonomously to execute complex workflows — is the next logical step. But committing deeply to a rapidly evolving and volatile technology landscape carries real risk. How do you move forward without locking yourself into today’s tools and frameworks?
The Solution
The Agentic Layer is an open-source, Kubernetes-native collection of building blocks for orchestrating multi-agent AI systems in a flexible, secure, and maintainable way. It acts as an abstraction layer and compliance guardian between your business applications and the fast-moving world of AI agents, models, and protocols.
It is composed of a set of core components and controllers that form an enterprise-grade platform with interchangeable parts, adaptable to individual requirements.
Why the Agentic Layer
Digital Sovereignty & Technology Independence
Stay in control of technical inflation through a best-of-breed integration of open-source building blocks. Every component is interchangeable, and the platform is built on open interfaces and standards — including A2A, MCP, OpenTelemetry, and Kubernetes. This ensures full independence from specific frontends, models, or vendors.
Sustainable Maintainability & Operability
Complex agent logic is encapsulated outside of orchestration tools in a secure and highly integrated operating environment. This separation makes the platform maintainable and operable in the long term — turning the investment into agentic automation into a sustainable one.
Architecture Overview
The following diagram shows how the Agentic Layer sits between the frontends and applications at the top, the Kubernetes-based platform at the bottom, and the individual Agentic Workforces and Connectors that run your business.
There are two kinds of key components: the Agent Runtime and the various Gateways. The Agent Runtime runs your agents and connects them to other agents as well as to tools and connectors. It supports both custom agent implementations and ready-to-use templates, each of which can be built with different frameworks, as long as they support the A2A protocol.
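To make the A2A requirement concrete, here is a minimal sketch of the discovery document an A2A-capable agent typically serves (often at `/.well-known/agent.json`) so that runtimes and gateways can find and describe it. The field names follow the A2A agent card at a high level, but treat the exact schema, and the `summarize` skill, as illustrative assumptions; consult the current A2A specification for the authoritative shape.

```python
import json

def build_agent_card(name: str, base_url: str) -> dict:
    """Build a simplified A2A-style agent card (field names are assumptions)."""
    return {
        "name": name,
        "description": f"{name} running inside the Agent Runtime",
        "url": base_url,                       # where the agent's A2A endpoint is exposed
        "version": "0.1.0",
        "capabilities": {"streaming": False},  # advertise optional protocol features
        "skills": [
            {
                "id": "summarize",             # hypothetical example skill
                "name": "Summarize text",
                "description": "Returns a short summary of the input text.",
            }
        ],
    }

card = build_agent_card("invoice-agent", "https://agents.example.com/invoice")
print(json.dumps(card, indent=2))
```

Any framework that can serve such a card and answer A2A messages can plug into the Agent Runtime.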
The Testbench connects to the Agent Runtime and uses Kubernetes not only to run your AI evaluations in the cluster, but also to trigger the tests whenever the infrastructure (such as agents or models) changes.
The gateways control the traffic to and from the agents. They are essentially classical API gateways, but focused on AI protocols. The Agent Gateway exposes agents to the outside world and can add support for further APIs, for example making an agent compatible with a frontend that only speaks OpenAI's Chat Completions API, or with emerging protocols such as AG-UI or A2UI. Exposing an agent through MCP is also a pragmatic option until dedicated agent protocols gain broader adoption.
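The protocol translation an Agent Gateway performs can be sketched as a mapping from an OpenAI-style Chat Completions request onto an A2A-style message. Both payload shapes below are simplified assumptions, not the exact wire formats of either protocol:

```python
def chat_completion_to_a2a(chat_request: dict) -> dict:
    """Map a Chat Completions-style request to a simplified A2A-style message."""
    # Take the latest user message as the input for the agent.
    user_messages = [m for m in chat_request["messages"] if m["role"] == "user"]
    latest = user_messages[-1]["content"] if user_messages else ""
    return {
        "message": {
            "role": "user",
            "parts": [{"type": "text", "text": latest}],
        }
    }

request = {
    "model": "invoice-agent",  # the gateway routes on this field instead of a model name
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize invoice #4711."},
    ],
}
print(chat_completion_to_a2a(request))
```

A real gateway would also translate the agent's A2A response back into the shape the frontend expects, handle streaming, and propagate errors.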
The AI Gateway, sometimes referred to as a model facade or model router, abstracts the calls to the LLMs and allows you to switch providers quickly or to host models locally for certain use cases. It also enables fine-grained control, such as adding guardrails that check for personal data, tracking LLM costs, or centrally logging all interactions for compliance and auditing.
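These AI Gateway concerns, provider routing, guardrails, and cost tracking, can be illustrated with a small sketch. The provider name, the per-token price, and the email regex standing in for real PII detection are all illustrative assumptions; a production gateway would use a proper PII classifier and live provider pricing:

```python
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")  # crude stand-in for PII detection

class AIGateway:
    def __init__(self, providers: dict, price_per_1k_tokens: float = 0.01):
        self.providers = providers            # provider name -> callable(prompt) -> str
        self.price_per_1k = price_per_1k_tokens
        self.total_cost = 0.0                 # accumulated cost across all calls

    def complete(self, provider: str, prompt: str) -> str:
        if EMAIL_RE.search(prompt):           # guardrail: block personal data
            raise ValueError("guardrail: prompt contains an email address")
        tokens = len(prompt.split())          # crude token estimate for illustration
        self.total_cost += tokens / 1000 * self.price_per_1k
        return self.providers[provider](prompt)

# A local "provider" standing in for a real LLM backend.
gateway = AIGateway({"local-llm": lambda p: f"echo: {p}"})
print(gateway.complete("local-llm", "Summarize the quarterly report"))
```

Because callers only see the gateway, swapping `local-llm` for a cloud provider is a configuration change rather than an application change.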
The Tool Gateway is the facade towards all internal and external (MCP) tools. Besides the aforementioned aspects, authentication and authorization become especially important here, as this is where data is read and modified.
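A minimal sketch of per-tool authorization at such a gateway: every tool call is checked against the caller's granted scopes before it reaches a backend. The tool names and scope strings are illustrative assumptions:

```python
class ToolGateway:
    def __init__(self):
        self.tools = {}  # tool name -> (required scope, handler)

    def register(self, name: str, required_scope: str, handler):
        self.tools[name] = (required_scope, handler)

    def call(self, caller_scopes: set, name: str, **kwargs):
        required_scope, handler = self.tools[name]
        if required_scope not in caller_scopes:  # authorization check before any data access
            raise PermissionError(f"{name} requires scope {required_scope!r}")
        return handler(**kwargs)

gateway = ToolGateway()
# Hypothetical CRM lookup tool guarded by a "crm:read" scope.
gateway.register("crm.read", "crm:read", lambda customer_id: {"id": customer_id})
print(gateway.call({"crm:read"}, "crm.read", customer_id=42))
```

Centralizing this check in the gateway means individual agents never hold tool credentials themselves.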
The Agentic Layer is designed so that individual components can be switched easily and new implementations can be contributed independently. For each type of gateway, many solutions, a lot of them open source, are emerging right now and waiting for adoption. We highly appreciate contributions that add support for additional implementations.
Pluggable Implementations
A core design principle of the Agentic Layer is that every gateway and agent runtime is replaceable. Rather than prescribing a single technology, the platform defines Kubernetes Custom Resource Definitions (CRDs) that act as contracts between the platform and the concrete implementations.
Each CRD-backed resource represents a point of extensibility:
- Agent implementations can be built with any framework — LangChain, LlamaIndex, custom code — as long as they expose an A2A-compatible interface. The Agent Runtime Operator manages their lifecycle uniformly, regardless of the underlying technology.
- Agent Gateway implementations can be swapped or extended to support additional protocols and APIs. You can use an existing solution or contribute a new one without touching the rest of the platform.
- AI Gateway implementations can point to any combination of LLM providers — cloud-hosted, self-hosted, or local — making it straightforward to change models or add new providers as the landscape evolves.
- Tool Gateway implementations give teams the flexibility to integrate with different MCP-compatible tool servers and internal APIs, each with its own authentication and authorization strategy.
This structure ensures that adopting the Agentic Layer is not a commitment to a fixed set of technologies, but rather a commitment to a well-defined operating model that accommodates change.
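As a sketch of what such a CRD-backed contract could look like, here is a hypothetical Agent resource. The API group, kind, and field names are illustrative assumptions, not the project's actual schema; they only show the idea of declaring an agent as a Kubernetes resource while leaving the implementation interchangeable:

```yaml
# Hypothetical example; consult the project's CRD reference for the real schema.
apiVersion: agentic-layer.example.com/v1alpha1
kind: Agent
metadata:
  name: invoice-agent
spec:
  framework: langchain          # interchangeable, as long as A2A is exposed
  image: registry.example.com/agents/invoice-agent:0.1.0
  protocols:
    - a2a
```

An operator watching resources of this kind can then deploy, update, and remove agents uniformly, whatever framework each one uses internally.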
The documentation is organized using Diátaxis.
