

Pangea Integration with TrueFoundry's AI Gateway

October 9, 2025 | 9:30 min read

Modern LLM teams move fast, but they also need real, practical AI security. We have integrated with many guardrail providers, and now we bring another integration for our enterprise clients: Pangea for the TrueFoundry AI Gateway, so that teams can detect prompt injection, redact sensitive data, and enforce content policies without rewiring their stack.

What is Pangea (and why pair it with an AI Gateway)?

Pangea provides a suite of programmable security services tailored for AI workloads—most notably AI Guard for detecting risky content and enforcing policies, and Redact for automatically removing sensitive data. It introduces the idea of recipes: reusable guard configurations you define in the Pangea console and call from your app or platform. Bringing Pangea into your AI gateway means you can apply these safeguards to every request and response across providers, models, tools, and agents without touching application code paths.
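As a sketch of what a recipe-driven guard call looks like, here is a direct request to the AI Guard text-guard endpoint. The endpoint path and domain format follow the example later in this post; the `ai-guard` service name, the recipe key, and the token value are assumptions/placeholders, not documented values:

```python
import json
from urllib import request

PANGEA_DOMAIN = "aws.us-west-2.pangea.cloud"  # placeholder: your project's domain
API_TOKEN = "pts_example_token"               # placeholder: service token from the Pangea Console

def build_guard_request(text, recipe="pangea_prompt_guard"):
    """Build the URL, headers, and JSON body for an AI Guard text-guard call.

    Endpoint shape follows https://<service_name>.<domain>/v1/text/guard,
    with "ai-guard" assumed as the service name.
    """
    url = f"https://ai-guard.{PANGEA_DOMAIN}/v1/text/guard"
    headers = {
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"text": text, "recipe": recipe})
    return url, headers, body

def call_guard(text, recipe="pangea_prompt_guard"):
    """Send the guard request (requires network access and a real token)."""
    url, headers, body = build_guard_request(text, recipe)
    req = request.Request(url, data=body.encode(), headers=headers, method="POST")
    with request.urlopen(req) as resp:
        return json.load(resp)
```

In the gateway integration you never write this call yourself; the AI Gateway issues it inline on your behalf once the guard is configured.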

Why security matters for an AI Gateway

  • Centralized defenses. Enforce org-wide guardrails at the AI gateway—not app by app.
  • Data stays home. Traffic flows through your controlled environment; you decide what’s logged and where.
  • Defense-in-depth. Detect injections, defang URLs, block exfiltration attempts, and redact PII before it reaches models or users—clean inputs, clean outputs.  
  • Operational simplicity. One place to wire credentials, one policy surface to manage—less drift, more consistent AI security.  

How the integration works

At a high level:

  1. Create an AI Guard recipe in Pangea (e.g., block prompt-injection, sanitize URLs, redact patterns).  
  2. In TrueFoundry, add a Pangea guard to your route or organization policy—point it at your Pangea project domain and recipe key, and reference a stored API key.  
  3. The AI gateway calls Pangea inline for prompts and/or completions, then enforces the decision (allow, block, redact, transform) before forwarding to the model or client.  
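The inline-enforcement flow in step 3 can be sketched as a small function. The verdict shape here is illustrative (not Pangea's actual response schema), and `guard`/`model` are injected callables standing in for the Pangea call and the upstream LLM:

```python
class Blocked(Exception):
    """Raised when a guard verdict blocks the text."""

def apply_guard(text, verdict):
    """Enforce an illustrative verdict: {"blocked": bool, "transformed": bool, "text": str}.

    Blocked text raises; transformed text (redacted/defanged) replaces the original.
    """
    if verdict.get("blocked"):
        raise Blocked("guard policy blocked this text")
    if verdict.get("transformed"):
        return verdict["text"]
    return text

def gated_completion(prompt, guard, model):
    """Guard the prompt, call the model, then guard the completion before returning it."""
    safe_prompt = apply_guard(prompt, guard(prompt))
    completion = model(safe_prompt)
    return apply_guard(completion, guard(completion))
```

The real gateway does this per-request at the routing layer, which is why application code never changes when policies do.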

Supported guard types

You can attach Pangea checks to any of these phases:

  • Prompt (pre-model)
  • Completion (post-model)
  • Prompt & Completion (both directions)

They’re configured as “guards” in the gateway, with Pangea as the provider.  

Adding Pangea Integration

  • Name: Enter a name for your guardrails group.
  • Collaborators: Add collaborators who will have access to this group.
  • Pangea Config:
    • Name: Enter a name for the Pangea configuration.
    • Domain: Domain of the cloud provider and region where your Pangea project is configured. Example: if endpoint is https://<service_name>.aws.us-west-2.pangea.cloud/v1/text/guard, the input should be: aws.us-west-2.pangea.cloud
    • Recipe: (Optional) The key of a recipe (a configuration of data types and settings) defined in the Pangea User Console. It specifies the rules to apply to the text, such as defanging malicious URLs.
  • Overrides: (Optional) Enable this option to apply custom overrides for the Pangea account configuration.
  • Guard Type: Select the type of guard you want to apply from the dropdown menu.
  • Pangea Authentication Data:
    • API Key: The API key for Pangea authentication.
      This key is required to authenticate requests to Pangea services. You can obtain it from the Pangea Console by navigating to your project dashboard and selecting the “Tokens” or “API Keys” section. Keep this key secure, as it grants access to your Pangea security services.
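Pulled together, the fields above might look like the following. This is a sketch in dict form, not a documented config file format; values are placeholders, and field names simply mirror the form fields described above:

```python
pangea_guard_config = {
    "name": "pangea-prod-guardrails",        # guardrails group name
    "collaborators": ["ml-platform-team"],   # who can access this group
    "pangea_config": {
        "name": "pangea-us-west",
        # From endpoint https://<service_name>.aws.us-west-2.pangea.cloud/v1/text/guard,
        # the domain field is everything after the service name:
        "domain": "aws.us-west-2.pangea.cloud",
        "recipe": "pangea_prompt_guard",     # optional recipe key from the Pangea User Console
    },
    "overrides": None,                       # optional custom account-config overrides
    "guard_type": "prompt_and_completion",   # prompt | completion | prompt_and_completion
    "auth": {"api_key": "PANGEA_API_KEY"},   # reference a stored secret, never a literal key
}
```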

What enforcement looks like

  • Block: request/response is stopped with a clear reason and code path for observability.
  • Redact: sensitive spans are removed before forwarding to the LLM or client (using Redact).  
  • Transform: unsafe constructs can be defanged (e.g., URLs), then safely passed along via the AI gateway.  
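As an illustration of the transform action, defanging rewrites a URL so it stays readable but cannot be followed. A minimal sketch (this is a common defanging convention, not Pangea's implementation):

```python
import re

def defang_urls(text):
    """Rewrite http(s) URLs into a non-clickable form, e.g. hxxps://example[.]com."""
    def defang(match):
        url = match.group(0)
        url = url.replace("http", "hxxp", 1)   # break the scheme
        return url.replace(".", "[.]")          # break the dots
    return re.sub(r"https?://\S+", defang, text)
```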

All decisions are visible in your gateway logs; Pangea also maintains an audit trail within your project for investigations and reviews.  

Frequently asked

Does this add latency?
The call happens at the AI gateway; with caching and concise recipes, the overhead is typically small relative to model latency.

Is model choice constrained?
No. Policies apply across providers and models since they’re enforced at the AI gateway boundary.

Can we combine with other guardrails?
Yes, stack Pangea with additional gateway guards for layered AI security.  

Get started

  • Follow the step-by-step TrueFoundry docs for Pangea configuration.
  • Review Pangea’s AI Guard concepts (recipes, actions) to design the right policy.  

If you’re scaling LLM workloads, this pairing gives you a clean, centralized control point: AI security that travels with every call, and an AI gateway that keeps your apps fast, consistent, and compliant.
