TrueFoundry integration with Braintrust

December 11, 2025 | 9:30 min read

In this post, we’ll walk through:

  • What Braintrust is and why it matters for LLM teams
  • How TrueFoundry AI Gateway exports rich traces via OpenTelemetry (OTEL)
  • A step-by-step guide to wiring AI Gateway and Braintrust

Why pair TrueFoundry AI Gateway with Braintrust?

Modern LLM stacks are getting increasingly complex:

  • Multiple models (OpenAI, Anthropic, self-hosted, etc.)
  • Agents and tools calling each other
  • Prompt templates, rerankers, retrievers, business logic in between

TrueFoundry AI Gateway gives you a unified control plane for all your LLM traffic – routing, authentication, rate limiting, cost tracking, caching, and more – across providers and models.

Braintrust is an LLM engineering platform that lets you trace, evaluate, and experiment on top of those calls. It captures detailed traces of your LLM interactions (inputs, outputs, latency, token usage, costs), and layers powerful evaluation and analytics on top.

By exporting OTEL traces from TrueFoundry AI Gateway to Braintrust, you get:

  • End-to-end visibility: Every LLM call, across models and providers, in one place
  • Rich context: Prompts, responses, metadata, latency, token usage, costs
  • Production-grade evaluation: Run experiments and automatic evaluations on real traffic
  • Faster iteration cycles: Ship changes with confidence instead of guessing

Quick primer: What is Braintrust?

Braintrust is designed specifically for LLM engineering teams. At a high level, it offers:

  • Comprehensive LLM tracing – Detailed traces for each interaction including prompts, outputs, token counts, latency, and cost
  • Evaluations & experiments – Run systematic evaluations with custom scorers, compare prompts or models, and measure quality over time
  • Real-time analytics – Dashboards for performance, usage patterns, and spend
  • Prompt management – Version, test, and manage your prompts before rolling them out

With TrueFoundry sending OTEL traces directly into Braintrust, you can plug your live production traffic straight into these capabilities.

How the integration works

TrueFoundry AI Gateway supports exporting OpenTelemetry traces to external backends via HTTP. Braintrust exposes an OTEL-compatible endpoint that accepts traces from your applications.

So the flow is:

  1. Client / App → sends LLM requests to TrueFoundry AI Gateway
  2. AI Gateway → forwards the request to the actual model provider (e.g., OpenAI, Anthropic, local model)
  3. AI Gateway OTEL Exporter → forwards traces of these calls to Braintrust OTEL endpoint
  4. Braintrust → ingests traces and makes them available in logs, dashboards, and evaluation tools

You configure this once, and then every LLM request going through TrueFoundry automatically becomes visible inside Braintrust.
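
To make step 1 of the flow concrete, here is a minimal sketch of a client calling a model through the gateway with the OpenAI Python SDK, assuming an OpenAI-compatible gateway endpoint. The base URL, environment variable, and model name are placeholders; your TrueFoundry deployment defines the real values.

```python
# A minimal sketch of step 1: calling a model through TrueFoundry AI Gateway
# via the OpenAI Python SDK (assumes an OpenAI-compatible gateway endpoint).
# The base URL, env var, and model name are placeholders for your own setup.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://your-gateway.truefoundry.example/api/llm",  # hypothetical gateway URL
    api_key=os.environ["TRUEFOUNDRY_API_KEY"],                    # gateway access token
)

response = client.chat.completions.create(
    model="openai-main/gpt-4o",  # provider/model name as registered in the gateway
    messages=[{"role": "user", "content": "Summarize OTEL in one sentence."}],
)
print(response.choices[0].message.content)
```

Note that nothing in the application changes for the Braintrust integration: steps 3 and 4 happen inside the gateway, so the trace export is invisible to client code.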

Prerequisites

Before you begin, make sure you have:

  1. TrueFoundry Account
    • Sign up and set up AI Gateway.
    • You can follow the AI Gateway Quick Start guide in the TrueFoundry docs.
  2. Braintrust Account
    • Create or log into your Braintrust workspace.
  3. Braintrust API Key
    • Generate from your Braintrust account settings.
  4. Braintrust Project ID
    • Create a project in Braintrust and copy the Project ID you want to send traces to.

Keep the API key and Project ID handy – we’ll use them in the OTEL configuration.

Step-by-step: Wiring TrueFoundry AI Gateway to Braintrust

Step 1: Grab your Braintrust credentials

From the Braintrust dashboard:

  1. Log in to your Braintrust account.
  2. Open the project you’ll use for traces.
  3. Copy your API Key from account or workspace settings.
  4. Find and copy the Project ID from the project configuration (usually via a “Copy Project ID” button).

You’ll use these as headers in the OTEL exporter.

Step 2: Open the OTEL configuration in TrueFoundry

In the TrueFoundry dashboard:

  1. Navigate to AI Gateway → Controls → OTEL Config.
  2. Enable the Otel Traces Exporter Configuration toggle.
  3. Select the HTTP Configuration tab.

This is where we’ll point the exporter to Braintrust.

Step 3: Set the Braintrust OTEL endpoint

In the HTTP OTEL config, fill out:

  • Traces endpoint: https://api.braintrust.dev/otel/v1/traces
  • Encoding: Proto

This tells TrueFoundry to send OTEL traces directly to Braintrust’s ingestion endpoint.

💡 Using self-hosted Braintrust?
Replace https://api.braintrust.dev with your own instance URL, e.g.:
https://your-braintrust-instance.example.com/otel/v1/traces

Step 4: Add the required HTTP headers

Still in the OTEL HTTP configuration, click “+ Add Headers” and add:

Header         Value
Authorization  Bearer <YOUR_BRAINTRUST_API_KEY>
x-bt-parent    project_id:<YOUR_PROJECT_ID>

Replace:

  • <YOUR_BRAINTRUST_API_KEY> with the API key from Braintrust
  • <YOUR_PROJECT_ID> with the project ID where you want traces to land

The x-bt-parent header tells Braintrust which “parent” object traces belong to. You can also use other prefixes depending on your setup, for example:

  • project_id:<YOUR_PROJECT_ID>
  • project_name:<YOUR_PROJECT_NAME>
  • experiment_id:<YOUR_EXPERIMENT_ID>

This makes it easy to organize traces by project or experiment.
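
If you want to sanity-check the endpoint and headers before routing production traffic through them, a standalone OpenTelemetry SDK sketch like the one below exercises the same ingestion path the gateway will use. The angle-bracket values are placeholders for your own credentials.

```python
# pip install opentelemetry-sdk opentelemetry-exporter-otlp-proto-http
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Same endpoint and headers as the gateway's OTEL HTTP configuration;
# angle-bracket values are placeholders.
exporter = OTLPSpanExporter(
    endpoint="https://api.braintrust.dev/otel/v1/traces",
    headers={
        "Authorization": "Bearer <YOUR_BRAINTRUST_API_KEY>",
        "x-bt-parent": "project_id:<YOUR_PROJECT_ID>",
    },
)

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

# Emit one test span; it should appear under Logs in the target project.
with trace.get_tracer("otel-smoke-test").start_as_current_span("braintrust-connectivity-check"):
    pass

provider.shutdown()  # flush pending spans before exit
```

If the test span shows up under Logs in the target project, the API key and x-bt-parent value are correct, and the gateway configuration can reuse them verbatim.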

Step 5: Save and deploy the OTEL configuration

Once the endpoint and headers are set:

  1. Click Save in the OTEL configuration.
  2. Ensure your LLM traffic is going through TrueFoundry AI Gateway (i.e., your applications point to the Gateway's base URL rather than directly to a provider).

From this point on, TrueFoundry will automatically export LLM traces to Braintrust for every request routed through the gateway.
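
A quick way to confirm the pipeline end to end is to push a single request through the gateway, then look for the matching trace in Braintrust (covered in the next step). A raw-HTTP sketch, with the gateway URL, token variable, and model name as placeholders:

```python
# Smoke test (sketch): one request through the gateway, then check the
# Braintrust Logs view for the matching trace. URL, token env var, and
# model name below are placeholders for your own deployment.
import os
import requests

resp = requests.post(
    "https://your-gateway.truefoundry.example/api/llm/chat/completions",  # hypothetical
    headers={"Authorization": f"Bearer {os.environ['TRUEFOUNDRY_API_KEY']}"},
    json={
        "model": "openai-main/gpt-4o",
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```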

Step 6: Explore your LLM traces in Braintrust

With traffic flowing, head back to the Braintrust dashboard:

  1. Open your project in Braintrust.
  2. Navigate to the Logs section.
  3. You should start seeing traces from TrueFoundry, including:
    • LLM calls – ChatCompletion, AgentResponse, and other operations
    • Metrics – latency, token usage, error rates, and cost
    • Metadata – models used, routes, custom attributes from TrueFoundry
    • Trace trees – hierarchical spans showing how calls and sub-calls relate

This becomes your single pane of glass for understanding how your LLM app behaves in production.

Self-hosted Braintrust?

If your organization prefers running Braintrust in your own environment:

  • Replace the SaaS endpoint with your self-hosted OTEL URL, such as:
    https://your-braintrust-instance.example.com/otel/v1/traces
  • Keep the same headers (Authorization, x-bt-parent); just make sure the API key and project ID match your self-hosted setup.

The integration pattern stays exactly the same.
