TrueFoundry integration with Braintrust
In this post, we’ll walk through:
- What Braintrust is and why it matters for LLM teams
- How TrueFoundry AI Gateway exports rich traces via OpenTelemetry (OTEL)
- A step-by-step guide to wiring AI Gateway and Braintrust
Why pair TrueFoundry AI Gateway with Braintrust?
Modern LLM stacks are getting increasingly complex:
- Multiple models (OpenAI, Anthropic, self-hosted, etc.)
- Agents and tools calling each other
- Prompt templates, rerankers, retrievers, business logic in between
TrueFoundry AI Gateway gives you a unified control plane for all your LLM traffic – routing, authentication, rate limiting, cost tracking, caching, and more – across providers and models.
Braintrust is an LLM engineering platform that lets you trace, evaluate, and experiment on top of those calls. It captures detailed traces of your LLM interactions (inputs, outputs, latency, token usage, costs), and layers powerful evaluation and analytics on top.
By exporting OTEL traces from TrueFoundry AI Gateway to Braintrust, you get:
- End-to-end visibility: Every LLM call, across models and providers, in one place
- Rich context: Prompts, responses, metadata, latency, token usage, costs
- Production-grade evaluation: Run experiments and automatic evaluations on real traffic
- Faster iteration cycles: Ship changes with confidence instead of guessing
Quick primer: What is Braintrust?
Braintrust is designed specifically for LLM engineering teams. At a high level, it offers:
- Comprehensive LLM tracing – Detailed traces for each interaction including prompts, outputs, token counts, latency, and cost
- Evaluations & experiments – Run systematic evaluations with custom scorers, compare prompts or models, and measure quality over time
- Real-time analytics – Dashboards for performance, usage patterns, and spend
- Prompt management – Version, test, and manage your prompts before rolling them out
With TrueFoundry sending OTEL traces directly into Braintrust, you can plug your live production traffic straight into these capabilities.
How the integration works
TrueFoundry AI Gateway supports exporting OpenTelemetry traces to external backends via HTTP. Braintrust exposes an OTEL-compatible endpoint that accepts traces from your applications.
So the flow is:
- Client / App → sends LLM requests to TrueFoundry AI Gateway
- AI Gateway → forwards the request to the actual model provider (e.g., OpenAI, Anthropic, local model)
- AI Gateway OTEL Exporter → forwards traces of these calls to Braintrust OTEL endpoint
- Braintrust → ingests traces and makes them available in logs, dashboards, and evaluation tools
You configure this once, and then every LLM request going through TrueFoundry automatically becomes visible inside Braintrust.
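For example, here is a minimal Python sketch of the client side of this flow, using the OpenAI SDK pointed at the gateway. The base URL, API key, and model identifier below are placeholders; substitute the values from your own TrueFoundry setup:

```python
# Sketch: send an LLM request through TrueFoundry AI Gateway using the
# OpenAI SDK. The base URL, key, and model name are hypothetical placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://your-gateway.truefoundry.example.com/api/llm",  # hypothetical gateway URL
    api_key="<YOUR_TRUEFOUNDRY_API_KEY>",  # placeholder
)

response = client.chat.completions.create(
    model="openai-main/gpt-4o-mini",  # hypothetical gateway model identifier
    messages=[{"role": "user", "content": "Hello from behind the gateway!"}],
)
print(response.choices[0].message.content)
```

Because the request goes through the gateway rather than straight to the provider, the OTEL exporter sees it and ships the trace to Braintrust with no changes to your application code.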

Prerequisites
Before you begin, make sure you have:
- TrueFoundry Account
  - Sign up and set up AI Gateway.
  - You can follow the AI Gateway Quick Start guide in the TrueFoundry docs.
- Braintrust Account
  - Create or log into your Braintrust workspace.
- Braintrust API Key
  - Generate it from your Braintrust account settings.
- Braintrust Project ID
  - Create a project in Braintrust and copy the Project ID you want to send traces to.
Keep the API key and Project ID handy – we’ll use them in the OTEL configuration.
Step-by-step: Wiring TrueFoundry AI Gateway to Braintrust
Step 1: Grab your Braintrust credentials
From the Braintrust dashboard:
- Log in to your Braintrust account.
- Open the project you’ll use for traces.
- Copy your API Key from account or workspace settings.
- Find and copy the Project ID from the project configuration (usually via a “Copy Project ID” button).
You’ll use these as headers in the OTEL exporter.
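These two values map directly onto the exporter's HTTP headers. A quick sketch of the mapping (the environment variable names here are our own convention, not required by either product):

```python
import os

# Hypothetical env var names; store the Step 1 credentials however you prefer.
api_key = os.environ["BRAINTRUST_API_KEY"]
project_id = os.environ["BRAINTRUST_PROJECT_ID"]

# The header shapes Braintrust's OTEL endpoint expects (configured in Step 4):
otel_headers = {
    "Authorization": f"Bearer {api_key}",
    "x-bt-parent": f"project_id:{project_id}",
}
```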
Step 2: Open the OTEL configuration in TrueFoundry
In the TrueFoundry dashboard:
- Navigate to AI Gateway → Controls → OTEL Config.
- Enable the Otel Traces Exporter Configuration toggle.
- Select the HTTP Configuration tab.

This is where we’ll point the exporter to Braintrust.
Step 3: Set the Braintrust OTEL endpoint
In the HTTP OTEL config, fill out:
- Traces endpoint: https://api.braintrust.dev/otel/v1/traces
- Encoding: Proto
This tells TrueFoundry to send OTEL traces directly to Braintrust’s ingestion endpoint.
💡 Using self-hosted Braintrust?
Replace https://api.braintrust.dev with your own instance URL, e.g. https://your-braintrust-instance.example.com/otel/v1/traces
Step 4: Add the required HTTP headers
Still in the OTEL HTTP configuration, click “+ Add Headers” and add:
| Header | Value |
| --- | --- |
| Authorization | Bearer <YOUR_BRAINTRUST_API_KEY> |
| x-bt-parent | project_id:<YOUR_PROJECT_ID> |
Replace:
- <YOUR_BRAINTRUST_API_KEY> with the API key from Braintrust
- <YOUR_PROJECT_ID> with the project ID where you want traces to land
The x-bt-parent header tells Braintrust which “parent” object traces belong to. You can also use other prefixes depending on your setup, for example:
- project_id:<YOUR_PROJECT_ID>
- project_name:<YOUR_PROJECT_NAME>
- experiment_id:<YOUR_EXPERIMENT_ID>
This makes it easy to organize traces by project or experiment.
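Before relying on the gateway exporter, you can sanity-check the endpoint and headers by pushing a single test span from Python with the standard OpenTelemetry SDK. This is a sketch, assuming `opentelemetry-sdk` and `opentelemetry-exporter-otlp-proto-http` are installed; the span name is arbitrary:

```python
# Sketch: send one test span to Braintrust's OTEL endpoint with the same
# endpoint/headers configured in Steps 3-4, to confirm credentials work.
# Requires: pip install opentelemetry-sdk opentelemetry-exporter-otlp-proto-http
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

exporter = OTLPSpanExporter(
    endpoint="https://api.braintrust.dev/otel/v1/traces",
    headers={
        "Authorization": "Bearer <YOUR_BRAINTRUST_API_KEY>",  # placeholder
        "x-bt-parent": "project_id:<YOUR_PROJECT_ID>",        # placeholder
    },
)
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("braintrust-connectivity-check")
with tracer.start_as_current_span("test-span"):
    pass  # an empty span is enough to verify ingestion

provider.force_flush()  # flush the batch processor before the script exits
```

If the test span shows up in your Braintrust project's logs, traces exported by the gateway will land in the same place.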
Step 5: Save and deploy the OTEL configuration
Once the endpoint and headers are set:
- Click Save in the OTEL configuration.
- Ensure your LLM traffic is going through TrueFoundry AI Gateway (e.g., apps point to the Gateway’s URL).
From this point on, TrueFoundry will automatically export LLM traces to Braintrust for every request routed through the gateway.
Step 6: Explore your LLM traces in Braintrust
With traffic flowing, head back to the Braintrust dashboard:
- Open your project in Braintrust.
- Navigate to the Logs section.
- You should start seeing traces from TrueFoundry, including:
- LLM calls – ChatCompletion, AgentResponse, and other operations
- Metrics – latency, token usage, error rates, and cost
- Metadata – models used, routes, custom attributes from TrueFoundry
- Trace trees – hierarchical spans showing how calls and sub-calls relate
This becomes your single pane of glass for understanding how your LLM app behaves in production.
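With real traffic in Braintrust, you can also run systematic evaluations via the Braintrust SDK. Below is a minimal sketch (requires `pip install braintrust autoevals`); the project name, dataset, task, and scorer are illustrative stand-ins, not values from this integration:

```python
# Sketch: a minimal Braintrust evaluation. Dataset, task, and scorer are
# illustrative; swap in your own prompts, LLM calls, and scoring logic.
from braintrust import Eval
from autoevals import Levenshtein

Eval(
    "My Gateway Project",  # hypothetical Braintrust project name
    data=lambda: [
        {
            "input": "What gateway routes our LLM traffic?",
            "expected": "TrueFoundry AI Gateway",
        },
    ],
    task=lambda input: "TrueFoundry AI Gateway",  # replace with a real LLM call
    scores=[Levenshtein],
)
```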

Self-hosted Braintrust?
If your organization prefers running Braintrust in your own environment:
- Replace the SaaS endpoint with your self-hosted OTEL URL, such as: https://your-braintrust-instance.example.com/otel/v1/traces
- Keep the same headers (Authorization, x-bt-parent); just make sure the API key and project ID match your self-hosted setup.
The integration pattern stays exactly the same.
Built for Speed: ~10ms Latency, Even Under Load
Blazingly fast way to build, track and deploy your models!
- Handles 350+ RPS on just 1 vCPU — no tuning needed
- Production-ready with full enterprise support
TrueFoundry AI Gateway delivers ~3–4 ms latency, handles 350+ RPS on a single vCPU, scales horizontally with ease, and is production-ready. LiteLLM, by contrast, suffers from higher latency, struggles beyond moderate RPS, lacks built-in scaling, and is best suited to light or prototype workloads.


