Low-code automation platforms are no longer just for weekend hacks; they are the new engine of business productivity. Leading this charge is n8n, which has exploded in popularity to become one of the top open-source repositories in the world. Its intuitive, node-based canvas empowers teams to build complex AI-powered automations with breathtaking speed.
But this success presents a paradox. As n8n workflows graduate from individual projects to mission-critical business processes, they often create a governance blind spot for enterprise IT, security, and finance teams. How do you control costs when every workflow can call a different model vendor? How do you ensure security when API keys are scattered across dozens of canvases?
This is where the integration with TrueFoundry's AI Gateway becomes essential. It allows your organization to embrace the full creative power of n8n while wrapping every execution in the enterprise-grade controls you need for cost, security, and reliability.
Why This Integration Matters Now: A Real-World Example
Imagine a common use case: automating customer support.
In an n8n workflow, this might look simple. An incoming email is passed to an LLM to draft a reply. The generated text is then sent to a second sentiment-analysis model to flag urgent cases for human review. It’s fast, efficient, and empowers the support team.
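To make the setup concrete, here is a rough sketch, outside of n8n, of what that pipeline looks like when each step talks to its vendor directly. Everything in it is a placeholder: the vendor URLs, model names, and environment variables are hypothetical, and the request shape assumes a generic OpenAI-style chat-completions API even though real vendors differ in paths and auth. That difference is exactly the sprawl described next.

```typescript
// Hypothetical sketch of the support pipeline calling two vendors directly.
// The endpoints, keys, and model names are placeholders, not real vendor APIs;
// the point is that each vendor means another credential living inside n8n.

type ChatMessage = { role: "system" | "user"; content: string };

async function vendorChat(
  baseUrl: string,
  apiKey: string,
  model: string,
  messages: ChatMessage[],
): Promise<string> {
  const res = await fetch(`${baseUrl}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${apiKey}` },
    body: JSON.stringify({ model, messages }),
  });
  if (!res.ok) throw new Error(`Vendor call failed: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}

async function handleSupportEmail(email: string): Promise<{ reply: string; urgent: boolean }> {
  // Step 1: draft a reply with vendor A (API key #1, invoice #1).
  const reply = await vendorChat(
    "https://api.vendor-a.example/v1", process.env.VENDOR_A_KEY!, "vendor-a-chat-model",
    [
      { role: "system", content: "Draft a polite reply to this customer email." },
      { role: "user", content: email },
    ],
  );

  // Step 2: flag urgency with vendor B (API key #2, invoice #2, separate audit trail).
  const sentiment = await vendorChat(
    "https://api.vendor-b.example/v1", process.env.VENDOR_B_KEY!, "vendor-b-sentiment-model",
    [
      { role: "system", content: "Reply with URGENT or OK for this customer email." },
      { role: "user", content: email },
    ],
  );

  return { reply, urgent: sentiment.trim().toUpperCase().startsWith("URGENT") };
}
```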
But when these model calls are routed directly to vendors, chaos ensues for the platform teams:
- Finance sees multiple invoices from different AI vendors with no clear way to attribute costs or enforce budgets.
- Security loses a centralized audit trail, making compliance reviews for SOC 2 or HIPAA a nightmare.
- Engineering faces a rigid system. Swapping the sentiment model for a more cost-effective or better-performing alternative requires manually updating every n8n workflow that uses it.
By routing all n8n traffic through the TrueFoundry AI Gateway, the entire picture changes. The Gateway acts as a single, intelligent control plane. Support agents keep the drag-and-drop speed they love, while the organization gains complete governance and cost control—closing the blind spots for every single LLM call.
Connecting n8n to the Gateway: A 3-Step Guide
Wiring n8n into the Gateway is a one-time setup that takes minutes.
Prerequisites:
- A personal access token from your TrueFoundry account.
- Your organization's Gateway base URL (e.g., https://<your-org>.truefoundry.cloud/api/llm).
- An active n8n instance (either self-hosted or on n8n Cloud).
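Before opening n8n, it can be worth confirming that the token and base URL from the list above work together. The sketch below is an assumption, not official TrueFoundry client code: it presumes the Gateway accepts OpenAI-style chat-completion requests under the base URL (which is what the n8n credential will send), and `TFY_BASE_URL` and `TFY_API_KEY` are placeholder names for your own values.

```typescript
// Hypothetical connectivity check: one chat completion sent through the Gateway.
// TFY_BASE_URL and TFY_API_KEY are placeholder env vars; the model ID comes from
// your TrueFoundry dashboard.

async function pingGateway(): Promise<void> {
  const res = await fetch(`${process.env.TFY_BASE_URL}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.TFY_API_KEY}`, // personal access token
    },
    body: JSON.stringify({
      model: "openai-main/gpt-4o", // any model ID visible in your Gateway
      messages: [{ role: "user", content: "ping" }],
    }),
  });
  console.log(res.ok ? "Gateway reachable" : `Gateway returned ${res.status}`);
}

pingGateway().catch(console.error);
```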

Steps:
- Create a Credential in n8n: Navigate to the Credentials section in your n8n instance and click Create Credentials. Search for and select your desired model provider (e.g., OpenAI, Anthropic).

- Configure the Endpoint: In the credential configuration screen, paste your TrueFoundry Gateway URL into the Base URL field and your TrueFoundry access token into the API Key field.
(Note: n8n may show a warning that it can’t list models from the base URL. This is expected and simply confirms that calls are being routed through the Gateway.)

- Specify Your Model in the Workflow: Drop a chat node (e.g., OpenAI Chat) onto your workflow canvas. In the node settings, set Model selection to By ID and paste the specific model ID from your TrueFoundry dashboard (e.g., openai-main/gpt-4o or groq/gemma-7b-it).

That’s it. Run your workflow. Every LLM request from that point forward will be routed through the TrueFoundry Gateway, giving you immediate observability and policy enforcement.
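For comparison with the direct-to-vendor sketch earlier, here is roughly what the same support pipeline looks like once everything flows through the Gateway, again assuming an OpenAI-compatible chat-completions route under the base URL rather than quoting an official client library. One credential, one endpoint, and provider-prefixed model IDs replace the per-vendor sprawl.

```typescript
// Hypothetical sketch: the same support pipeline, now routed through the Gateway.
// One base URL and one token for both calls; models are chosen by Gateway model ID.

const GATEWAY_URL = process.env.TFY_BASE_URL!;   // e.g. https://<your-org>.truefoundry.cloud/api/llm
const GATEWAY_TOKEN = process.env.TFY_API_KEY!;  // TrueFoundry personal access token

async function gatewayChat(model: string, system: string, user: string): Promise<string> {
  const res = await fetch(`${GATEWAY_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${GATEWAY_TOKEN}` },
    body: JSON.stringify({
      model,
      messages: [
        { role: "system", content: system },
        { role: "user", content: user },
      ],
    }),
  });
  if (!res.ok) throw new Error(`Gateway call failed: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}

async function handleSupportEmail(email: string): Promise<{ reply: string; urgent: boolean }> {
  const reply = await gatewayChat(
    "openai-main/gpt-4o",
    "Draft a polite reply to this customer email.",
    email,
  );
  // Swapping the sentiment model is a one-string change; the credential stays the same.
  const sentiment = await gatewayChat(
    "groq/gemma-7b-it",
    "Reply with URGENT or OK for this customer email.",
    email,
  );
  return { reply, urgent: sentiment.trim().toUpperCase().startsWith("URGENT") };
}
```

In n8n itself, the equivalent of gatewayChat is simply the chat node you configured above; the point is that every call, whether it originates in a workflow or a script, now lands on the same control plane.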
The Enterprise Dividend: What Your Teams Gain Instantly
With that single credential change, finance gets one place to attribute costs and enforce budgets, security gets a centralized audit trail for every LLM call, and engineering can manage model routing at the Gateway instead of updating every workflow by hand.
And this is just the beginning. Over the next quarter, the TrueFoundry team plans to deepen the integration to further streamline the builder experience. Your feedback is crucial in guiding the roadmap, so please reach out with your feature requests.
Final Thoughts
Low-code platforms succeed when they remove friction for builders without creating new operational headaches for the organization. By routing n8n through the TrueFoundry AI Gateway, you get the best of both worlds: the drag-and-drop speed your creators demand and the enterprise-grade controls your platform, security, and finance teams already rely on.
Grab your token, point n8n at the Gateway, and start shipping AI-powered workflows—safely, predictably, and at scale.
