This guide provides instructions for integrating GitHub Copilot with the TrueFoundry AI Gateway.

What is GitHub Copilot?

GitHub Copilot is an AI-powered coding assistant that integrates directly into your editor. It provides intelligent code completions, chat-based coding help, and agent-driven workflows across VS Code, JetBrains IDEs, Visual Studio, Eclipse, and Xcode.

Key Features

  1. Inline Code Suggestions: Context-aware code completions and multi-line suggestions as you type
  2. Copilot Chat: Conversational AI assistant for code explanation, generation, debugging, and refactoring
  3. Agent Mode: Agentic workflows that can autonomously plan, edit files, and run terminal commands to complete complex tasks
  4. Bring Your Own Key (BYOK): Use custom models from any OpenAI-compatible provider — including TrueFoundry AI Gateway

Prerequisites

Before integrating GitHub Copilot with TrueFoundry, ensure you have:
  1. TrueFoundry Account: Create a TrueFoundry account with at least one model provider configured and generate a Personal Access Token by following the instructions in Generating Tokens. For a quick setup guide, see our Gateway Quick Start
  2. GitHub Copilot Subscription: An active GitHub Copilot plan (Free, Pro, Business, or Enterprise)
  3. VS Code: Visual Studio Code with the GitHub Copilot extension installed
The BYOK feature for custom OpenAI-compatible models is currently available natively in VS Code Insiders. For stable VS Code, you can use the community extension OAI Compatible Provider for Copilot. See the Alternative: Stable VS Code section below.

Integration Guide (VS Code Insiders)

1. Get Configuration Details

Get the base URL and model name from your TrueFoundry AI Gateway playground using the unified code snippet:
[Screenshot: TrueFoundry playground showing the unified code snippet with base URL and model name]
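Before configuring the editor, it can help to sanity-check the base URL and token by calling the gateway's OpenAI-compatible chat completions endpoint directly. The sketch below uses only the Python standard library; the URL, token, and model name are placeholders — replace them with the values from your playground:

```python
import json
import urllib.request

# Placeholders — fill in with the details from your TrueFoundry playground.
BASE_URL = "https://{controlPlaneUrl}/api/llm"  # your gateway base URL
API_KEY = "tfy-..."                             # your Personal Access Token

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible /chat/completions request for the gateway."""
    payload = {
        "model": model,  # fully qualified: provider-name/model-name
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("openai-main/gpt-4o", "Hello from the Copilot setup check")
# With real values filled in, urllib.request.urlopen(req) sends the request;
# a 200 response confirms the base URL and token are correct.
```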

2. Add TrueFoundry as a Model Provider

Step 1: Open Manage Models

Open the Copilot Chat panel (Ctrl+Alt+I on Windows/Linux, Ctrl+Cmd+I on macOS), click the model dropdown at the top, and select Manage Models….
Step 2: Add OpenAI Compatible Provider

In the Language Models editor, click Add Models and select OpenAI Compatible from the list of providers.
Step 3: Enter TrueFoundry Gateway Details

When prompted, enter the following details:
  • Base URL: Your TrueFoundry Gateway URL (e.g., https://{controlPlaneUrl}/api/llm)
  • API Key: Your TrueFoundry Personal Access Token
After entering the API key, add the models you want to use. Enter the fully qualified model name from TrueFoundry in provider-name/model-name format (e.g., openai-main/gpt-4o).
Step 4: Enable Models

Check the models you want to make available in the Copilot Chat model picker, then click OK.
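The provider-name/model-name convention from step 3 can be illustrated with a small helper (hypothetical, shown only to make the format concrete):

```python
def split_model_name(fq_name: str) -> tuple[str, str]:
    """Split a fully qualified 'provider-name/model-name' into its two parts."""
    provider, _, model = fq_name.partition("/")
    if not model:
        raise ValueError(f"expected provider-name/model-name, got {fq_name!r}")
    return provider, model

# e.g. "openai-main/gpt-4o" -> provider "openai-main", model "gpt-4o"
provider, model = split_model_name("openai-main/gpt-4o")
```

A bare name like gpt-4o is rejected here — the gateway needs the provider prefix to know where to route the request.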

3. Use TrueFoundry Models in Copilot Chat

Your TrueFoundry models now appear in the model dropdown in Copilot Chat. Select any of them to route requests through the TrueFoundry AI Gateway.
# Ask about your code
> Explain this function and suggest improvements

# Generate code
> Write a REST API endpoint for user authentication

# Debug issues
> Why is this test failing? Suggest a fix
For a model to work in Agent Mode, it must support tool calling. Most large models (GPT-4o, Claude Sonnet, etc.) support this. If a model doesn’t support tool calling, it will only be available in Ask and Edit modes.
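If you are unsure whether a model supports tool calling, one way to probe it is to send a minimal tools request through the gateway. The payload below is a sketch in the OpenAI tools format; the model name and the `get_time` function are placeholders:

```python
import json

# Minimal probe payload — the tool itself is a hypothetical no-op function.
probe_payload = {
    "model": "openai-main/gpt-4o",  # swap in the model you want to check
    "messages": [{"role": "user", "content": "What time is it?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_time",
            "description": "Return the current time",
            "parameters": {"type": "object", "properties": {}},
        },
    }],
}

body = json.dumps(probe_payload)
# POST `body` to {baseUrl}/chat/completions with your token. A model that
# supports tool calling typically answers with a `tool_calls` entry in the
# response; one that doesn't replies with plain text or rejects `tools`.
```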

Alternative: Settings JSON Configuration

You can also configure custom OpenAI-compatible models directly in your VS Code settings.json using the experimental github.copilot.chat.customOAIModels setting:
{
  "github.copilot.chat.customOAIModels": [
    {
      "id": "openai-main/gpt-4o",
      "name": "GPT-4o via TrueFoundry",
      "baseUrl": "https://{controlPlaneUrl}/api/llm",
      "apiKeySettingId": "TrueFoundry"
    }
  ]
}
Replace the placeholders:
  • {controlPlaneUrl} → Your TrueFoundry Control Plane URL
  • openai-main/gpt-4o → Your desired model in provider-name/model-name format
When you open Copilot Chat after adding this setting, VS Code will prompt you to enter the API key for the “TrueFoundry” provider. Enter your TrueFoundry Personal Access Token.
The github.copilot.chat.customOAIModels setting is currently experimental and only functional in VS Code Insiders. The setting exists in stable VS Code but has no effect.

Alternative: Stable VS Code with Community Extension

If you’re using stable VS Code (not Insiders), you can use the community extension OAI Compatible Provider for Copilot to connect to TrueFoundry:
Step 1: Install the Extension

Search for “OAI Compatible Provider for Copilot” (by johnny-zhao) in the VS Code Extensions marketplace (Ctrl+Shift+X) and install it.
Step 2: Configure the Extension

Add the following to your settings.json:
{
  "oaicopilot.baseUrl": "https://{controlPlaneUrl}/api/llm",
  "oaicopilot.models": [
    {
      "id": "openai-main/gpt-4o",
      "configId": "gpt-4o-tfy",
      "owned_by": "truefoundry"
    }
  ]
}
Replace {controlPlaneUrl} with your TrueFoundry Control Plane URL and update the model id to match your TrueFoundry model.
Step 3: Activate the Model

Open Command Palette (Ctrl+Shift+P), run Chat: Manage Language Models, select OAI Compatible, and enter your TrueFoundry Personal Access Token as the API key. Check the models you want to enable.

Enterprise / Organization BYOK

For GitHub Copilot Business and Enterprise users, organization and enterprise admins can add TrueFoundry as an OpenAI-compatible provider at the org or enterprise level. This makes TrueFoundry-routed models available to all team members through the Copilot Chat model picker.
Step 1: Navigate to Copilot Settings

Go to your GitHub organization or enterprise settings, then navigate to Copilot > AI controls > Configure allowed models > Custom models tab.
Step 2: Add TrueFoundry API Key

Click Add API key and configure:
  • Provider: Select OpenAI-compatible providers
  • Name: Enter a descriptive name (e.g., “TrueFoundry AI Gateway”)
  • API Key: Enter your TrueFoundry Personal Access Token
  • Available models: Add your desired models using the fully qualified provider-name/model-name format
Step 3: Configure Access

Choose which organizations can access the models and save the configuration. The models will appear in the Copilot Chat model picker for all enabled users, listed under your enterprise or organization name.
With Enterprise BYOK, usage is billed directly through your TrueFoundry account and does not count against GitHub Copilot’s built-in request quotas. This lets teams leverage existing contracts and credits.

Load Balancing Configuration (Optional)

If your models or Copilot setup requires standard model names (e.g., gpt-4o instead of openai-main/gpt-4o), create a routing configuration in TrueFoundry to map standard names to fully qualified model names:
name: copilot-routing-config
type: gateway-load-balancing-config
rules:
  - id: copilot-gpt4o-routing
    type: weight-based-routing
    when:
      models:
        - gpt-4o
    load_balance_targets:
      - target: openai-main/gpt-4o
        weight: 100
This ensures that requests for gpt-4o are automatically routed to openai-main/gpt-4o through the TrueFoundry Gateway.
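As an illustration only (not the gateway's actual implementation), the weight-based rule above behaves roughly like this:

```python
import random

# Mirror of the routing rule: requests for "gpt-4o" resolve to weighted targets.
ROUTING_RULES = {
    "gpt-4o": [("openai-main/gpt-4o", 100)],
}

def resolve_model(requested: str) -> str:
    """Pick a load-balance target for a requested model name."""
    targets = ROUTING_RULES.get(requested)
    if not targets:
        return requested  # no matching rule: pass the name through unchanged
    names = [name for name, _ in targets]
    weights = [weight for _, weight in targets]
    return random.choices(names, weights=weights, k=1)[0]

resolve_model("gpt-4o")       # -> "openai-main/gpt-4o" (single target, weight 100)
resolve_model("other-model")  # -> "other-model" (no rule, passed through)
```

With multiple targets and non-trivial weights, the same rule shape would split traffic proportionally across providers.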