Adding Models
This section explains how to add AWS Bedrock models and configure the required access controls.

1. Navigate to AWS Bedrock Models in AI Gateway
From the TrueFoundry dashboard, navigate to AI Gateway > Models and select AWS Bedrock.

2. Add AWS Bedrock Account Name and Collaborators
Give the Bedrock account a unique name; it is used to reference the account's models later, in the form @providername/@modelname. Add collaborators to your account: you can decide which users/teams have access to the models in the account (User role) and who can add/edit/remove models in this account (Manager role). You can read more about access control here.

3. Add Region and Authentication
Select the default AWS region for the models in this account. The account-level region serves as the default for all models unless explicitly overridden at the model level. Provide the authentication details that the gateway uses to access the Bedrock models. TrueFoundry supports AWS Access Key/Secret Key, Assume Role, and API Key based authentication. You can read below how to generate the access/secret keys, roles, or API keys.
Get AWS Authentication Details
Required IAM Policy
First, create the IAM policy that grants permission to invoke Bedrock models. This policy can be attached to an IAM user (for access key or API key authentication) or an IAM role (for assumed-role authentication). The following policy grants permission to invoke all models in your available regions (to check the list of available regions for different models, refer to AWS Bedrock):
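A policy of that shape, as a sketch (the broad Resource pattern is illustrative — scope it down to specific models or regions if you need tighter control):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "InvokeBedrockModels",
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "arn:aws:bedrock:*::foundation-model/*"
    }
  ]
}
```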
Using AWS Access Key and Secret
- Create an IAM user (or choose an existing IAM user) following these steps.
- Attach the IAM policy created above to this user.
- Create an access key for this user as per this doc.
- Use this access key and secret while adding the provider account to authenticate requests to the Bedrock model.
Using Assumed Role
- Create an IAM role in your AWS account that has access to Bedrock. Attach the IAM policy with Bedrock permissions (shown above) to this role.
- Configure the trust policy for this role to allow the gateway role to assume it. Use the appropriate role ARN based on your deployment:
  - For TrueFoundry SaaS, the gateway role ARN is: arn:aws:iam::416964291864:role/tfy-ctl-production-ai-gateway-deps
  - For on-prem deployments, your gateway role ARN will look like: arn:aws:iam::<your-aws-account-id>:role/<account-prefix>-truefoundry-deps
- Replace the Principal AWS ARN in the trust policy with the appropriate gateway role ARN for your deployment type (SaaS or on-prem).
You can optionally add an external ID condition to the trust policy for additional security. If you use an external ID, make sure to provide the same external ID when creating the Bedrock model integration in TrueFoundry.
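A trust policy of this shape (a sketch — substitute the gateway role ARN for your deployment, and drop the Condition block if you are not using an external ID) allows the gateway role to assume your role:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::416964291864:role/tfy-ctl-production-ai-gateway-deps"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "<your-external-id>"
        }
      }
    }
  ]
}
```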
- Read more about how assumed roles work here.
Using API Key
- Navigate to the AWS Management Console and open the Amazon Bedrock console at https://console.aws.amazon.com/bedrock.
- In the left navigation pane, select API keys.
- Choose Generate long-term API keys in the Long-term API keys tab.
- In the API key expiration section, choose a time after which the key will expire.
- Choose Generate and copy the API key value.
- Use this API key while adding the provider account to authenticate requests to the Bedrock model.
For more information on generating API keys, see the AWS Bedrock API key generation documentation. For details on required permissions, refer to the IAM credentials for Bedrock documentation.
4. Add Models
Select the models from the list that you want to add. You can use Select All to select all the models. If the model you are looking for is not present in the options, you can add it using + Add Model at the end of the list. TrueFoundry AI Gateway supports all text and image models in Bedrock. The complete list of models supported by Bedrock can be found here.
Inference
After adding the models, you can perform inference using an OpenAI-compatible API via the Playground or integrate with your own application.
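A chat-completion request to the gateway follows the OpenAI wire format. The sketch below only builds the request without sending it — the base URL, API key, and model name are placeholders, not real endpoints; substitute your own gateway host and the @account/model name you configured:

```python
import json
import urllib.request

# Placeholders -- substitute your gateway host, API key, and model name.
GATEWAY_BASE_URL = "https://your-gateway.example.com/api/llm"
API_KEY = "your-truefoundry-api-key"

payload = {
    # Models are addressed as @<provider-account>/<model-name>.
    "model": "@my-bedrock-account/anthropic-claude-3-5-sonnet",
    "messages": [{"role": "user", "content": "Hello from the gateway"}],
}

request = urllib.request.Request(
    f"{GATEWAY_BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# urllib.request.urlopen(request) would send it; omitted here.
print(request.get_method(), request.full_url)
# -> POST https://your-gateway.example.com/api/llm/chat/completions
```

The same request works through any OpenAI-compatible client by pointing its base URL at the gateway.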
FAQ
How to override the default cost of models?
In case you have custom pricing for your models, you can override the default cost by clicking the Edit Model button and then choosing the Private Cost Metric option.
Can I add models from different regions in a single Bedrock integration?
Yes, you can add models from different regions: provide a top-level default region for the account and override it at the model level where needed.

How to integrate a Bedrock cross-region inference model?
What is Cross-Region Inference?
Cross-Region Inference dynamically routes your inference requests across multiple AWS regions to optimize performance and handle traffic bursts. Bedrock selects the best region based on load, latency, and availability. Learn more in the AWS Bedrock Cross-Region Inference documentation.

Key Difference: Inference Profile ID vs Model ID
To use cross-region inference, you must use an Inference Profile ID instead of a regular model ID. Inference profiles define the foundation model and the AWS regions where requests can be routed.
- Regular Model ID: anthropic.claude-3-5-sonnet-20240620-v1:0 (single region)
- Inference Profile ID: us.anthropic.claude-3-5-sonnet-20240620-v1:0 (cross-region routing)

There are two kinds of inference profiles:
- System-defined geographic profiles: Use geographic prefixes (us., eu., apac.) followed by the model ID (e.g., us.anthropic.claude-3-5-sonnet-20240620-v1:0). The prefix indicates routing within that geography.
- Custom inference profiles: Use the full ARN format (e.g., arn:aws:bedrock:us-east-1:123456789012:inference-profile/my-profile).

Important: Some models in AWS Bedrock are exclusively accessible through cross-region inference profiles and cannot be invoked directly using their standard foundation model IDs. For these models, you must use the inference profile ID (e.g., us.anthropic.claude-3-7-sonnet-20250219-v1:0) instead of the regular model ID. To identify which models require inference profiles, refer to the Supported Regions and models for inference profiles, which provides a complete list of models and their inference profile availability.

To integrate a cross-region inference model:
1. Add the model using its inference profile ID (e.g., us.anthropic.claude-3-5-sonnet-20240620-v1:0) instead of the regular model ID. If it's not in the dropdown, use + Add Model and enter it manually.
2. Configure IAM permissions for ALL destination regions: This is critical. When Bedrock routes to a different region, your IAM role/access key must have permissions in that region. You must grant permissions for both the inference profile and the foundation model in all destination regions. Update your IAM policy to use * for the region to allow access across all regions.
3. Check Service Control Policies (SCPs): If your organization uses SCPs to restrict region access, ensure they allow access to all destination regions in your inference profile. Blocking any destination region will prevent cross-region inference from working.
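Such a region-wildcard policy might look like the following sketch (YOUR-AWS-ACCOUNT-ID is a placeholder; the two Resource patterns cover the foundation models and the inference profiles):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CrossRegionBedrockInvoke",
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": [
        "arn:aws:bedrock:*::foundation-model/*",
        "arn:aws:bedrock:*:YOUR-AWS-ACCOUNT-ID:inference-profile/*"
      ]
    }
  ]
}
```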
Replace YOUR-AWS-ACCOUNT-ID in the policy above with your actual AWS account ID. The * in the region position allows access across all regions.

Troubleshooting
Request fails with 'Access Denied' error
Cause: Missing IAM permissions in the destination region where Bedrock routed the request.
Solution:
- Ensure your IAM policy grants Bedrock permissions across all regions (use * in the region part of the ARN)
- For geographic profiles, grant permissions in both source and destination regions
- Check if Service Control Policies (SCPs) are blocking access to certain regions

Requests always go to the same region
Cause: You're using a regular model ID instead of an inference profile ID.
Solution: Use the inference profile ID format (e.g., us.anthropic.claude-3-5-sonnet-20240620-v1:0) instead of the regular model ID.

Learn More
For detailed AWS documentation, see:
- Cross-Region inference overview - How cross-region inference works
- Geographic cross-Region inference - Geographic boundary routing and IAM policy requirements
- Using inference profiles - How to use inference profiles in API calls
- Supported models and regions - List of models that support cross-region inference