Initial Setup Requirement
Important: When your organization is first invited to elvex, the person who receives the initial invitation from the elvex team must set up at least one AI provider with a valid API key before anyone can access elvex. Until this initial provider is configured, elvex will not be accessible to any users in your organization.
After the initial AI provider is set up, additional providers can be added at any time through Settings by users with Admin permissions.
Supported AI Providers
elvex currently supports the following AI providers:
OpenAI
Azure OpenAI
Azure AI Foundry
Anthropic
Google Gemini
Cohere
Mistral
AWS Bedrock
xAI (Grok)
Additional providers will be supported in the future.
Who Can Add AI Providers
Only elvex Admin users have permissions to add AI providers.
Step-by-Step Guide to Adding an AI Provider
Navigate to Settings by clicking on "Settings" from the left navigation menu
Select AI Providers in the Settings menu
Click Add a Provider to begin adding a new provider
Enter Provider Details by filling in the required fields:
Provider Name: If left blank, this field will default to the name of the provider (e.g., OpenAI, Anthropic, etc.). You can add a more descriptive name if needed
API Key: The API key provided by the AI provider
Create a New Agent to test the integration; a simple passthrough agent to GPT-4o will suffice. See the elvex documentation on creating agents for details
Send a Test Message using the newly created agent to ensure that the AI provider is functioning correctly
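Before adding a key to elvex, you can sanity-check it directly against the provider. A minimal sketch, assuming an OpenAI-style API (the /v1/models endpoint and Bearer-auth header follow OpenAI's public REST API; substitute your provider's equivalent):

```python
import urllib.error
import urllib.request

def build_key_check_request(api_key: str) -> urllib.request.Request:
    """Build (but do not send) an authenticated GET /v1/models request."""
    return urllib.request.Request(
        "https://api.openai.com/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

def check_key(api_key: str) -> bool:
    """Return True if the provider accepts the key (HTTP 200)."""
    try:
        with urllib.request.urlopen(build_key_check_request(api_key)) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False
```

If check_key returns False, fix the key with the provider before troubleshooting anything inside elvex.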
Finding API Keys for Common Providers
For instructions on how to create an API key, refer to the documentation for your chosen provider.
Important Notes on Generating an API Key from an AI Provider
Read & Write Access: elvex needs read and write access to connect with your chosen AI provider. If the API key you generate does not grant read and write access, the connection between elvex and that provider will fail.
Fund Your Provider Account: The account you generate the API key from must be funded (typically by attaching a valid credit card); otherwise, API calls made with the key will fail with a billing or quota error.
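These two requirements usually surface as distinct HTTP status codes. A rough mapping, based on OpenAI-style APIs (other providers may use different codes):

```python
def diagnose_status(status: int) -> str:
    """Translate a provider HTTP status into a likely setup problem."""
    if status == 200:
        return "ok"
    if status == 401:
        return "key rejected: regenerate the key and check its permissions"
    if status == 429:
        return "quota or billing problem: confirm the account is funded"
    return f"unexpected status {status}: consult the provider's docs"
```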
Getting AWS Bedrock Credentials and Model ID
If you wish to use AWS Bedrock to host your models, we recommend following the AWS Bedrock Getting Started documentation.
For a quick step-by-step tutorial, follow these steps to get the Access Key ID, Secret Access Key, and Model ID you'll need to use AWS Bedrock.
1. Create an IAM User (Programmatic Access)
Go to the AWS Management Console → IAM → Users → Create user
Enable Programmatic access (this generates access keys)
Attach a policy that allows Bedrock use:
For testing: AmazonBedrockFullAccess
For production: create a least-privilege policy with only the actions you need (e.g., bedrock:InvokeModel, bedrock:ListFoundationModels)
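A sketch of such a least-privilege policy, expressed here as a Python dict for clarity. The action names are real Bedrock IAM actions; in production you would also scope "Resource" down to the specific model ARNs you use:

```python
import json

least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:ListFoundationModels",
            ],
            # Narrow this to specific model ARNs for production use
            "Resource": "*",
        }
    ],
}

# Paste the JSON output into the IAM policy editor
print(json.dumps(least_privilege_policy, indent=2))
```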
2. Create (or View) the Access Keys
After creating the IAM user, open the user page
Go to the Security credentials tab → Create access key
Copy and save both the Access Key ID and Secret Access Key
Store these access keys securely for later use
Note: The Secret Access Key is only shown once. If lost, you must generate a new one.
3. Enable Model Access
In the AWS Console, go to Amazon Bedrock
Navigate to Model access (under Bedrock configurations)
Click Modify model access and request access for the models you need
Accept any required agreements (e.g., provider EULAs)
Wait for access to be granted (usually a few minutes)
4. Find the Model ID
Go to the Bedrock console → Base models list
Click a model to view its Model ID
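Bedrock model IDs follow a recognizable shape, e.g. anthropic.claude-3-5-sonnet-20240620-v1:0: a lowercase provider prefix, a dotted model name, and usually a ":version" suffix. A quick sanity check (the pattern is an approximation, not an official grammar) before pasting the ID into elvex:

```python
import re

# Approximate shape of a Bedrock model ID: provider prefix, dot,
# model name, optional ":version" suffix.
_MODEL_ID = re.compile(r"^[a-z0-9-]+\.[A-Za-z0-9.-]+(:[A-Za-z0-9:.-]+)?$")

def looks_like_bedrock_model_id(model_id: str) -> bool:
    return bool(_MODEL_ID.match(model_id))
```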
5. Set up an elvex Provider
Go to elvex Settings → AI Providers
Click Add a Provider
Choose Bedrock
Enter an optional Name, your AWS Access Key ID, AWS Secret Access Key, and Model ID
Click Add
You will now be able to choose this model as a provider in your elvex agents!
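If you later want to verify the credentials outside elvex, it helps to know the request body shape the Bedrock runtime expects. For Anthropic models it follows Bedrock's Anthropic messages schema (the actual call would go through the AWS SDK's invoke_model, not shown here):

```python
import json

def anthropic_bedrock_body(prompt: str, max_tokens: int = 256) -> str:
    """JSON body for invoking an Anthropic model via Bedrock."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })
```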
Adding Azure OpenAI
When adding Azure OpenAI, you will need to provide specific configuration details from your Azure portal in addition to the standard fields.
Select Azure OpenAI from the Provider dropdown menu
Enter a name to help identify this provider (if left blank, it defaults to "OpenAI")
Enter the Endpoint URL for your resource
This typically follows the format:
https://{resource-name}.openai.azure.com
Enter your API Key
You can find this value in the Keys & Endpoint section when examining your resource from the Azure portal
Enter the Deployment Name
This value corresponds to the custom name you chose for your deployment when you deployed a model
You can find this value under Resource Management > Deployments in the Azure portal or alternatively under Management > Deployments in Azure OpenAI Studio
Select a Model from the dropdown menu
Note: This model selector is for display purposes only. Each deployment has a model associated with it on the Azure side. The actual model used by the agent will be whatever model is configured in your Azure deployment settings, regardless of what is selected here
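To see how the fields above fit together, here is how the endpoint, deployment name, and an API version combine into the REST URL that is actually called. The path shape and api-version query parameter follow Azure OpenAI's public REST API; the api-version shown is an example value:

```python
def azure_openai_chat_url(resource_name: str, deployment_name: str,
                          api_version: str = "2024-02-01") -> str:
    """Assemble the Azure OpenAI chat-completions URL for a deployment."""
    return (
        f"https://{resource_name}.openai.azure.com"
        f"/openai/deployments/{deployment_name}"
        f"/chat/completions?api-version={api_version}"
    )
```

Note that the deployment name, not a model name, appears in the path: this is why the model selector in elvex is display-only.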
Adding Azure AI Foundry
When adding Azure AI Foundry, you will need to provide specific configuration details from your Azure AI Foundry portal. Azure AI Foundry allows you to deploy and access models from various providers, including Anthropic's Claude models.
Finding Your Azure AI Foundry Configuration Details
Before adding Azure AI Foundry as a provider in elvex, you'll need to gather the following information from your Azure AI Foundry deployment:
Navigate to your Azure AI Foundry project in the Azure portal
Select your deployment from the deployments list
Click on the Details tab to view your deployment information
Locate the following values:
Target URI - This is your endpoint URL
Key - This is your API key
Name (under Deployment info) - This is your deployment name
Adding Azure AI Foundry in elvex
Go to Settings > AI Providers
Click Add a Provider
Select Azure AI Foundry from the Provider dropdown menu
Enter a name (Optional)
Enter a name to help identify this provider. If left blank, it defaults to "Azure AI Foundry"
Enter the Endpoint URL
Use the Target URI value from your Azure AI Foundry deployment details. This typically follows the format:
https://{resource-name}.services.ai.azure.com/anthropic/v1/messages
Important: Do not use the "Project's deployment endpoint" URL. You must use the Target URI that ends with /anthropic/v1/messages (or the appropriate path for your model provider)
Enter your API Key
Copy the Key value from your Azure AI Foundry deployment details. Click the copy button in the Azure portal to ensure you capture the complete key without any missing characters
Enter the Model Name
Enter the Name value from the Deployment info section of your Azure AI Foundry deployment details (for example: hhcs-insights-claude-opus-4-6).
Important: This should be your deployment name, not the underlying model name. Make sure there are no leading or trailing spaces in this field
Select the Output Modality
Choose the appropriate output modality for your deployment:
Text - For text-based models
Image - For image generation models
Click Add
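You can test the gathered values directly against the Target URI before adding them in elvex. A sketch of such a request, assuming an Anthropic deployment: the body follows Anthropic's Messages API, but the "api-key" header name is an assumption based on common Azure conventions, so check the sample code shown for your deployment in the Foundry portal for the exact header:

```python
import json
import urllib.request

def foundry_test_request(target_uri: str, api_key: str,
                         deployment_name: str, prompt: str):
    """Build (but do not send) a POST against the deployment's Target URI."""
    body = json.dumps({
        "model": deployment_name,
        "max_tokens": 64,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        target_uri,
        data=body,
        # Header name is an assumption; verify it in the Foundry portal
        headers={"api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )
```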
Common Issues and Troubleshooting
Wrong Endpoint URL
Issue: You receive an error stating the deployment does not exist
Solution: Verify you are using the Target URI from your Azure AI Foundry deployment details, not the "Project's deployment endpoint." The correct URL should end with the model provider's API path (e.g., /anthropic/v1/messages)
Extra Spaces in Model Name
Issue: elvex cannot find your deployment even though it exists in Azure
Solution: Check that there are no leading or trailing spaces in the Model Name field. The deployment name should be entered exactly as it appears in Azure without any extra whitespace
Incomplete API Key
Issue: You receive a validation error when adding the provider
Solution: Ensure you copied the complete API key from Azure. Use the copy button in the Azure portal rather than manually selecting the text to avoid missing characters
Model Not Available
Issue: The provider is added successfully but the model doesn't work in agents
Solution: Verify that your Azure AI Foundry deployment is active and the model has been successfully deployed. Check the deployment status in the Azure portal
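The first three checks above can be automated as a quick pre-flight validation of the values you are about to enter. A sketch, assuming an Anthropic deployment (adjust the expected path suffix for other model providers):

```python
def diagnose_foundry_config(endpoint: str, model_name: str,
                            api_key: str) -> list:
    """Return a list of likely configuration problems (empty if none found)."""
    problems = []
    # Wrong Endpoint URL: must be the Target URI, not the project endpoint
    if not endpoint.rstrip("/").endswith("/anthropic/v1/messages"):
        problems.append("endpoint does not end with the provider API path")
    # Extra spaces in Model Name
    if not model_name or model_name != model_name.strip():
        problems.append("model name is empty or has surrounding whitespace")
    # Incomplete API key
    if not api_key or api_key != api_key.strip():
        problems.append("API key is missing or has stray whitespace")
    return problems
```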
