Get Started Discussions
Start your journey with Databricks by joining discussions on getting started guides, tutorials, and introductory topics. Connect with beginners and experts alike to kickstart your Databricks experience.

How to Fetch Azure OpenAI api_version and engine Dynamically After Resource Creation via Python?

Sudheer2
New Contributor III

Hello,

I am using Python to automate the creation of Azure OpenAI resources via the Azure Management API. I am successfully able to create the resource, but I need to dynamically fetch the following details after the resource is created:

  1. API Version (api_version)
  2. Engine (such as text-davinci-003, gpt-3.5-turbo, etc.)

Here are the steps I have taken so far:

  1. Created Resource: I use the Azure Management API to create the OpenAI resource (PUT request).
  2. Fetch API Keys: After the resource is created, I fetch the API keys by calling listKeys.
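For reference, those two steps can be sketched against the Azure management REST API roughly like this (subscription, resource group, account name, and the ARM api-version are placeholders/assumptions to adapt to your setup):

```python
import json
import urllib.request

MGMT = "https://management.azure.com"
ARM_API_VERSION = "2023-05-01"  # ARM api-version for Microsoft.CognitiveServices

def account_url(sub, rg, account, suffix=""):
    """Build the ARM URL for an Azure OpenAI (Cognitive Services) account."""
    return (f"{MGMT}/subscriptions/{sub}/resourceGroups/{rg}"
            f"/providers/Microsoft.CognitiveServices/accounts/{account}"
            f"{suffix}?api-version={ARM_API_VERSION}")

def arm_request(url, token, method="GET", body=None):
    """Send an authenticated ARM request and return the parsed JSON response."""
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(url, data=data, method=method, headers={
        "Authorization": f"Bearer {token}", "Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Step 1: PUT creates the resource; kind "OpenAI" selects Azure OpenAI
# arm_request(account_url(sub, rg, name), token, "PUT",
#             {"location": "eastus", "kind": "OpenAI",
#              "sku": {"name": "S0"}, "properties": {}})
# Step 2: POST .../listKeys returns {"key1": ..., "key2": ...}
# keys = arm_request(account_url(sub, rg, name, "/listKeys"), token, "POST", {})
```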

However, when the resource is created, I am unable to dynamically retrieve the API version and engine directly from the resource.

Problem:

  • The api_version returned is showing as Unknown even though I know there is a specific version associated with the OpenAI resource.
  • I am unable to retrieve the engine types dynamically (like text-davinci-003, gpt-3.5-turbo, etc.) after the resource creation.

What I’ve Tried:

  • I tried using the exportTemplate API, but it only gives me a template and not the specific api_version and engine I need.
  • I also tried checking the resource properties, but the api_version field remains Unknown.
  • I even tried making an API call to https://<resource_name>.openai.azure.com/v1/engines, but the request fails with DNS resolution errors.

My Goal:

I would like to automate retrieving the API version and engine dynamically after the resource is created, using Python and the Azure APIs.

Is there a way to retrieve this information directly after the resource creation or via a different API endpoint?

Any help or guidance would be greatly appreciated!

Thank you!

1 REPLY

jamesl
Databricks Employee

Hi Sudheer, 

It's been a while since you posted, but are you still facing this issue? Here are a few things you could check if needed: 

API version

In Azure OpenAI, api-version is a query parameter on data-plane (inference) requests, not a property stored on the resource, which is why the resource properties and exportTemplate show it as Unknown. You supply and pin it per request, e.g., ?api-version=2024-10-21.

Programmatically, keep a mapping in code to your approved versions (e.g., latest GA vs. preview) based on Microsoft’s “API lifecycle” guidance, rather than reading it from the ARM resource.
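A minimal way to keep that mapping in code (the version string is illustrative, not authoritative; update it deliberately when the lifecycle docs promote a new GA):

```python
from urllib.parse import urlencode

# Approved data-plane api-versions, pinned in code per Microsoft's API
# lifecycle guidance (values here are illustrative).
APPROVED_API_VERSIONS = {
    "ga": "2024-10-21",
}

def with_api_version(base_url, channel="ga"):
    """Append the pinned api-version query parameter to a data-plane URL."""
    return f"{base_url}?{urlencode({'api-version': APPROVED_API_VERSIONS[channel]})}"
```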

If you’re integrating through Databricks External Models, set openai_api_version on the endpoint configuration to the pinned version (for example, 2023-05-15 or the current GA you use).

Engine/model enumeration after resource creation

Azure OpenAI has moved away from the legacy “v1/engines” surface. In Azure the concept you target in requests is your deployment of a model (e.g., gpt-35-turbo), using paths like /openai/deployments/{deployment-id}/chat/completions (or completions/embeddings). To know what you can call, you:

  • List your resource’s deployments (control-plane; via Azure management API/CLI), or
  • List available models for your region using the models catalog (concept docs) to choose models to deploy, then deploy and query by deployment name.
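Listing deployments from the control plane can look roughly like this (the 2023-05-01 management api-version and response shape are assumptions to verify against the current Microsoft.CognitiveServices reference):

```python
import json
import urllib.request

def deployments_url(sub, rg, account, api_version="2023-05-01"):
    """Control-plane URL that lists the model deployments on an account."""
    return ("https://management.azure.com"
            f"/subscriptions/{sub}/resourceGroups/{rg}"
            f"/providers/Microsoft.CognitiveServices/accounts/{account}"
            f"/deployments?api-version={api_version}")

def list_deployments(sub, rg, account, token):
    """Return the deployment entries; each includes the deployment name
    and a properties.model block (model name and version)."""
    req = urllib.request.Request(deployments_url(sub, rg, account),
                                 headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("value", [])
```

The Azure CLI equivalent is `az cognitiveservices account deployment list -g <rg> -n <account>`.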

In Databricks External Models, point to a deployment by filling openai_deployment_name and use the unified OpenAI-compatible request format. Databricks forwards to Azure OpenAI and centrally manages credentials and governance.
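As a sketch, that endpoint configuration might look like the following (endpoint and deployment names, the secret reference, and the pinned version are placeholders to replace with your own):

```python
# Endpoint configuration for a Databricks External Models serving endpoint
# backed by an Azure OpenAI deployment (names/versions are placeholders).
endpoint_config = {
    "served_entities": [{
        "name": "gpt-35-turbo",
        "external_model": {
            "name": "gpt-35-turbo",
            "provider": "openai",   # Azure OpenAI goes through the openai provider
            "task": "llm/v1/chat",
            "openai_config": {
                "openai_api_type": "azure",
                "openai_api_base": "https://<resource>.openai.azure.com/",
                "openai_deployment_name": "gpt-35-turbo",
                "openai_api_version": "2024-10-21",
                # Reference a Databricks secret rather than a literal key
                "openai_api_key": "{{secrets/<scope>/<key>}}",
            },
        },
    }],
}
```

You would pass this as the config argument when creating the serving endpoint (for example via the MLflow deployments client for Databricks).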

Endpoint path and fixing the DNS/engines error

The Azure OpenAI data-plane requests use the openai/deployments path instead of v1/engines. For example:

POST https://{resource}.openai.azure.com/openai/deployments/{deployment}/chat/completions?api-version=2024-10-21
This should resolve correctly assuming the hostname is reachable in your network. If you’re behind private networking (Private Endpoint, firewall/VNet restrictions), ensure your client’s DNS/network allows resolution to {resource}.openai.azure.com; otherwise you’ll see failures unrelated to the path itself.
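A minimal client sketch against that path (resource name, deployment name, and key are placeholders; the api-version default follows the GA example above):

```python
import json
import urllib.request

def chat_url(resource, deployment, api_version="2024-10-21"):
    """Data-plane URL: target your *deployment*, not the legacy /v1/engines path."""
    return (f"https://{resource}.openai.azure.com/openai/deployments/"
            f"{deployment}/chat/completions?api-version={api_version}")

def chat(resource, deployment, api_key, messages):
    """POST a chat request to the deployment and return the parsed response."""
    req = urllib.request.Request(
        chat_url(resource, deployment),
        data=json.dumps({"messages": messages}).encode(),
        headers={"api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```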

Top next steps to try

  • Use the correct Azure OpenAI data-plane path and explicitly pin an api-version. Azure OpenAI requests should use /openai/deployments/{deployment}/... (not v1/engines) and include ?api-version=... (for example, 2024-10-21).

  • Query by a model deployment name (your deployed model, e.g., gpt-35-turbo) rather than “engine.” In Azure OpenAI you call the deployment, not a generic engine list. Ensure your code uses the deployment ID you created and passes it in the URL path.

  • Centralize the config in Databricks External Models so the base URL, deployment name, and version are pinned once. Set openai_api_base, openai_deployment_name, and openai_api_version in the endpoint config. Databricks will forward requests and manage governance and secrets for you.

  • If you saw DNS or connect failures, verify workspace networking vs. Azure OpenAI. Private endpoints/firewall rules can block outbound resolution. Make sure the hostname https://{resource}.openai.azure.com is reachable or configure the approved network path.
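To tell a DNS problem apart from a path problem, a small check like this can help (a sketch; it only confirms the hostname resolves from your client, not that the firewall allows traffic):

```python
import socket

def check_resolves(resource):
    """Return True if the Azure OpenAI hostname resolves from this client."""
    host = f"{resource}.openai.azure.com"
    try:
        socket.getaddrinfo(host, 443)
        return True
    except socket.gaierror:
        return False
```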

I hope that helps, but if not then let us know!

-James
