Hi Sudheer,
It's been a while since you posted, but are you still facing this issue? If so, here are a few things to check:
API version
In Azure OpenAI, api-version is a query parameter on data-plane (inference) requests, not a property stored on the resource. You supply and pin it per request, e.g., ?api-version=2024-10-21.
Programmatically, keep a mapping of your approved versions (e.g., latest GA vs. preview) in code, based on Microsoft’s “API lifecycle” guidance, rather than trying to read it from the ARM resource.
If you’re integrating through Databricks External Models, set openai_api_version on the endpoint configuration to the pinned version (for example, 2023-05-15 or the current GA you use).
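One way to keep that mapping is a small module-level dict that your client code resolves versions from. This is a minimal sketch; the labels are illustrative, and the version strings are the ones mentioned above, so substitute your team’s own approved list:

```python
# Pin approved Azure OpenAI api-version strings in one place instead of
# trying to read them from the ARM resource (they are not stored there).
# The values below are examples; confirm current GA/preview versions
# against Microsoft's API lifecycle guidance.
APPROVED_API_VERSIONS = {
    "ga": "2024-10-21",
    "legacy": "2023-05-15",
}

def pinned_api_version(label: str = "ga") -> str:
    """Return the approved api-version for a label, failing loudly on typos."""
    try:
        return APPROVED_API_VERSIONS[label]
    except KeyError:
        raise ValueError(f"No approved api-version for label {label!r}")
```

Resolving the version through one function like this means an api-version bump is a one-line change rather than a find-and-replace across every request site.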
Engine/model enumeration after resource creation
Azure OpenAI has moved away from the legacy “v1/engines” surface. In Azure, requests target your deployment of a model (e.g., gpt-35-turbo), using paths like /openai/deployments/{deployment-id}/chat/completions (or completions/embeddings). To know what you can call, either:
- List your resource’s deployments (control-plane; via Azure management API/CLI), or
- List available models for your region using the models catalog (concept docs) to choose models to deploy, then deploy and query by deployment name.
In Databricks External Models, point to a deployment by filling openai_deployment_name and use the unified OpenAI-compatible request format. Databricks forwards to Azure OpenAI and centrally manages credentials and governance.
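For reference, an External Models endpoint config might be sketched like this. The resource URL, deployment name, and secret scope/key path are placeholders for your own values, and the version string should be whichever you have pinned:

```python
# Sketch of a Databricks External Models endpoint config for Azure OpenAI.
# All names below (resource, deployment, secret scope/key) are placeholders.
external_model_config = {
    "served_entities": [
        {
            "external_model": {
                "name": "gpt-35-turbo",
                "provider": "openai",
                "task": "llm/v1/chat",
                "openai_config": {
                    "openai_api_type": "azure",
                    "openai_api_base": "https://my-resource.openai.azure.com/",
                    "openai_deployment_name": "my-gpt35-deployment",
                    "openai_api_version": "2023-05-15",
                    # Reference a Databricks secret rather than a literal key
                    "openai_api_key": "{{secrets/my-scope/azure-openai-key}}",
                },
            }
        }
    ]
}
```

You would then pass a config like this when creating the serving endpoint (for example via the MLflow Deployments client or the serving UI), and Databricks forwards requests and resolves the secret for you.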
Endpoint path and fixing the DNS/engines error
Azure OpenAI data-plane requests use the openai/deployments path rather than v1/engines. For example:
POST https://{resource}.openai.azure.com/openai/deployments/{deployment}/chat/completions?api-version=2024-10-21
This should resolve correctly assuming the hostname is reachable in your network. If you’re behind private networking (Private Endpoint, firewall/VNet restrictions), ensure your client’s DNS/network allows resolution to {resource}.openai.azure.com; otherwise you’ll see failures unrelated to the path itself.
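Putting the path and pinned api-version together, a stdlib-only request sketch might look like this. The resource and deployment names are placeholders, and error handling is kept minimal:

```python
import json
import urllib.parse
import urllib.request

def deployment_url(resource: str, deployment: str,
                   api_version: str = "2024-10-21") -> str:
    """Build the deployment-scoped chat completions URL (no v1/engines)."""
    query = urllib.parse.urlencode({"api-version": api_version})
    return (
        f"https://{resource}.openai.azure.com/openai/deployments/"
        f"{deployment}/chat/completions?{query}"
    )

def chat_completion(resource: str, deployment: str, api_key: str,
                    messages: list) -> dict:
    """POST a chat request to Azure OpenAI and return the parsed response."""
    req = urllib.request.Request(
        deployment_url(resource, deployment),
        data=json.dumps({"messages": messages}).encode("utf-8"),
        headers={"api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```

The same shape applies if you use the official OpenAI SDK’s Azure client; the key point is that the deployment name goes in the path and the api-version goes in the query string.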
Top next steps to try
- Use the correct Azure OpenAI data-plane path and explicitly pin an api-version. Azure OpenAI requests should use /openai/deployments/{deployment}/... (not v1/engines) and include ?api-version=... (for example, 2024-10-21).
- Query by a model deployment name (your deployed model, e.g., gpt-35-turbo) rather than “engine.” In Azure OpenAI you call the deployment, not a generic engine list. Ensure your code uses the deployment ID you created and passes it in the URL path.
- Centralize the config in Databricks External Models so the base URL, deployment name, and version are pinned once. Set openai_api_base, openai_deployment_name, and openai_api_version in the endpoint config. Databricks will forward requests and manage governance and secrets for you.
- If you saw DNS or connect failures, verify workspace networking against Azure OpenAI. Private endpoints/firewall rules can block outbound resolution. Make sure the endpoint https://{resource}.openai.azure.com is reachable, or configure the approved network path.
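For the last point, a quick way to separate DNS problems from path problems is a small resolution check from the same environment that makes the API calls. A minimal sketch, assuming you substitute your own resource name:

```python
import socket

def can_resolve(host: str) -> bool:
    """Return True if the hostname resolves from this client's network."""
    try:
        socket.getaddrinfo(host, 443)
        return True
    except socket.gaierror:
        return False

# Example usage with a placeholder resource name:
# can_resolve("my-resource.openai.azure.com")
```

If this returns False from your Databricks cluster but True from elsewhere, the failure is in the network path (private endpoint/DNS configuration), not in your request URL.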
I hope that helps, but if not then let us know!
-James