<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Reading data from Serverless Warehouse from Azure Functions in Python - using managed identities in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/reading-data-from-serverless-warehouse-from-azure-functions-in/m-p/149631#M53139</link>
    <description>&lt;P&gt;We are trying to run a simple service on an Azure Function app, where we need to query some data from a Databricks Warehouse. We want to avoid managing secrets, and hence try to use Microsoft Entra authentication all the way. Using various available online sources, we have tried the following steps.&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Enabled a system-assigned "Managed Identity" on the Azure Function app&lt;/LI&gt;&lt;LI&gt;Created a service principal in the Databricks workspace ("Microsoft Entra ID managed") and linked it to the Azure Function by pasting in the application_id&lt;/LI&gt;&lt;LI&gt;Gave the service principal CAN USE access to the Warehouse&lt;/LI&gt;&lt;LI&gt;Gave the service principal USE access to the catalog and schema, and SELECT access to the relevant table&lt;/LI&gt;&lt;/UL&gt;&lt;LI-CODE lang="python"&gt;import pandas as pd
from databricks import sql
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
token = credential.get_token("2ff814a6-3304-4ab8-85cb-cd0e6f879c1d/.default").token

with sql.connect(
    server_hostname=DATABRICKS_HOST,
    http_path=DATABRICKS_WAREHOUSE,
    access_token=token) as connection:
    with connection.cursor() as cursor:
        cursor.execute(f"SELECT * FROM {CATALOG}.{SCHEMA}.{TABLE}")
        result = cursor.fetchall()
        df = pd.DataFrame(result, columns=[desc[0] for desc in cursor.description])&lt;/LI-CODE&gt;&lt;P&gt;When testing the Azure Function locally (effectively using AzureCliCredential and my own access), this works fine. However, when running the code on the Function app itself it returns a 403: Forbidden error.&lt;/P&gt;&lt;P&gt;When creating an OAuth secret for the principal, the table can be read using the following code. I guess this confirms that the service principal has the right access in Databricks. However, we would like to avoid managing secrets.&lt;/P&gt;&lt;LI-CODE lang="python"&gt;import pandas as pd
from databricks import sql
from databricks.sdk.core import Config, oauth_service_principal

def credential_provider():
    config = Config(
        host          = f"https://{server_hostname}",
        client_id     = client_id,
        client_secret = oauth_secret)  # the OAuth secret created for the principal
    return oauth_service_principal(config)

with sql.connect(server_hostname      = server_hostname,
                 http_path            = DATABRICKS_WAREHOUSE,
                 credentials_provider = credential_provider) as connection:
    with connection.cursor() as cursor:
        cursor.execute(f"SELECT * FROM {CATALOG}.{SCHEMA}.{TABLE}")
        result = cursor.fetchall()
        df = pd.DataFrame(result, columns=[desc[0] for desc in cursor.description])

df.head()&lt;/LI-CODE&gt;&lt;P&gt;Any suggestions on further actions? What are we missing, if anything?&lt;/P&gt;</description>
    <pubDate>Mon, 02 Mar 2026 15:50:12 GMT</pubDate>
    <dc:creator>MRTN</dc:creator>
    <dc:date>2026-03-02T15:50:12Z</dc:date>
    <item>
      <title>Reading data from Serverless Warehouse from Azure Functions in Python - using managed identities</title>
      <link>https://community.databricks.com/t5/data-engineering/reading-data-from-serverless-warehouse-from-azure-functions-in/m-p/149631#M53139</link>
      <description>&lt;P&gt;We are trying to run a simple service on an Azure Function app, where we need to query some data from a Databricks Warehouse. We want to avoid managing secrets, and hence try to use Microsoft Entra authentication all the way. Using various available online sources, we have tried the following steps.&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Enabled a system-assigned "Managed Identity" on the Azure Function app&lt;/LI&gt;&lt;LI&gt;Created a service principal in the Databricks workspace ("Microsoft Entra ID managed") and linked it to the Azure Function by pasting in the application_id&lt;/LI&gt;&lt;LI&gt;Gave the service principal CAN USE access to the Warehouse&lt;/LI&gt;&lt;LI&gt;Gave the service principal USE access to the catalog and schema, and SELECT access to the relevant table&lt;/LI&gt;&lt;/UL&gt;&lt;LI-CODE lang="python"&gt;import pandas as pd
from databricks import sql
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
token = credential.get_token("2ff814a6-3304-4ab8-85cb-cd0e6f879c1d/.default").token

with sql.connect(
    server_hostname=DATABRICKS_HOST,
    http_path=DATABRICKS_WAREHOUSE,
    access_token=token) as connection:
    with connection.cursor() as cursor:
        cursor.execute(f"SELECT * FROM {CATALOG}.{SCHEMA}.{TABLE}")
        result = cursor.fetchall()
        df = pd.DataFrame(result, columns=[desc[0] for desc in cursor.description])&lt;/LI-CODE&gt;&lt;P&gt;When testing the Azure Function locally (effectively using AzureCliCredential and my own access), this works fine. However, when running the code on the Function app itself it returns a 403: Forbidden error.&lt;/P&gt;&lt;P&gt;When creating an OAuth secret for the principal, the table can be read using the following code. I guess this confirms that the service principal has the right access in Databricks. However, we would like to avoid managing secrets.&lt;/P&gt;&lt;LI-CODE lang="python"&gt;import pandas as pd
from databricks import sql
from databricks.sdk.core import Config, oauth_service_principal

def credential_provider():
    config = Config(
        host          = f"https://{server_hostname}",
        client_id     = client_id,
        client_secret = oauth_secret)  # the OAuth secret created for the principal
    return oauth_service_principal(config)

with sql.connect(server_hostname      = server_hostname,
                 http_path            = DATABRICKS_WAREHOUSE,
                 credentials_provider = credential_provider) as connection:
    with connection.cursor() as cursor:
        cursor.execute(f"SELECT * FROM {CATALOG}.{SCHEMA}.{TABLE}")
        result = cursor.fetchall()
        df = pd.DataFrame(result, columns=[desc[0] for desc in cursor.description])

df.head()&lt;/LI-CODE&gt;&lt;P&gt;Any suggestions on further actions? What are we missing, if anything?&lt;/P&gt;</description>
      <pubDate>Mon, 02 Mar 2026 15:50:12 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/reading-data-from-serverless-warehouse-from-azure-functions-in/m-p/149631#M53139</guid>
      <dc:creator>MRTN</dc:creator>
      <dc:date>2026-03-02T15:50:12Z</dc:date>
    </item>
    <item>
      <title>Re: Reading data from Serverless Warehouse from Azure Functions in Python - using managed identities</title>
      <link>https://community.databricks.com/t5/data-engineering/reading-data-from-serverless-warehouse-from-azure-functions-in/m-p/149644#M53144</link>
      <description>&lt;P&gt;We are also trying to follow the instructions given &lt;A href="https://learn.microsoft.com/en-us/azure/databricks/dev-tools/auth/azure-mi-auth" target="_self"&gt;here&lt;/A&gt;, creating a .databrickscfg file with&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;[AZURE_MI_WORKSPACE]
host                        = https://XXX.azuredatabricks.net/
azure_workspace_resource_id = YYY
azure_client_id             = ZZZ
azure_use_msi               = true&lt;/LI-CODE&gt;&lt;P&gt;and removing the token from the function call. This again works locally, but when running on an Azure Function we get the error "&lt;SPAN&gt;Tried all the ports [8020, 8021, 8022, 8023, 8024] for oauth redirect, but can't find free port&lt;/SPAN&gt;".&lt;/P&gt;</description>
      <pubDate>Mon, 02 Mar 2026 21:24:33 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/reading-data-from-serverless-warehouse-from-azure-functions-in/m-p/149644#M53144</guid>
      <dc:creator>MRTN</dc:creator>
      <dc:date>2026-03-02T21:24:33Z</dc:date>
    </item>
    <item>
      <title>Re: Reading data from Serverless Warehouse from Azure Functions in Python - using managed identities</title>
      <link>https://community.databricks.com/t5/data-engineering/reading-data-from-serverless-warehouse-from-azure-functions-in/m-p/149699#M53158</link>
      <description>&lt;P&gt;We figured it out. We needed to create an Azure user-assigned identity and add it to the Function app. We then added this user-assigned identity as a service principal in the Databricks workspace. The following code then works:&lt;/P&gt;&lt;LI-CODE lang="python"&gt;import pandas as pd
from databricks import sql
from azure.identity import ManagedIdentityCredential

credential = ManagedIdentityCredential(client_id=USER_ASSIGNED_IDENTITY_CLIENT_ID)
token = credential.get_token("2ff814a6-3304-4ab8-85cb-cd0e6f879c1d/.default").token

with sql.connect(
    server_hostname=DATABRICKS_HOST,
    http_path=DATABRICKS_WAREHOUSE,
    access_token=token) as connection:
    with connection.cursor() as cursor:
        cursor.execute("SELECT 'albert' AS name")
        result = cursor.fetchall()
        df = pd.DataFrame(result, columns=[desc[0] for desc in cursor.description])&lt;/LI-CODE&gt;&lt;P&gt;USER_ASSIGNED_IDENTITY_CLIENT_ID here is the Client ID of the user-assigned identity.&lt;/P&gt;&lt;P&gt;We hope to be able to use the system-assigned managed identity in the future, to avoid the extra steps.&lt;/P&gt;</description>
      <pubDate>Tue, 03 Mar 2026 14:21:56 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/reading-data-from-serverless-warehouse-from-azure-functions-in/m-p/149699#M53158</guid>
      <dc:creator>MRTN</dc:creator>
      <dc:date>2026-03-03T14:21:56Z</dc:date>
    </item>
    <item>
      <title>Hi @MRTN, The 403 Forbidden error you are seeing when usi...</title>
      <link>https://community.databricks.com/t5/data-engineering/reading-data-from-serverless-warehouse-from-azure-functions-in/m-p/150342#M53378</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/47856"&gt;@MRTN&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;The 403 Forbidden error you are seeing when using DefaultAzureCredential from the Azure Function (while it works locally with AzureCliCredential) comes down to a key distinction in how the token is being used and who the token represents.&lt;/P&gt;
&lt;P&gt;UNDERSTANDING THE ISSUE&lt;/P&gt;
&lt;P&gt;When you run locally with AzureCliCredential, the token is issued for your personal user identity (which is already added to the Databricks workspace). When the Azure Function runs with its system-assigned managed identity, the token is issued for a different identity, the managed identity itself. Even though you created a service principal in Databricks and linked it by application_id, there is an important nuance: the managed identity and the Microsoft Entra ID managed service principal may not be the same entity in Entra ID unless they share the same application (client) ID.&lt;/P&gt;
&lt;P&gt;Here is what to verify and a recommended approach.&lt;/P&gt;
&lt;P&gt;STEP 1: CONFIRM THE IDENTITY ALIGNMENT&lt;/P&gt;
&lt;P&gt;The system-assigned managed identity of your Azure Function has its own Object ID and a corresponding Application ID in Entra ID. When you created the service principal in Databricks as "Microsoft Entra ID managed," you need to ensure the Application ID you used matches exactly the client ID of the Azure Function's managed identity.&lt;/P&gt;
&lt;P&gt;To find your managed identity's client ID:&lt;/P&gt;
&lt;PRE&gt;- Go to the Azure Portal
- Navigate to your Function App
- Click Identity in the left panel
- Under "System assigned," note the Object ID
- You can find the corresponding Application ID in Enterprise Applications in Entra ID by searching for the Object ID&lt;/PRE&gt;
&lt;P&gt;The Application ID of that enterprise application entry is what you must use when adding the service principal to Databricks.&lt;/P&gt;
&lt;P&gt;STEP 2: VERIFY THE TOKEN AUDIENCE/SCOPE&lt;/P&gt;
&lt;P&gt;In your code, you are requesting a token with this scope:&lt;/P&gt;
&lt;PRE&gt;credential = DefaultAzureCredential()
token = credential.get_token("2ff814a6-3304-4ab8-85cb-cd0e6f879c1d/.default").token&lt;/PRE&gt;
&lt;P&gt;The resource ID 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d is correct for Azure Databricks. That part looks right.&lt;/P&gt;
&lt;P&gt;STEP 3: USE THE MICROSOFT ENTRA ID TOKEN AUTH PATTERN CORRECTLY&lt;/P&gt;
&lt;P&gt;When you pass the token via access_token to the SQL connector, the connector treats it as a Microsoft Entra ID token (same as a PAT in terms of the parameter). This should work, but the token must represent an identity that is recognized in the Databricks workspace. If the managed identity's Application ID does not match the service principal registered in Databricks, the workspace will reject it with 403.&lt;/P&gt;
&lt;P&gt;STEP 4: CONSIDER USING OAUTH M2M AS A CREDENTIALS PROVIDER (RECOMMENDED)&lt;/P&gt;
&lt;P&gt;If you want a fully secretless approach that is well-supported by the Databricks SQL Connector, you can combine Azure's DefaultAzureCredential with a custom credentials provider. This avoids passing a static access_token and instead lets the connector handle token refresh through the credentials_provider pattern:&lt;/P&gt;
&lt;PRE&gt;from databricks import sql
from azure.identity import DefaultAzureCredential

DATABRICKS_RESOURCE_ID = "2ff814a6-3304-4ab8-85cb-cd0e6f879c1d"

def credential_provider():
  credential = DefaultAzureCredential()
  def inner():
      token = credential.get_token(f"{DATABRICKS_RESOURCE_ID}/.default")
      return {"Authorization": f"Bearer {token.token}"}
  return inner

with sql.connect(
  server_hostname=DATABRICKS_HOST,
  http_path=DATABRICKS_WAREHOUSE,
  credentials_provider=credential_provider
) as connection:
  with connection.cursor() as cursor:
      cursor.execute(f"SELECT * FROM {CATALOG}.{SCHEMA}.{TABLE}")
      result = cursor.fetchall()&lt;/PRE&gt;
&lt;P&gt;This approach uses the credentials_provider parameter instead of access_token, which gives the connector the ability to refresh the token automatically. It also lets DefaultAzureCredential resolve the correct credential source: AzureCliCredential locally, and ManagedIdentityCredential when deployed to the Azure Function.&lt;/P&gt;
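The credentials_provider contract relied on above can be illustrated standalone. As I understand the connector's external-provider path, sql.connect calls the provider once, keeps the zero-argument callable it returns, and invokes that callable to build headers for requests, which is what makes refresh possible. A minimal sketch with a fake token source standing in for credential.get_token (no network involved; the "fake-token" values are placeholders):

```python
import itertools

# Stand-in for credential.get_token(...).token: yields a new value per
# call to mimic a credential that refreshes tokens as they expire.
_counter = itertools.count(1)

def fetch_token():
    return f"fake-token-{next(_counter)}"

def credential_provider():
    # Called once at connect time; must return a zero-argument callable
    # that produces the auth headers for each request.
    def inner():
        return {"Authorization": f"Bearer {fetch_token()}"}
    return inner

factory = credential_provider()
print(factory())  # a fresh Authorization header on every call
print(factory())
```

Because the inner callable is re-invoked rather than captured once, a long-running Function app never sends a stale token, which a static access_token passed at connect time cannot guarantee.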
&lt;P&gt;STEP 5: ALTERNATIVE, EXPLICIT MANAGED IDENTITY CREDENTIAL&lt;/P&gt;
&lt;P&gt;If you want to be fully explicit about using the managed identity (and avoid fallback behavior of DefaultAzureCredential), you can use ManagedIdentityCredential directly:&lt;/P&gt;
&lt;PRE&gt;from databricks import sql
from azure.identity import ManagedIdentityCredential

DATABRICKS_RESOURCE_ID = "2ff814a6-3304-4ab8-85cb-cd0e6f879c1d"

# For system-assigned managed identity, no client_id needed
credential = ManagedIdentityCredential()

# For user-assigned managed identity, pass the client_id:
# credential = ManagedIdentityCredential(client_id="&amp;lt;your-managed-identity-client-id&amp;gt;")

def credential_provider():
  def inner():
      token = credential.get_token(f"{DATABRICKS_RESOURCE_ID}/.default")
      return {"Authorization": f"Bearer {token.token}"}
  return inner

with sql.connect(
  server_hostname=DATABRICKS_HOST,
  http_path=DATABRICKS_WAREHOUSE,
  credentials_provider=credential_provider
) as connection:
  with connection.cursor() as cursor:
      cursor.execute(f"SELECT * FROM {CATALOG}.{SCHEMA}.{TABLE}")
      result = cursor.fetchall()&lt;/PRE&gt;
&lt;P&gt;DEBUGGING TIPS&lt;/P&gt;
&lt;P&gt;1. Decode your token to inspect it. You can use jwt.ms or the following Python code to see what identity the token represents:&lt;/P&gt;
&lt;PRE&gt;import jwt
decoded = jwt.decode(token, algorithms=["RS256"], options={"verify_signature": False})
print(decoded.get("appid"))  # This should match the SP registered in Databricks
print(decoded.get("aud"))    # Should be 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d&lt;/PRE&gt;
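If PyJWT is not installed in the Function's environment, the payload can be inspected with the standard library alone, since a JWT is three base64url segments separated by dots. This is a debugging sketch only (no signature verification); the toy token below is fabricated so the snippet is self-contained, and a real token would come from credential.get_token(...).token:

```python
import base64
import json

def jwt_claims(token):
    # Take the middle (payload) segment, restore base64 padding, parse JSON.
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload))

# Toy token carrying only the claims we care about here:
claims = {"appid": "00000000-0000-0000-0000-000000000000",
          "aud": "2ff814a6-3304-4ab8-85cb-cd0e6f879c1d"}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
toy = f"hdr.{body}.sig"

decoded = jwt_claims(toy)
print(decoded["appid"])  # should match the SP's Application ID in Databricks
print(decoded["aud"])    # should be the Azure Databricks resource ID
```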
&lt;P&gt;2. Confirm the appid from the decoded token matches the Application ID of the service principal you added in Databricks.&lt;/P&gt;
&lt;P&gt;3. Double-check that the service principal has CAN USE permission on the SQL warehouse, plus USE CATALOG, USE SCHEMA, and SELECT on the target table.&lt;/P&gt;
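The catalog, schema, and table grants in point 3 can be applied as SQL statements by a privileged user (via the connector or a notebook). A hypothetical helper that just builds the statements; the principal is identified by its application ID, and the catalog, schema, and table names are placeholders:

```python
def grant_statements(principal, catalog, schema, table):
    # Unity Catalog grants for the checklist above; the principal goes
    # in backticks because it is an identifier, not a string literal.
    return [
        f"GRANT USE CATALOG ON CATALOG {catalog} TO `{principal}`",
        f"GRANT USE SCHEMA ON SCHEMA {catalog}.{schema} TO `{principal}`",
        f"GRANT SELECT ON TABLE {catalog}.{schema}.{table} TO `{principal}`",
    ]

for stmt in grant_statements("00000000-0000-0000-0000-000000000000",
                             "main", "analytics", "events"):
    print(stmt)
```

Note that the warehouse-level CAN USE permission is managed through the SQL warehouse's Permissions UI (or the Permissions REST API), not through SQL GRANT.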
&lt;P&gt;RELEVANT DOCUMENTATION&lt;/P&gt;
&lt;P&gt;- Databricks SQL Connector authentication: &lt;A href="https://learn.microsoft.com/en-us/azure/databricks/dev-tools/python-sql-connector" target="_blank"&gt;https://learn.microsoft.com/en-us/azure/databricks/dev-tools/python-sql-connector&lt;/A&gt;&lt;BR /&gt;
- Microsoft Entra ID token auth: &lt;A href="https://learn.microsoft.com/en-us/azure/databricks/dev-tools/auth/aad-token-manual" target="_blank"&gt;https://learn.microsoft.com/en-us/azure/databricks/dev-tools/auth/aad-token-manual&lt;/A&gt;&lt;BR /&gt;
- Azure managed identities with Databricks: &lt;A href="https://learn.microsoft.com/en-us/azure/databricks/dev-tools/auth/azure-mi-auth" target="_blank"&gt;https://learn.microsoft.com/en-us/azure/databricks/dev-tools/auth/azure-mi-auth&lt;/A&gt;&lt;BR /&gt;
- Managed identities for Azure resources overview: &lt;A href="https://learn.microsoft.com/en-us/entra/identity/managed-identities-azure-resources/overview" target="_blank"&gt;https://learn.microsoft.com/en-us/entra/identity/managed-identities-azure-resources/overview&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;* This reply used an agent system I built to research and draft this response based on the wide set of documentation I have available and previous memory. I personally review the draft for any obvious issues and for monitoring system reliability and update it when I detect any drift, but there is still a small chance that something is inaccurate, especially if you are experimenting with brand new features.&lt;/P&gt;
&lt;P&gt;If this answer resolves your question, could you mark it as "Accept as Solution"? That helps other users quickly find the correct fix.&lt;/P&gt;</description>
      <pubDate>Mon, 09 Mar 2026 05:50:05 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/reading-data-from-serverless-warehouse-from-azure-functions-in/m-p/150342#M53378</guid>
      <dc:creator>SteveOstrowski</dc:creator>
      <dc:date>2026-03-09T05:50:05Z</dc:date>
    </item>
  </channel>
</rss>

