<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Access ADLS with serverless. CONFIG_NOT_AVAILABLE error in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/access-adls-with-serverless-config-not-available-error/m-p/111476#M43906</link>
    <description>&lt;P&gt;The recommended approach for accessing cloud storage is to create Databricks&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/databricks/sql/language-manual/sql-ref-storage-credentials#" target="_self"&gt;storage credentials&lt;/A&gt;. These storage credentials can refer to Entra service principals, managed identities, and so on. After a credential is created, create an &lt;A href="https://learn.microsoft.com/en-us/azure/databricks/sql/language-manual/sql-ref-external-locations#" target="_self"&gt;external location&lt;/A&gt;. Once this is done, you will be able to access the ADLS location without any additional configuration.&lt;/P&gt;</description>
    <pubDate>Fri, 28 Feb 2025 16:39:10 GMT</pubDate>
    <dc:creator>cgrant</dc:creator>
    <dc:date>2025-02-28T16:39:10Z</dc:date>
    <item>
      <title>Access ADLS with serverless. CONFIG_NOT_AVAILABLE error</title>
      <link>https://community.databricks.com/t5/data-engineering/access-adls-with-serverless-config-not-available-error/m-p/111417#M43890</link>
      <description>&lt;P&gt;I have my own Autoloader repo, which is responsible for ingesting data from the landing layer (ADLS) and loading it into the raw layer in Databricks. In that repo, I created a couple of workflows that run on serverless compute, and I use a Python .whl package as a dependent library in my tasks.&lt;BR /&gt;&lt;BR /&gt;I have an NCC connection but am still getting an error, because I have a couple of Spark configurations in this repo.&lt;BR /&gt;&lt;BR /&gt;I set the following configuration in a .py file:&lt;/P&gt;&lt;LI-CODE lang="python"&gt;def set_storage_account_config(
    storage_account: str,
    secret_scope: str,
    spn_tenant_id_key: str,
    spn_client_id_key: str,
    spn_client_secret_key: str,
) -&amp;gt; None:
    """
    This function fetches the SPN credentials from Key Vault using the provided key
    names and secret scope, and configures Spark to use this SPN when connecting
    to the given ADLS Gen2 storage account.
    """
    logger.info(f"Setting spark config for storage account '{storage_account}'")

    spn_tenant_id = dbutils.secrets.get(scope=secret_scope, key=spn_tenant_id_key)
    spn_client_id = dbutils.secrets.get(scope=secret_scope, key=spn_client_id_key)
    spn_client_secret = dbutils.secrets.get(scope=secret_scope, key=spn_client_secret_key)

    spark.conf.set(
        f"fs.azure.account.auth.type.{storage_account}.dfs.core.windows.net", "OAuth"
    )
    spark.conf.set(
        f"fs.azure.account.oauth.provider.type.{storage_account}.dfs.core.windows.net",
        "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
    )
    spark.conf.set(
        f"fs.azure.account.oauth2.client.id.{storage_account}.dfs.core.windows.net",
        spn_client_id,
    )
    spark.conf.set(
        f"fs.azure.account.oauth2.client.secret.{storage_account}.dfs.core.windows.net",
        spn_client_secret,
    )
    spark.conf.set(
        f"fs.azure.account.oauth2.client.endpoint.{storage_account}.dfs.core.windows.net",
        f"https://login.microsoftonline.com/{spn_tenant_id}/oauth2/token",
    )&lt;/LI-CODE&gt;&lt;P&gt;and this configuration in another .py file:&lt;/P&gt;&lt;LI-CODE lang="python"&gt;def set_delta_table_properties(delta_table_properties: dict) -&amp;gt; None:
    """
    This function takes a dictionary of Delta table properties and sets each of them
    as a Spark session default. For the complete list, see
    https://docs.databricks.com/en/delta/table-properties.html. This function should be
    called only once, before loading any of the sources.
    """
    logger.info("Setting spark session delta table properties")
    logger.debug(f"Using this config '{delta_table_properties}'")

    # Set the properties for the spark session
    for k, v in delta_table_properties.items():
        logger.debug(f"Setting 'spark.databricks.delta.properties.defaults.{k}' to '{v}'")
        spark.sql(f"set spark.databricks.delta.properties.defaults.{k} = {v}")&lt;/LI-CODE&gt;&lt;P&gt;However, when I run the workflow in a&amp;nbsp;&lt;STRONG&gt;serverless&lt;/STRONG&gt;&amp;nbsp;environment, I get the following error:&lt;/P&gt;&lt;P&gt;Error:&lt;/P&gt;&lt;P&gt;[CONFIG_NOT_AVAILABLE] Configuration fs.azure.account.auth.type.adlsxxxxxx.dfs.core.windows.net is not available. SQLSTATE: 42K0I&lt;/P&gt;&lt;P&gt;How can I access files stored in ADLS with serverless?&lt;/P&gt;&lt;P&gt;Thank you.&lt;/P&gt;</description>
      <pubDate>Thu, 27 Feb 2025 23:23:33 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/access-adls-with-serverless-config-not-available-error/m-p/111417#M43890</guid>
      <dc:creator>mstfkmlbsbdk</dc:creator>
      <dc:date>2025-02-27T23:23:33Z</dc:date>
    </item>
    <item>
      <title>Re: Access ADLS with serverless. CONFIG_NOT_AVAILABLE error</title>
      <link>https://community.databricks.com/t5/data-engineering/access-adls-with-serverless-config-not-available-error/m-p/111476#M43906</link>
      <description>&lt;P&gt;The recommended approach for accessing cloud storage is to create Databricks&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/databricks/sql/language-manual/sql-ref-storage-credentials#" target="_self"&gt;storage credentials&lt;/A&gt;. These storage credentials can refer to Entra service principals, managed identities, and so on. After a credential is created, create an &lt;A href="https://learn.microsoft.com/en-us/azure/databricks/sql/language-manual/sql-ref-external-locations#" target="_self"&gt;external location&lt;/A&gt;. Once this is done, you will be able to access the ADLS location without any additional configuration.&lt;/P&gt;</description>
      <pubDate>Fri, 28 Feb 2025 16:39:10 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/access-adls-with-serverless-config-not-available-error/m-p/111476#M43906</guid>
      <dc:creator>cgrant</dc:creator>
      <dc:date>2025-02-28T16:39:10Z</dc:date>
    </item>
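The reply above recommends creating a storage credential and then an external location. A minimal sketch of the second step in Databricks SQL, assuming a storage credential (for example, one backed by an Entra service principal or managed identity) named my_adls_credential has already been created per the storage-credentials documentation linked in the reply; the location, container, and storage-account names are placeholders, not taken from this thread:

```sql
-- Hedged sketch: all names here are placeholders.
-- Assumes the storage credential my_adls_credential already exists.
CREATE EXTERNAL LOCATION IF NOT EXISTS landing_layer
  URL 'abfss://landing@mystorageaccount.dfs.core.windows.net/'
  WITH (STORAGE CREDENTIAL my_adls_credential)
  COMMENT 'Landing layer for Autoloader ingestion';
```

With the external location in place, serverless jobs can read the abfss:// path directly, so the per-storage-account spark.conf.set OAuth block shown in the question is no longer needed.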
  </channel>
</rss>

