<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Databricks UMF Best Practice in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/databricks-umf-best-practice/m-p/115181#M45041</link>
<description>&lt;DIV class="paragraph"&gt;I am not an expert on this topic or Azure services, but I did some research and have some suggested courses of action for you to test out. To address your request for ways to get User Managed Files (UMF) from Azure into Databricks, here are the key approaches based on the context and search results:&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;Suggested Approaches for Ingesting UMF Data: 1. &lt;STRONG&gt;Microsoft Graph API&lt;/STRONG&gt;: - Using the Microsoft Graph API for retrieving user-generated content, such as files from OneDrive, SharePoint, and Teams, is a viable option. - Challenges specific to API usage in serverless environments were noted in Slack discussions, including cases where API calls from within Databricks mapInPandas functions resulted in tasks stalling. For example, the discussion highlighted that serverless clusters may have restricted outbound internet connectivity by default. You can reference the details mentioned in Slack threads for more context (&lt;A href="https://databricks.slack.com/archives/C0463EUAM7H/p1738036880636039" target="_blank"&gt;link&lt;/A&gt;).&lt;/DIV&gt;
&lt;OL start="2"&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;&lt;STRONG&gt;Azure Data Factory (ADF)&lt;/STRONG&gt;:
&lt;UL&gt;
&lt;LI&gt;ADF can be used to ingest files from Azure services directly into Azure Data Lake Storage (ADLS), from which Databricks can pick up UMF data. ADF supports connectors for extracting from sources like SharePoint or OneDrive and can be combined with Databricks for further processing.&lt;/LI&gt;
&lt;LI&gt;For such UMF ingestion, data pipelines with schedule or event triggers in ADF can automate the ETL process. ADF’s OData and SharePoint connectors may help with sources whose files change over time.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/DIV&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;&lt;STRONG&gt;Databricks Auto Loader&lt;/STRONG&gt;:
&lt;UL&gt;
&lt;LI&gt;Auto Loader can be an effective tool to ingest dynamically changing UMF files into Delta Lake. It supports continuous file monitoring in ADLS or Blob storage and is highly scalable for workflows where users upload new files regularly.&lt;/LI&gt;
&lt;LI&gt;For example: &lt;CODE&gt;df = (spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "csv")
    # Auto Loader needs a schema location for schema inference/evolution
    .option("cloudFiles.schemaLocation", "path_to_schema_location")
    .load("path_to_azure_storage"))
&lt;/CODE&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/DIV&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;&lt;STRONG&gt;Unity Catalog with External Tables or Volumes&lt;/STRONG&gt;:
&lt;UL&gt;
&lt;LI&gt;Depending on how the files are stored, external locations or volumes registered in Unity Catalog can manage access and governance for UMF data. You might also consider using Volumes to provide a space for handling raw UMF data before processing.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/DIV&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;&lt;STRONG&gt;Integration Using Python Libraries&lt;/STRONG&gt;:
&lt;UL&gt;
&lt;LI&gt;Databricks supports accessing Azure storage layers through standard libraries like Azure SDK. This facilitates scripting for downloading/uploading dynamic files (e.g., CSV) into Databricks workspaces.&lt;/LI&gt;
&lt;LI&gt;Example Python code to fetch files:&lt;/LI&gt;
&lt;/UL&gt;
&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;&lt;CODE&gt;from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

blob_service_client = BlobServiceClient(
    account_url="https://&amp;lt;storage_acct&amp;gt;.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)
container_client = blob_service_client.get_container_client("container_name")

for blob in container_client.list_blobs():
    # Logic for file filtering, e.g. based on blob.last_modified
    download_stream = container_client.download_blob(blob.name)
    with open(blob.name, "wb") as file:
        file.write(download_stream.readall())
&lt;/CODE&gt;&lt;/DIV&gt;
&lt;/LI&gt;
&lt;/OL&gt;
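As a rough sketch of the Graph API route above (every ID, path, and permission name below is a placeholder assumption, not a value from this thread), file bytes can be pulled from a drive with a bearer token and the drive-item content endpoint:

```python
import urllib.request

GRAPH_ROOT = "https://graph.microsoft.com/v1.0"

def graph_item_content_url(drive_id, item_path):
    # Addresses a drive item by path and requests its raw content.
    return f"{GRAPH_ROOT}/drives/{drive_id}/root:/{item_path}:/content"

def download_drive_file(access_token, drive_id, item_path, dest_path):
    # access_token must come from an Azure AD app with suitable Graph
    # permissions (e.g. Files.Read.All); acquiring it, e.g. via MSAL
    # client-credentials flow, is out of scope for this sketch.
    req = urllib.request.Request(
        graph_item_content_url(drive_id, item_path),
        headers={"Authorization": f"Bearer {access_token}"},
    )
    with urllib.request.urlopen(req) as resp, open(dest_path, "wb") as out:
        out.write(resp.read())
```

The downloaded file could then be staged to ADLS or a Unity Catalog volume for downstream processing.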
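Since Unity Catalog volumes surface as ordinary paths under /Volumes, raw UMF files staged there can be handled with plain file APIs from a notebook; a minimal sketch (the catalog, schema, and volume names are hypothetical):

```python
def uc_volume_path(catalog, schema, volume, relative_path):
    # Volumes are mounted at /Volumes/{catalog}/{schema}/{volume}/...
    return "/".join(["/Volumes", catalog, schema, volume, relative_path])

# On Databricks this path works with open() as well as Spark readers:
# with open(uc_volume_path("main", "umf", "raw", "latest.csv")) as f:
#     header = f.readline()
# spark.read.csv(uc_volume_path("main", "umf", "raw", "latest.csv"), header=True)
```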
&lt;DIV class="paragraph"&gt;Recommendations: - &lt;STRONG&gt;Best Method Depends on Use Case&lt;/STRONG&gt;: If your UMF data resides in OneDrive or SharePoint, leveraging the Graph API might be one of the better options, provided that any potential bottlenecks (e.g., serverless tasks) are resolved. For ADLS or Blob storage, Auto Loader and Databricks-native integration tools offer streamlined solutions. - &lt;STRONG&gt;Consider Governance, Scalability &amp;amp; Security&lt;/STRONG&gt;: Ensure clear access policies for sensitive modifications and utilize Azure features like Private Link or ADLS Gen2-specific mechanisms. - &lt;STRONG&gt;Continuous Improvement&lt;/STRONG&gt;: If initial tests indicate performance or reliability issues, explore tools such as Azure Data Factory or third-party solutions like Qlik Replicate, which has demonstrated strong integration capabilities with Databricks and Azure ecosystems.&lt;/DIV&gt;</description>
    <pubDate>Thu, 10 Apr 2025 15:47:21 GMT</pubDate>
    <dc:creator>Louis_Frolio</dc:creator>
    <dc:date>2025-04-10T15:47:21Z</dc:date>
    <item>
      <title>Databricks UMF Best Practice</title>
      <link>https://community.databricks.com/t5/data-engineering/databricks-umf-best-practice/m-p/112379#M44194</link>
      <description>&lt;P&gt;Hi there, I would like to get some feedback on what are the ideal/suggested ways to get UMF data from our Azure cloud into Databricks. For context, UMF can mean either:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;User Managed File&lt;/LI&gt;&lt;LI&gt;User Maintained File&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;Basically, a UMF could be something like a simple CSV that we know may change over time based on the latest file that the user uploads.&lt;/P&gt;&lt;P&gt;One method I'm exploring is using the Microsoft Graph API in order to pull user generated content wherever it may be (e.g. OneDrive, SharePoint, Teams, etc). However, before I proceed with using the Microsoft Graph API, I'd like to check if others in this community have found better/standard ways to pull in UMF data into Databricks.&lt;/P&gt;</description>
      <pubDate>Wed, 12 Mar 2025 14:17:04 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/databricks-umf-best-practice/m-p/112379#M44194</guid>
      <dc:creator>ChristianRRL</dc:creator>
      <dc:date>2025-03-12T14:17:04Z</dc:date>
    </item>
    <item>
      <title>Re: Databricks UMF Best Practice</title>
      <link>https://community.databricks.com/t5/data-engineering/databricks-umf-best-practice/m-p/112820#M44344</link>
      <description>&lt;P&gt;Hi there, checking back in here. Can someone help provide some feedback on my post?&lt;/P&gt;&lt;P&gt;+&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/23233"&gt;@NandiniN&lt;/a&gt;&amp;nbsp;/&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/97998"&gt;@raphaelblg&lt;/a&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 17 Mar 2025 16:14:07 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/databricks-umf-best-practice/m-p/112820#M44344</guid>
      <dc:creator>ChristianRRL</dc:creator>
      <dc:date>2025-03-17T16:14:07Z</dc:date>
    </item>
    <item>
      <title>Re: Databricks UMF Best Practice</title>
      <link>https://community.databricks.com/t5/data-engineering/databricks-umf-best-practice/m-p/114502#M44844</link>
<description>&lt;P&gt;Hello Mate,&lt;/P&gt;&lt;P&gt;I tried a similar approach in my workspace. I am not sure how much it will help you, but here it is:&lt;/P&gt;&lt;P&gt;There is a Google spreadsheet maintained by the GTM/Sales team; many fields on the sheet are updated daily, and they would like to see analytics and a few metrics on this data. Based on their earliest deadline (an early-morning data report), I scheduled my job to refresh 30 minutes beforehand, reading the sheet via the Google Sheets API; on that schedule we refresh every 12 hours. Feel free to ignore this if it does not help.&lt;/P&gt;&lt;P&gt;Thank you for the ask.&lt;/P&gt;</description>
      <pubDate>Fri, 04 Apr 2025 11:01:50 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/databricks-umf-best-practice/m-p/114502#M44844</guid>
      <dc:creator>saisaran_g</dc:creator>
      <dc:date>2025-04-04T11:01:50Z</dc:date>
    </item>
    <item>
      <title>Re: Databricks UMF Best Practice</title>
      <link>https://community.databricks.com/t5/data-engineering/databricks-umf-best-practice/m-p/115181#M45041</link>
<description>&lt;DIV class="paragraph"&gt;I am not an expert on this topic or Azure services, but I did some research and have some suggested courses of action for you to test out. To address your request for ways to get User Managed Files (UMF) from Azure into Databricks, here are the key approaches based on the context and search results:&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;Suggested Approaches for Ingesting UMF Data: 1. &lt;STRONG&gt;Microsoft Graph API&lt;/STRONG&gt;: - Using the Microsoft Graph API for retrieving user-generated content, such as files from OneDrive, SharePoint, and Teams, is a viable option. - Challenges specific to API usage in serverless environments were noted in Slack discussions, including cases where API calls from within Databricks mapInPandas functions resulted in tasks stalling. For example, the discussion highlighted that serverless clusters may have restricted outbound internet connectivity by default. You can reference the details mentioned in Slack threads for more context (&lt;A href="https://databricks.slack.com/archives/C0463EUAM7H/p1738036880636039" target="_blank"&gt;link&lt;/A&gt;).&lt;/DIV&gt;
&lt;OL start="2"&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;&lt;STRONG&gt;Azure Data Factory (ADF)&lt;/STRONG&gt;:
&lt;UL&gt;
&lt;LI&gt;ADF can be used to ingest files from Azure services directly into Azure Data Lake Storage (ADLS), from which Databricks can pick up UMF data. ADF supports connectors for extracting from sources like SharePoint or OneDrive and can be combined with Databricks for further processing.&lt;/LI&gt;
&lt;LI&gt;For such UMF ingestion, data pipelines with schedule or event triggers in ADF can automate the ETL process. ADF’s OData and SharePoint connectors may help with sources whose files change over time.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/DIV&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;&lt;STRONG&gt;Databricks Auto Loader&lt;/STRONG&gt;:
&lt;UL&gt;
&lt;LI&gt;Auto Loader can be an effective tool to ingest dynamically changing UMF files into Delta Lake. It supports continuous file monitoring in ADLS or Blob storage and is highly scalable for workflows where users upload new files regularly.&lt;/LI&gt;
&lt;LI&gt;For example: &lt;CODE&gt;df = (spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "csv")
    # Auto Loader needs a schema location for schema inference/evolution
    .option("cloudFiles.schemaLocation", "path_to_schema_location")
    .load("path_to_azure_storage"))
&lt;/CODE&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/DIV&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;&lt;STRONG&gt;Unity Catalog with External Tables or Volumes&lt;/STRONG&gt;:
&lt;UL&gt;
&lt;LI&gt;Depending on how the files are stored, external locations or volumes registered in Unity Catalog can manage access and governance for UMF data. You might also consider using Volumes to provide a space for handling raw UMF data before processing.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/DIV&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;&lt;STRONG&gt;Integration Using Python Libraries&lt;/STRONG&gt;:
&lt;UL&gt;
&lt;LI&gt;Databricks supports accessing Azure storage layers through standard libraries like Azure SDK. This facilitates scripting for downloading/uploading dynamic files (e.g., CSV) into Databricks workspaces.&lt;/LI&gt;
&lt;LI&gt;Example Python code to fetch files:&lt;/LI&gt;
&lt;/UL&gt;
&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;&lt;CODE&gt;from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

blob_service_client = BlobServiceClient(
    account_url="https://&amp;lt;storage_acct&amp;gt;.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)
container_client = blob_service_client.get_container_client("container_name")

for blob in container_client.list_blobs():
    # Logic for file filtering, e.g. based on blob.last_modified
    download_stream = container_client.download_blob(blob.name)
    with open(blob.name, "wb") as file:
        file.write(download_stream.readall())
&lt;/CODE&gt;&lt;/DIV&gt;
&lt;/LI&gt;
&lt;/OL&gt;
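As a rough sketch of the Graph API route above (every ID, path, and permission name below is a placeholder assumption, not a value from this thread), file bytes can be pulled from a drive with a bearer token and the drive-item content endpoint:

```python
import urllib.request

GRAPH_ROOT = "https://graph.microsoft.com/v1.0"

def graph_item_content_url(drive_id, item_path):
    # Addresses a drive item by path and requests its raw content.
    return f"{GRAPH_ROOT}/drives/{drive_id}/root:/{item_path}:/content"

def download_drive_file(access_token, drive_id, item_path, dest_path):
    # access_token must come from an Azure AD app with suitable Graph
    # permissions (e.g. Files.Read.All); acquiring it, e.g. via MSAL
    # client-credentials flow, is out of scope for this sketch.
    req = urllib.request.Request(
        graph_item_content_url(drive_id, item_path),
        headers={"Authorization": f"Bearer {access_token}"},
    )
    with urllib.request.urlopen(req) as resp, open(dest_path, "wb") as out:
        out.write(resp.read())
```

The downloaded file could then be staged to ADLS or a Unity Catalog volume for downstream processing.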
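Since Unity Catalog volumes surface as ordinary paths under /Volumes, raw UMF files staged there can be handled with plain file APIs from a notebook; a minimal sketch (the catalog, schema, and volume names are hypothetical):

```python
def uc_volume_path(catalog, schema, volume, relative_path):
    # Volumes are mounted at /Volumes/{catalog}/{schema}/{volume}/...
    return "/".join(["/Volumes", catalog, schema, volume, relative_path])

# On Databricks this path works with open() as well as Spark readers:
# with open(uc_volume_path("main", "umf", "raw", "latest.csv")) as f:
#     header = f.readline()
# spark.read.csv(uc_volume_path("main", "umf", "raw", "latest.csv"), header=True)
```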
&lt;DIV class="paragraph"&gt;Recommendations: - &lt;STRONG&gt;Best Method Depends on Use Case&lt;/STRONG&gt;: If your UMF data resides in OneDrive or SharePoint, leveraging the Graph API might be one of the better options, provided that any potential bottlenecks (e.g., serverless tasks) are resolved. For ADLS or Blob storage, Auto Loader and Databricks-native integration tools offer streamlined solutions. - &lt;STRONG&gt;Consider Governance, Scalability &amp;amp; Security&lt;/STRONG&gt;: Ensure clear access policies for sensitive modifications and utilize Azure features like Private Link or ADLS Gen2-specific mechanisms. - &lt;STRONG&gt;Continuous Improvement&lt;/STRONG&gt;: If initial tests indicate performance or reliability issues, explore tools such as Azure Data Factory or third-party solutions like Qlik Replicate, which has demonstrated strong integration capabilities with Databricks and Azure ecosystems.&lt;/DIV&gt;</description>
      <pubDate>Thu, 10 Apr 2025 15:47:21 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/databricks-umf-best-practice/m-p/115181#M45041</guid>
      <dc:creator>Louis_Frolio</dc:creator>
      <dc:date>2025-04-10T15:47:21Z</dc:date>
    </item>
  </channel>
</rss>

