<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Uploading file to volume and start ingestion job in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/uploading-file-to-volume-and-start-ingestion-job/m-p/155468#M54253</link>
    <description>&lt;P&gt;Hello Community!&lt;BR /&gt;&lt;BR /&gt;I am writing to share an idea for a data ingestion job that we have to implement in our project.&lt;/P&gt;&lt;P&gt;Our data is in CSV format, and it differs slightly depending on the case. Before uploading, we pivot the CSV files so they share a unified schema. Currently we use GitHub Actions to copy the data to a volume, and once all files are copied we start the ingestion job. The same can be done via manual upload and running the job manually.&lt;/P&gt;&lt;P&gt;The ingestion job is responsible for validation, data transformation (let's say normalization), and merging the data into the final table.&lt;/P&gt;&lt;P&gt;We would like to automate our pipeline as much as we can. Our first thought is to run the job automatically as soon as new files are added to the volume. However, is there a way to know which files have been uploaded? From what I found in the documentation, it seems there is not. So I guess we have to create something like an audit table to track which files have already been uploaded, correct?&lt;/P&gt;&lt;P&gt;If you have any suggestions on how to approach this data ingestion in general, I would really be thankful!&lt;/P&gt;&lt;P&gt;Thank you very much!&lt;/P&gt;</description>
    <pubDate>Fri, 24 Apr 2026 19:51:37 GMT</pubDate>
    <dc:creator>maikel</dc:creator>
    <dc:date>2026-04-24T19:51:37Z</dc:date>
    <item>
      <title>Uploading file to volume and start ingestion job</title>
      <link>https://community.databricks.com/t5/data-engineering/uploading-file-to-volume-and-start-ingestion-job/m-p/155468#M54253</link>
      <description>&lt;P&gt;Hello Community!&lt;BR /&gt;&lt;BR /&gt;I am writing to share an idea for a data ingestion job that we have to implement in our project.&lt;/P&gt;&lt;P&gt;Our data is in CSV format, and it differs slightly depending on the case. Before uploading, we pivot the CSV files so they share a unified schema. Currently we use GitHub Actions to copy the data to a volume, and once all files are copied we start the ingestion job. The same can be done via manual upload and running the job manually.&lt;/P&gt;&lt;P&gt;The ingestion job is responsible for validation, data transformation (let's say normalization), and merging the data into the final table.&lt;/P&gt;&lt;P&gt;We would like to automate our pipeline as much as we can. Our first thought is to run the job automatically as soon as new files are added to the volume. However, is there a way to know which files have been uploaded? From what I found in the documentation, it seems there is not. So I guess we have to create something like an audit table to track which files have already been uploaded, correct?&lt;/P&gt;&lt;P&gt;If you have any suggestions on how to approach this data ingestion in general, I would really be thankful!&lt;/P&gt;&lt;P&gt;Thank you very much!&lt;/P&gt;</description>
      <pubDate>Fri, 24 Apr 2026 19:51:37 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/uploading-file-to-volume-and-start-ingestion-job/m-p/155468#M54253</guid>
      <dc:creator>maikel</dc:creator>
      <dc:date>2026-04-24T19:51:37Z</dc:date>
    </item>
    <item>
      <title>Re: Uploading file to volume and start ingestion job</title>
      <link>https://community.databricks.com/t5/data-engineering/uploading-file-to-volume-and-start-ingestion-job/m-p/155471#M54254</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/192995"&gt;@maikel&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;You don't have to build a custom solution for this.&amp;nbsp;Databricks now has native components that align very well with what you want.&lt;/P&gt;
&lt;P&gt;If you want the job to start as soon as new files land in a volume, the recommended approach is to use a file-arrival trigger on a Unity Catalog volume or external location and have that trigger start your ingestion job or Lakehouse pipeline. You point the trigger at something like /Volumes/&amp;lt;catalog&amp;gt;/&amp;lt;schema&amp;gt;/&amp;lt;volume&amp;gt;/incoming/, and Databricks polls for new files (roughly once a minute) and fires the job when it sees new arrivals, so GitHub Actions no longer needs to orchestrate that part. See the docs for &lt;A href="https://docs.databricks.com/aws/en/jobs/file-arrival-triggers" rel="noreferrer" target="_blank"&gt;file-arrival triggers&lt;/A&gt; and &lt;A href="https://docs.databricks.com/aws/en/files/volumes" rel="noreferrer" target="_blank"&gt;using volumes for ingestion&lt;/A&gt;.&lt;/P&gt;
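&lt;P&gt;For illustration, this is roughly what the trigger block looks like in the job definition (Jobs API / asset bundle); the catalog, schema, and volume names below are placeholders:&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;"trigger": {
  "pause_status": "UNPAUSED",
  "file_arrival": {
    "url": "/Volumes/my_catalog/my_schema/my_volume/incoming/",
    "min_time_between_triggers_seconds": 60
  }
}&lt;/LI-CODE&gt;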
&lt;P&gt;For “how do I know which files have been uploaded/processed?”, the key is to lean on Auto Loader rather than rolling your own state tracking. When you read from the volume with spark.readStream.format("cloudFiles") (for example with cloudFiles.format = "csv"), Auto Loader persists file metadata in its checkpoint and uses it to guarantee that each file is processed exactly once and that the stream can resume safely after failures. You don't need a separate audit table just to avoid reprocessing the same file. See &lt;A href="https://docs.databricks.com/aws/en/ingestion/cloud-object-storage/auto-loader" rel="noreferrer" target="_blank"&gt;What is Auto Loader?&lt;/A&gt; and the “How does Auto Loader track ingestion progress?” section there.&lt;/P&gt;
&lt;P&gt;If you want human-readable observability (“which files, when, and in which batch?”), then yes, it's common to add an ingestion log table on top: either query Auto Loader's cloud_files_state metadata (which stores per-file state, including commit_time) or log the path column from your stream into a Delta table in a small foreachBatch. That gives you a clean audit trail without owning the low-level dedup logic yourself; the heavy lifting still comes from Auto Loader's internal state. The relevant options and the cloud_files_state TVF are documented under &lt;A href="https://docs.databricks.com/aws/en/ingestion/cloud-object-storage/auto-loader/options" rel="noreferrer" target="_blank"&gt;Auto Loader options&lt;/A&gt;.&lt;/P&gt;
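&lt;P&gt;As a sketch, you can inspect that per-file state directly from the stream's checkpoint; the checkpoint path below is a placeholder:&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;# Query Auto Loader's per-file state via the cloud_files_state table-valued function
spark.sql("""
    SELECT path, commit_time
    FROM cloud_files_state('/Volumes/my_catalog/my_schema/my_volume/_checkpoints/bronze')
    ORDER BY commit_time DESC
""").show(truncate=False)&lt;/LI-CODE&gt;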
&lt;P&gt;A robust pattern for your scenario: land CSVs (from GitHub Actions or manual upload) in a Unity Catalog volume, trigger a job on file arrival, use Auto Loader to read from that volume into a bronze table, do your validation/normalisation and any pivoting into silver, and finally MERGE into the final table. This keeps uploads simple, makes ingestion incremental and mostly self-driving, and still lets you add an explicit audit table if you want extra transparency into which files were processed when.&lt;/P&gt;
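&lt;P&gt;The final MERGE step of that pattern might look like the following; the table and key names are placeholders, and it assumes the prepared silver data is registered as a temporary view called silver_updates:&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;# Upsert the validated/normalised data into the final table
spark.sql("""
    MERGE INTO main.analytics.final_table AS t
    USING silver_updates AS s
    ON t.business_key = s.business_key
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")&lt;/LI-CODE&gt;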
&lt;P&gt;By the way, are you exporting data to CSV from an upstream system and then uploading it to the volume for a specific reason (governance, network, tooling, etc.)? If you have direct access to the source system, you might also look at pulling data straight into Databricks with Lakeflow Connect instead of going via CSV. Lakeflow Connect provides managed connectors for common SaaS apps and databases, with incremental ingestion into streaming tables, which can remove a lot of custom file-handling logic. If you are interested, see &lt;A href="https://docs.databricks.com/aws/en/ingestion/overview" rel="noreferrer" target="_blank"&gt;What is Lakeflow Connect?&lt;/A&gt; and &lt;A href="https://docs.databricks.com/aws/en/ingestion/lakeflow-connect" rel="noreferrer" target="_blank"&gt;Managed connectors in Lakeflow Connect&lt;/A&gt;.&lt;/P&gt;
&lt;P class="p1"&gt;&lt;FONT size="2" color="#FF6600"&gt;&lt;STRONG&gt;&lt;I&gt;If this answer resolves your question, could you mark it as “Accept as Solution”? That helps other users quickly find the correct fix.&lt;/I&gt;&lt;/STRONG&gt;&lt;/FONT&gt;&lt;I&gt;&lt;/I&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 24 Apr 2026 21:33:34 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/uploading-file-to-volume-and-start-ingestion-job/m-p/155471#M54254</guid>
      <dc:creator>Ashwin_DSA</dc:creator>
      <dc:date>2026-04-24T21:33:34Z</dc:date>
    </item>
    <item>
      <title>Re: Uploading file to volume and start ingestion job</title>
      <link>https://community.databricks.com/t5/data-engineering/uploading-file-to-volume-and-start-ingestion-job/m-p/156828#M54479</link>
      <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/216690"&gt;@Ashwin_DSA&lt;/a&gt;&lt;/P&gt;&lt;P&gt;thank you very much for this! Sorry for the delayed response, but I was on vacation for quite a long time. Auto Loader seems to be a good direction, I believe. By the way, is there a way to run the job as soon as a file is uploaded? I assume what you have in mind is to put a file-arrival trigger on the ingestion job and, inside that job, do:&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;from pyspark.sql.functions import current_timestamp, input_file_name

bronze_df = (
    spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", "csv")               # csv, parquet, json, avro, etc.
        .option("cloudFiles.schemaLocation", schema_location)
        .option("cloudFiles.inferColumnTypes", "true")
        .option("cloudFiles.schemaEvolutionMode", "addNewColumns")
        .option("cloudFiles.includeExistingFiles", "true")
        .load(source_volume_path)
        .withColumn("ingest_ts", current_timestamp())
        .withColumn("source_file", input_file_name())     # on Unity Catalog, col("_metadata.file_path") is the preferred alternative
)

# Write stream to bronze table with Trigger.AvailableNow
query = (
    bronze_df.writeStream
        .format("delta")
        .option("checkpointLocation", checkpoint_path)
        .option("mergeSchema", "true")
        .outputMode("append")
        .trigger(availableNow=True)                       # process all new files, then stop
        .toTable(target_table)
)

# Wait until all currently available files are processed, then let the job exit
query.awaitTermination()&lt;/LI-CODE&gt;&lt;P&gt;What if we would like the job to run immediately after a file is uploaded (without the roughly 60-second wait)? I assume the only approach is to have this job running constantly and, in the Python code, use .trigger(processingTime="30 seconds") to process changes every 30 seconds, correct?&lt;/P&gt;</description>
      <pubDate>Wed, 13 May 2026 14:23:04 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/uploading-file-to-volume-and-start-ingestion-job/m-p/156828#M54479</guid>
      <dc:creator>maikel</dc:creator>
      <dc:date>2026-05-13T14:23:04Z</dc:date>
    </item>
    <item>
      <title>Re: Uploading file to volume and start ingestion job</title>
      <link>https://community.databricks.com/t5/data-engineering/uploading-file-to-volume-and-start-ingestion-job/m-p/156846#M54482</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/192995"&gt;@maikel&lt;/a&gt;,&lt;/P&gt;
&lt;P data-pm-slice="1 1 []"&gt;Not exactly. If you're using a Databricks &lt;A href="https://docs.databricks.com/aws/en/jobs/file-arrival-triggers" rel="noopener noreferrer nofollow" target="_blank"&gt;file arrival trigger&lt;/A&gt;, it doesn't fire instantly when a file is uploaded. It makes a best-effort check roughly every minute, so it's better to think of it as near-real-time rather than immediate execution. In that setup, the usual pattern is to let the file arrival trigger start the job, and then use Auto Loader inside the job with trigger(availableNow=True) so it processes everything that has arrived since the last run and then exits cleanly.&lt;/P&gt;
&lt;P&gt;If you need lower latency than that, then yes, you're generally moving away from a file-arrival-triggered batch pattern and into a long-running streaming workload. That said, I wouldn't position trigger(processingTime="30 seconds") as the only option, or even the default recommendation. Databricks recommends &lt;A href="https://docs.databricks.com/aws/en/jobs/file-arrival-triggers" rel="noopener noreferrer nofollow" target="_blank"&gt;file arrival triggers&lt;/A&gt; for event-driven pipelines, and if you do use time-based streaming triggers, the guidance is to start at around &lt;A href="https://docs.databricks.com/aws/en/ingestion/cloud-object-storage/auto-loader/file-events-explained#configure-appropriate-intervals-with-continuous-triggers" rel="noopener noreferrer nofollow" target="_blank"&gt;1 minute or higher&lt;/A&gt;. For very latency-sensitive use cases, Databricks also suggests considering the classic file notification mode, since managed file events add an extra caching hop that can increase latency slightly.&lt;/P&gt;
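&lt;P&gt;If you do go the long-running route, the only change to your snippet is the trigger (reusing your variable names, and keeping the interval at one minute or higher per that guidance):&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;# Continuous variant: poll for new files once a minute instead of stopping after each batch
query = (
    bronze_df.writeStream
        .format("delta")
        .option("checkpointLocation", checkpoint_path)
        .outputMode("append")
        .trigger(processingTime="1 minute")
        .toTable(target_table)
)
query.awaitTermination()  # keeps the job running until it is explicitly stopped&lt;/LI-CODE&gt;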
&lt;P&gt;Hope this helps.&lt;/P&gt;
&lt;P class="p1"&gt;&lt;FONT size="2" color="#FF6600"&gt;&lt;STRONG&gt;&lt;I&gt;If this answer resolves your question, could you mark it as “Accept as Solution”? That helps other users quickly find the correct fix.&lt;/I&gt;&lt;/STRONG&gt;&lt;/FONT&gt;&lt;I&gt;&lt;/I&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 13 May 2026 15:59:32 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/uploading-file-to-volume-and-start-ingestion-job/m-p/156846#M54482</guid>
      <dc:creator>Ashwin_DSA</dc:creator>
      <dc:date>2026-05-13T15:59:32Z</dc:date>
    </item>
    <item>
      <title>Re: Uploading file to volume and start ingestion job</title>
      <link>https://community.databricks.com/t5/data-engineering/uploading-file-to-volume-and-start-ingestion-job/m-p/156849#M54484</link>
      <description>&lt;P&gt;Yeah, understood. Thank you very much once again!&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 13 May 2026 16:29:15 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/uploading-file-to-volume-and-start-ingestion-job/m-p/156849#M54484</guid>
      <dc:creator>maikel</dc:creator>
      <dc:date>2026-05-13T16:29:15Z</dc:date>
    </item>
  </channel>
</rss>

