Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

How to use cloudFiles to completely overwrite the target

BradSheridan
Valued Contributor

Hey there Community!! I have a client that will produce a CSV file daily that needs to be moved from Bronze -> Silver. Unfortunately, this source file will always be a full set of data, not incremental. I was thinking of using Auto Loader/cloudFiles to take advantage of the checkpointLocation and just run with Trigger Once. However, I need to ensure that all of the parquet files in the Silver S3 bucket are completely deleted/overwritten on each run. What is the .option to use in .writeStream to do this?
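For reference, a minimal sketch of the setup being described (the bucket names, schema location, and checkpoint path below are placeholders): Auto Loader picks up only new files, but the parquet file sink can only append to the target directory, which is where the overwrite question comes from.

```python
# Rough sketch of the setup described above -- all paths are placeholders.
df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "csv")
      .option("cloudFiles.schemaLocation", "s3://my-bucket/_schemas/daily_feed/")
      .option("header", "true")
      .load("s3://my-bucket/bronze/daily_feed/"))

(df.writeStream
   .format("parquet")  # the plain parquet file sink only appends; it has no overwrite option
   .option("checkpointLocation", "s3://my-bucket/_checkpoints/daily_feed/")
   .trigger(once=True)
   .start("s3://my-bucket/silver/daily_feed/"))
```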

1 ACCEPTED SOLUTION

Accepted solution by -werners- (Esteemed Contributor III)

9 REPLIES

BradSheridan
Valued Contributor

Thanks @Werner Stinckens...yes, that would work if I was using Delta, but I'm using .writeStream.format('parquet') and get the error "Data source parquet does not support Complete output mode". The reason I'm not using Delta is because once the parquet files are written to S3, I would then crawl them with AWS Glue. I guess the alternative is to use Delta as the output, do .outputMode("complete"), and then just create a manifest file for Athena queries and skip the Glue crawler?
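One pattern that would give full-overwrite behaviour while keeping Auto Loader's file tracking (not necessarily what was suggested above, and all paths here are placeholders) is foreachBatch with a batch-mode overwrite into Delta, followed by generating a symlink manifest so Athena can query the result without a Glue crawler:

```python
from delta.tables import DeltaTable

SILVER_PATH = "s3://my-silver-bucket/daily_feed/"  # placeholder

def overwrite_silver(batch_df, batch_id):
    # Each micro-batch (one per day with Trigger Once) fully replaces the Silver table.
    batch_df.write.format("delta").mode("overwrite").save(SILVER_PATH)

query = (spark.readStream
         .format("cloudFiles")
         .option("cloudFiles.format", "csv")
         .option("cloudFiles.schemaLocation", "s3://my-bronze-bucket/_schemas/daily_feed/")  # placeholder
         .option("header", "true")
         .load("s3://my-bronze-bucket/daily_feed/")                                          # placeholder
         .writeStream
         .foreachBatch(overwrite_silver)
         .option("checkpointLocation", "s3://my-silver-bucket/_checkpoints/daily_feed/")     # placeholder
         .trigger(once=True)
         .start())

query.awaitTermination()

# The symlink manifest lets Athena read the current Delta snapshot directly.
DeltaTable.forPath(spark, SILVER_PATH).generate("symlink_format_manifest")
```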

-werners-
Esteemed Contributor III

Or use good old batch instead of streaming?

BradSheridan
Valued Contributor

Yeah, I tried that initially, but the issue is that csv1 will be processed into the Silver bucket...all good there. Then the next day csv2 will land in the same Bronze S3 bucket as csv1, and it will have all the rows from csv1 plus possibly some new or updated data. The next time the batch runs, it will read both of these files, therefore duplicating data in Silver. So that's why I tried Auto Loader...because it keeps track of which files it has already processed in the source. I'm getting ready to try the manifest file idea now...more soon.
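For completeness, the plain-batch version being described would look roughly like this (paths are placeholders); because spark.read picks up every file under the Bronze prefix, and csv2 already contains csv1's rows, each run lands the same rows in Silver more than once:

```python
# Re-reads the entire Bronze prefix on every run (csv1, csv2, ...), so without
# cleanup or partitioned paths, rows from earlier files are duplicated in Silver.
(spark.read
    .option("header", True)
    .csv("s3://my-bronze-bucket/daily_feed/")       # placeholder
    .write
    .mode("overwrite")
    .parquet("s3://my-silver-bucket/daily_feed/"))  # placeholder
```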

-werners-
Esteemed Contributor III

If you have any influence on the name of the incoming file (or its location), you could add a date to the filename or put the files in a yyyy/mm/dd subdirectory. That is how I organize my Bronze layer.
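As a rough sketch of that layout (bucket names are placeholders), each day's run would then read only today's subdirectory and overwrite Silver:

```python
from datetime import date

# Assumes the incoming files are dropped under a yyyy/mm/dd prefix as suggested above.
bronze_today = f"s3://my-bronze-bucket/daily_feed/{date.today():%Y/%m/%d}/"  # placeholder bucket

(spark.read
    .option("header", True)
    .csv(bronze_today)
    .write
    .mode("overwrite")
    .parquet("s3://my-silver-bucket/daily_feed/"))  # placeholder bucket
```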

BradSheridan
Valued Contributor

Hmmm @Werner Stinckens...I hadn't even thought about the most obvious/easiest approach. Love that! Will keep this thread posted on my outcome.

thanks!

BradSheridan
Valued Contributor

I "up voted'" all of @werners suggestions b/c they are all very valid ways of addressing my need (the true power/flexibility of the Databricks UDAP!!!). However, turns out I'm going to end up getting incremental data afterall :). So now the flow will go like this: Salesforce -> AWS AppFlow -> S3 Bronze -> Databricks DLT w/AutoLoader -> S3 Silver. thanks again @werners !

Hi @Brad Sheridan,

Just a friendly follow-up. Did any of the responses help you resolve your question? If so, please mark it as best. Otherwise, please let us know if you still need help.

Morning Jose. I marked the first answer as best just now. Thanks!
