How to use cloudFiles to completely overwrite the target

BradSheridan
Valued Contributor

Hey there Community!! I have a client that will produce a CSV file daily that needs to be moved from Bronze -> Silver. Unfortunately, this source file will always be a full set of data, not incremental. I was thinking of using Auto Loader/cloudFiles to take advantage of the checkpointLocation and just run it with Trigger Once. However, I need to ensure that all of the parquet files in the Silver S3 bucket are completely deleted/overwritten each run. What is the .option to use in .writeStream to do this?
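
For context, a minimal sketch of the setup being described; the bucket names, schema location, and checkpoint location below are placeholders, not the poster's actual values:

df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "csv")
      .option("cloudFiles.schemaLocation", "s3://my-silver-bucket/_schemas/daily_feed/")  # placeholder path
      .option("header", "true")
      .load("s3://my-bronze-bucket/daily_feed/"))                                         # placeholder path

(df.writeStream
   .format("parquet")
   .option("checkpointLocation", "s3://my-silver-bucket/_checkpoints/daily_feed/")        # placeholder path
   .trigger(once=True)        # Trigger Once: process whatever is new, then stop
   .start("s3://my-silver-bucket/daily_feed/"))

The catch is that the Structured Streaming file (parquet) sink only supports append output, so there is no single .writeStream option that truncates the target directory first.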


BradSheridan
Valued Contributor

Thanks @Werner Stinckens... yes, that would work if I were using Delta, but I'm using .writeStream.format('parquet') and get the error "Data source parquet does not support Complete output mode". The reason I'm not using Delta is because once the parquet files are written to S3, I then crawl them with AWS Glue. I guess the alternative is to use Delta as the output, do .outputMode("complete"), and then just create a manifest file for Athena queries and skip the Glue crawler?
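
If the Delta-plus-manifest route is taken, the manifest piece looks roughly like this (the Silver path is a placeholder): Delta Lake's generate("symlink_format_manifest") writes the symlink manifest that Athena/Presto read, and the symlinkFormatManifest table property keeps it current on every write, so no Glue crawler is needed.

from delta.tables import DeltaTable

# Hypothetical Silver location for the Delta table
silver = DeltaTable.forPath(spark, "s3://my-silver-bucket/daily_feed_delta/")

# One-off: generate the symlink manifest that Athena queries against
silver.generate("symlink_format_manifest")

# Optional: regenerate the manifest automatically on every write to the table
spark.sql("""
  ALTER TABLE delta.`s3://my-silver-bucket/daily_feed_delta/`
  SET TBLPROPERTIES (delta.compatibility.symlinkFormatManifest.enabled = true)
""")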

-werners-
Esteemed Contributor III

Or use good old batch instead of streaming?

Yeah, I tried that initially, but the issue is that csv1 will be processed into the Silver bucket (all good there). Then the next day csv2 will land in the same Bronze S3 bucket as csv1, and it will have all the rows from csv1 plus possibly some new or updated data. The next time the batch runs, it will read both files, duplicating data in Silver. That's why I tried Auto Loader: it keeps track of which files it has already processed in the source. I'm getting ready to try the manifest file idea now... more soon

-werners-
Esteemed Contributor III

If you have any influence on the name of the incoming file (or its location), you could add a date to the filename or put the files in a yyyy/mm/dd subdirectory. That is how I organize my Bronze.
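
A rough batch sketch of that layout (bucket names and the date format are assumptions), assuming one full extract lands per day: reading only today's folder avoids picking up yesterday's file, and mode("overwrite") fully replaces the Silver parquet on each run.

from datetime import date

# Hypothetical Bronze layout: s3://my-bronze-bucket/daily_feed/yyyy/mm/dd/
bronze_path = f"s3://my-bronze-bucket/daily_feed/{date.today():%Y/%m/%d}/"

df = (spark.read
      .option("header", "true")
      .csv(bronze_path))

# Each run replaces Silver wholesale with the latest full extract
(df.write
   .mode("overwrite")
   .parquet("s3://my-silver-bucket/daily_feed/"))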

Hmmm @Werner Stinckens... I hadn't even thought about the most obvious/easiest approach. Love that! Will keep this thread posted on my outcome

thanks!

BradSheridan
Valued Contributor

I "up voted'" all of @werners suggestions b/c they are all very valid ways of addressing my need (the true power/flexibility of the Databricks UDAP!!!). However, turns out I'm going to end up getting incremental data afterall :). So now the flow will go like this: Salesforce -> AWS AppFlow -> S3 Bronze -> Databricks DLT w/AutoLoader -> S3 Silver. thanks again @werners !

Hi @Brad Sheridan,

Just a friendly follow-up. Did any of the responses help you resolve your question? If so, please mark that answer as best. Otherwise, please let us know if you still need help.

Morning Jose. I just marked the first answer as best. Thanks!
