07-27-2022 06:13 AM
Hey there Community! I have a client that will produce a CSV file daily that needs to be moved from Bronze -> Silver. Unfortunately, this source file will always be a full set of data, not incremental. I was thinking of using Auto Loader/cloudFiles to take advantage of the checkpointLocation and just running with Trigger Once. However, I need to ensure that all of the parquet files in the Silver S3 bucket are completely deleted/overwritten on each run. What is the .option to use in .writeStream to do this?
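For reference, here is a minimal sketch of the Auto Loader + Trigger Once setup I'm describing. The bucket paths and schema location are placeholders, not my real ones:

```python
# Minimal Auto Loader sketch: daily CSV snapshots in Bronze -> parquet in Silver.
# All paths below are hypothetical placeholders; `spark` is the ambient
# SparkSession in a Databricks notebook.
bronze_path = "s3://my-bronze-bucket/client-feed/"
silver_path = "s3://my-silver-bucket/client-feed/"
checkpoint = "s3://my-silver-bucket/_checkpoints/client-feed/"

df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "csv")
      .option("cloudFiles.schemaLocation", checkpoint)  # Auto Loader tracks the inferred schema here
      .option("header", "true")
      .load(bronze_path))

# Note: the parquet streaming sink only supports append, which is where the
# "overwrite everything each run" requirement gets awkward (see below).
(df.writeStream
   .format("parquet")
   .option("checkpointLocation", checkpoint)
   .trigger(once=True)
   .start(silver_path))
```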
07-27-2022 06:32 AM
07-27-2022 06:54 AM
Thanks @Werner Stinckens... yes, that would work if I were using Delta, but I'm using .writeStream.format('parquet') and get the error "Data source parquet does not support Complete output mode". The reason I'm not using Delta is because once the parquet files are written to S3, I would then crawl them with AWS Glue. I guess the alternative is to use Delta as the output, do .outputMode("complete"), and then just create a manifest file for Athena queries and skip the Glue crawler?
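Roughly what I have in mind (reusing the `df` stream from the sketch above; paths are placeholders). Since complete output mode generally requires an aggregation in the streaming query, I'm assuming a foreachBatch overwrite here as one way to get "replace the whole table each run" semantics with Delta, then regenerating the symlink manifest for Athena:

```python
# Sketch only: Delta output fully overwritten each run, plus an Athena manifest.
# Paths are hypothetical; `df` is the Auto Loader stream from the earlier sketch.
from delta.tables import DeltaTable

silver_delta = "s3://my-silver-bucket/client-feed-delta/"
checkpoint_delta = "s3://my-silver-bucket/_checkpoints/client-feed-delta/"

def overwrite_silver(batch_df, batch_id):
    # Each incoming file is a full snapshot, so replace the whole Delta table
    batch_df.write.format("delta").mode("overwrite").save(silver_delta)
    # Regenerate the symlink manifest so Athena picks up the new file list
    DeltaTable.forPath(spark, silver_delta).generate("symlink_format_manifest")

(df.writeStream
   .foreachBatch(overwrite_silver)
   .option("checkpointLocation", checkpoint_delta)
   .trigger(once=True)
   .start())
```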
07-27-2022 06:55 AM
or use good old batch instead of streaming?
07-27-2022 07:11 AM
Yeah, I tried that initially, but the issue is that csv1 will be processed into the Silver bucket... all good there. Then the next day csv2 will land in the same Bronze S3 bucket as csv1, and it will have all the rows from csv1 plus possibly some new or updated data. The next time the batch runs, it will read both of these files, thereby duplicating data in Silver. That's why I tried Auto Loader: it keeps track of which files it has already processed in the source. I'm getting ready to try the manifest file idea now... more soon.
07-27-2022 07:13 AM
If you have any influence on the name of the incoming file (or its location), you could add a date to the filename or put each day's file in a subdirectory like yyyy/mm/dd. That is how I organize my bronze.
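Something along these lines, assuming the yyyy/mm/dd layout and hypothetical bucket names; a plain batch job then only reads today's drop and overwrites Silver with it:

```python
# Sketch: read only today's full-snapshot CSV from a dated Bronze subdirectory
# and overwrite Silver. Bucket names are placeholders; `spark` is the notebook session.
from datetime import date

today = date.today()
bronze_today = f"s3://my-bronze-bucket/client-feed/{today:%Y/%m/%d}/"
silver_path = "s3://my-silver-bucket/client-feed/"

latest = (spark.read
          .option("header", "true")
          .csv(bronze_today))

# Each file is a complete snapshot, so overwriting keeps Silver duplicate-free
(latest.write
       .mode("overwrite")
       .parquet(silver_path))
```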
07-27-2022 07:34 AM
Hmmm @Werner Stinckens... I hadn't even thought about the most obvious/easiest approach. Love that! Will keep this thread posted on my outcome.
thanks!
08-12-2022 10:44 AM
I "up voted'" all of @werners suggestions b/c they are all very valid ways of addressing my need (the true power/flexibility of the Databricks UDAP!!!). However, turns out I'm going to end up getting incremental data afterall :). So now the flow will go like this: Salesforce -> AWS AppFlow -> S3 Bronze -> Databricks DLT w/AutoLoader -> S3 Silver. thanks again @werners !
08-17-2022 02:03 PM
Hi @Brad Sheridan,
Just a friendly follow-up. Did any of the responses help you resolve your question? If so, please mark it as the best answer. Otherwise, please let us know if you still need help.
08-18-2022 05:42 AM
Morning Jose. I just marked the first answer as best. Thanks!