maxFilesPerTrigger not working in bronze to silver layer

sanjay
Valued Contributor II

Hi,

I am using the medallion architecture, where Auto Loader picks up files from AWS S3 and saves them to Delta Lake. The next layer picks up the changes from Delta Lake and does some processing. I am able to set the batch size in Auto Loader and it's working. But in the bronze-to-silver layer I am unable to set a batch limit; it picks up all the files in one go. Here is my code for the bronze-to-silver layer:

(spark.readStream.format("delta")
    .option("useNotification", "true")
    .option("includeExistingFiles", "true")
    .option("allowOverwrites", True)
    .option("ignoreMissingFiles", True)
    .option("maxFilesPerTrigger", 100)
    .load(bronze_path)
    .writeStream
    .option("checkpointLocation", silver_checkpoint_path)
    .trigger(processingTime="1 minute")
    .foreachBatch(foreachBatchFunction)
    .start()
)

Appreciate any help.

Regards,

Sanjay

3 REPLIES

Anonymous
Not applicable

Hi @Sanjay Jain​ 

Great to meet you, and thanks for your question!

Let's see if your peers in the community have an answer to your question. Thanks.

Lakshay
Databricks Employee

Hi @Sanjay Jain​, could you try using a fresh checkpoint location, if you have not already? Also, could you please check in the logs what size of micro-batch it is currently processing?
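One way to check the micro-batch size without digging through driver logs is the streaming query's `lastProgress` dictionary, which reports `numInputRows` overall and per source. A minimal sketch; since a live streaming query isn't available here, it uses a hand-built dict of the same shape as `StreamingQueryProgress`, and the helper name `batch_sizes` plus the sample values are illustrative only:

```python
# Hedged sketch: in a notebook you would pass query.lastProgress (the dict
# returned by a running StreamingQuery) instead of this sample.

def batch_sizes(progress):
    """Return (total input rows, rows per source) for one micro-batch."""
    per_source = {
        s["description"]: s["numInputRows"]
        for s in progress.get("sources", [])
    }
    return progress.get("numInputRows", 0), per_source

# Sample dict mirroring the StreamingQueryProgress JSON shape; values made up.
sample_progress = {
    "batchId": 7,
    "numInputRows": 125000,
    "sources": [
        {"description": "DeltaSource[bronze_path]", "numInputRows": 125000},
    ],
}

total, per_source = batch_sizes(sample_progress)
print(total, per_source)
```

If `numInputRows` is far above 100 files' worth of rows per batch, the rate limit is indeed not being applied.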

sanjay
Valued Contributor II

Hi Lakshay,

I tried with a new checkpoint location, but it's still not working. It's taking the whole data in one go and not respecting the batch size set in the code.

Regards,

Sanjay
