Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Spark Streaming loading only 1k to 5k rows per batch into Delta table

bunny1174
New Contributor

Hi Team,

I have 4-5 million files in S3, only about 1.5 GB of data in total with 9 million records. When I try to use Auto Loader to read the data with readStream and write to a Delta table, processing takes too much time: it loads only 1k to 5k rows per batch...

The code is like below (input_path is the S3 folder):

df_stream = (
    spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.schemaLocation", f"{checkpoint_path}/schema/")
        .option("cloudFiles.includeExistingFiles", "true")
        .option("cloudFiles.fetchParallelism", "32")
        .option("cloudFiles.maxFilesPerTrigger", 50000)   # Adjust as needed
        .option("cloudFiles.maxBytesPerTrigger", "10g")   # Adjust as needed
        .load(input_path)
)


# Write to Delta table (append)
stream_query = (
    df_stream.writeStream
        .format("delta")
        .option("checkpointLocation", checkpoint_path)
        .outputMode("append")
        .trigger(availableNow=True)
        .toTable(delta_table)
)
Any suggestions on what to modify, please?
2 REPLIES

szymon_dybczak
Esteemed Contributor III

Hi @bunny1174 ,

You have 4-5 million files in S3 and their total size is only 1.5 GB - this clearly indicates a small files problem. You need to compact those files into larger ones. There's no way your pipeline will be performant when you have that many files, each only around 1-2 KB in size. (A minimal compaction sketch follows the article links below.)

You can read about this problem in general in the following articles:

Breaking the Big Data Bottleneck: Solving Spark’s “Small Files” Problem

Tackling the Small Files Problem in Apache Spark | by Henkel Data & Analytics | Henkel Data & Analyt...

Spark Small Files Problem: Optimizing Data Processing
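To give a rough idea, a one-off batch job can rewrite the small files into a handful of larger ones before Auto Loader ever touches them. This is only a minimal sketch of that approach: it reuses input_path from the original post, and compacted_path is a hypothetical new S3 prefix for the compacted output.

# Minimal one-off compaction sketch. Assumes the small JSON files live under
# input_path (from the original post); compacted_path is a hypothetical new
# S3 prefix for the larger output files.
df = spark.read.json(input_path)

(
    df.repartition(16)    # ~1.5 GB total / 16 partitions ≈ ~100 MB per output file
      .write
      .mode("overwrite")
      .json(compacted_path)
)

You could then point the Auto Loader stream at compacted_path, or simply write this batch DataFrame straight into the Delta table once and let the stream handle only new files going forward.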

Prajapathy_NKR
New Contributor II

@bunny1174 

It is a common issue that small files get created during streaming.

Since you are using the Delta file format, I would suggest two solutions (a short sketch of both follows the list):

1. Try using liquid clustering. It auto-compacts small files into bigger chunks, typically around 1 GB.

2. Try running the OPTIMIZE command on your Delta table. It also helps resolve this issue.
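For reference, both suggestions can be run from a notebook roughly like this; delta_table is the table name variable from the original post, and the clustering column event_date is just a hypothetical example.

# Minimal sketch, assuming delta_table holds the table name from the original
# post and event_date is a hypothetical clustering column.

# 1. Enable liquid clustering on the existing Delta table.
spark.sql(f"ALTER TABLE {delta_table} CLUSTER BY (event_date)")

# 2. Compact the existing small files; with liquid clustering enabled,
#    OPTIMIZE also clusters newly ingested data incrementally.
spark.sql(f"OPTIMIZE {delta_table}")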

Hope it helps.