parallelizing function call in databricks

Shivanshu_
New Contributor III

I have a use case where I have to process streaming data and create categorical tables (500 tables in total). I'm using concurrent thread pools to parallelize the whole process, but looking at the Spark UI, my code doesn't utilize all the workers (cluster configuration: Standard_E8ads for both driver and workers, 4 workers with 32 GB memory and 4 cores each). I'm using 4 threads.

The code sometimes executes on the driver or on a worker, and I never get utilization above 40 to 45% for 5 million records.

The function I call through the thread pool contains all the Spark code.

Any help on the issue will be highly appreciated; thanks in advance.

4 REPLIES

-werners-
Esteemed Contributor III

AFAIK a thread pool runs on a single machine, so by using it alone you cannot scale out to multiple nodes.
Are the tables you are talking about Spark tables, or tables in a database?
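To make the pattern under discussion concrete, here is a minimal sketch of a driver-side thread pool submitting one job per category. The function name `process_category` and the category values are hypothetical; on Databricks its body would hold the actual Spark write, and only the Spark tasks it triggers would run on the workers. A pure-Python stand-in is used here so the sketch runs anywhere.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Hypothetical per-category job. On Databricks the body would contain the
# real Spark code, e.g.:
#   df.filter(df.category == category).write.saveAsTable(f"cat_{category}")
# Here a pure-Python stand-in is used so the example is self-contained.
def process_category(category):
    return f"wrote table cat_{category}"

categories = ["a", "b", "c", "d"]

# The pool itself lives on the driver: each thread only *submits* a job.
# With Spark, the job's tasks are what run on the workers, so adding more
# threads than the cluster can absorb concurrently will not add throughput.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = {pool.submit(process_category, c): c for c in categories}
    results = [f.result() for f in as_completed(futures)]

print(sorted(results))
```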

Shivanshu_
New Contributor III

Spark tables

-werners-
Esteemed Contributor III

Why not create a single table with 500 partitions?
If that is not an option, you could still write the data as partitioned parquet files and then create tables from each partition using a small Python script.
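A rough sketch of that suggestion, assuming the partition column is named `category` and the parquet output lives under a path like `/mnt/data/events` (both are assumptions, not from the thread). The helper that builds the CREATE TABLE statement is pure Python and illustrative; the commented lines show where the Spark calls would go on Databricks.

```python
# Build a CREATE TABLE statement that points one table at one partition
# directory of the partitioned parquet output. Names/paths are assumptions.
def create_table_sql(table_name, base_path, category):
    return (
        f"CREATE TABLE IF NOT EXISTS {table_name} "
        f"USING PARQUET LOCATION '{base_path}/category={category}'"
    )

# On Databricks, the surrounding script would look roughly like:
#   df.write.partitionBy("category").parquet(base_path)
#   for cat in categories:
#       spark.sql(create_table_sql(f"cat_{cat}", base_path, cat))

stmt = create_table_sql("cat_sales", "/mnt/data/events", "sales")
print(stmt)
```

This way the expensive write happens once, in parallel across the cluster, and the per-table registration loop is cheap metadata work.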

jose_gonzalez
Moderator

You can use DLT (Delta Live Tables) to read from many sources into one table.
