
parallelizing function call in databricks

Shivanshu_
Contributor

I have a use case where I have to process streaming data and create categorical tables (around 500 tables). I'm using concurrent thread pools to parallelize the whole process, but looking at the Spark UI, my code doesn't utilize all the workers (cluster configuration: Standard_e8ads for both driver and workers, 4 workers with 32 GB memory and 4 cores each). I'm using 4 threads.

The code sometimes executes on the driver and sometimes on a worker, and I never get utilization above 40 to 45% for 5 million records.

The function I call through the thread pool contains only Spark code.

Any help on this issue would be highly appreciated. Thanks in advance.
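For context, here is a minimal sketch of the pattern being described. The source table name (raw_events), the category column, and the output table names are assumptions; the point is that the ThreadPoolExecutor runs on the driver and each submitted task issues an ordinary Spark job that Spark itself distributes across the cluster.

```python
from concurrent.futures import ThreadPoolExecutor
from pyspark.sql.functions import col

# Hypothetical list of category values; the real use case has ~500.
categories = ["a", "b", "c", "d"]

def write_category_table(category):
    # This function runs on a driver-side thread, but the filter/write
    # below executes as a normal distributed Spark job on the workers.
    # `spark` is the SparkSession provided by the Databricks notebook.
    (spark.table("raw_events")                      # assumed source table
          .where(col("category") == category)
          .write.mode("overwrite")
          .saveAsTable(f"category_{category}"))     # assumed naming scheme

# 4 driver-side threads submitting Spark jobs concurrently.
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(write_category_table, categories))
```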

4 REPLIES

-werners-
Esteemed Contributor III

AFAIK a thread pool runs on a single machine (the driver), so by itself it cannot scale your work out to multiple nodes.
These tables you are talking about, are they Spark tables or tables in an external database?

Spark tables

-werners-
Esteemed Contributor III

Why not create a single table with 500 partitions?
If that is not an option, you could still write the data as a partitioned Parquet dataset and then create tables out of each partition using a small Python script, along the lines of the sketch below.
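A rough sketch of both options, with assumed paths and column names (raw_events, category, /mnt/data/events_by_category): write the data once, partitioned by the category column, then either use that single partitioned table directly or register one table per partition directory with a small loop.

```python
# Option 1: a single table partitioned by category (names assumed).
(spark.table("raw_events")
      .write.mode("overwrite")
      .partitionBy("category")
      .saveAsTable("events_by_category"))

# Option 2: write partitioned Parquet once, then register a table per partition.
(spark.table("raw_events")
      .write.mode("overwrite")
      .partitionBy("category")
      .parquet("/mnt/data/events_by_category"))

categories = [r.category for r in
              spark.table("raw_events").select("category").distinct().collect()]

for c in categories:
    spark.sql(f"""
        CREATE TABLE IF NOT EXISTS category_{c}
        USING PARQUET
        LOCATION '/mnt/data/events_by_category/category={c}'
    """)
```

Either way, a single Spark write job does the heavy lifting and can use the whole cluster, rather than many small jobs competing from driver-side threads.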

jose_gonzalez
Databricks Employee

You can use DLT (Delta Live Tables) and handle the many-to-one table reads there.
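If the suggestion is to let DLT manage the mapping between the one streaming source and the many category tables, a minimal sketch might look like the following. The source table name (raw_events), the category column, and the category list are assumptions; the pattern is generating one DLT table definition per category in a loop.

```python
import dlt
from pyspark.sql.functions import col

# Hypothetical list of the ~500 category values.
categories = ["a", "b", "c"]

def make_category_table(category):
    # Define a streaming DLT table for one category.
    @dlt.table(name=f"category_{category}")      # assumed naming scheme
    def t():
        return (
            dlt.read_stream("raw_events")         # assumed source table
               .where(col("category") == category)
        )

for c in categories:
    make_category_table(c)
```

DLT then plans and runs the whole pipeline itself, so you don't have to manage driver-side threads or per-table jobs.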
