Data Engineering

all-purpose compute for Oracle queries

ElaPG1
New Contributor

Hi,

I am looking for guidelines or best practices on compute configuration for extracting data from an Oracle database and saving it as Parquet files. Right now I have a Databricks workflow with a For Each task, concurrency = 31 (as I need to copy the data from 31 tables). I use Standard_D8s_v5 for both the worker and the driver (32 GB memory, 8 cores, min workers 2, max workers 31, autoscaling enabled). It takes over 1.5 hours to save the results from all 31 tables.

Any ideas what could potentially speed up the process?

1 REPLY

NandiniN
Databricks Employee

Hi @ElaPG1 ,

While the cluster sounds like a reasonably good one, with autoscaling enabled, the right configuration also depends on the workload.

  • The Standard_D8s_v5 instances you are using have 32 GB of memory and 8 cores. These are generally a good fit, but you might want to experiment with instance types that offer a better balance of CPU and memory for your specific workload; for example, instances with more memory can help if your tasks are memory-intensive.
  • Adjust the batch size for data extraction from Oracle (the JDBC fetchsize option). Larger batch sizes reduce the number of round trips to the database, but they also require more memory.
  • Ensure that the extraction is parallelized effectively. Beyond running the 31 tables concurrently, each table read can itself be split across multiple connections to Oracle using Spark's JDBC partitioning options (see the sketch after this list).
  • Check that you are using appropriate block sizes and compression codecs. For example, snappy compression can speed up the writing process.
  • Partition the data appropriately when writing the Parquet files. This can improve both write and subsequent read performance.
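
As a concrete illustration of the fetchsize, JDBC-partitioning, and compression points above, here is a minimal PySpark sketch (assuming it runs in a Databricks notebook, where spark and dbutils are available). The host, credentials, table name, and the ID / LOAD_DATE columns are hypothetical placeholders; partitionColumn should be a roughly evenly distributed numeric column, and the bounds should match its real range:

```python
# Minimal sketch of a parallelized Oracle extract to Parquet.
# Host, credentials, table, and column names are hypothetical placeholders.
jdbc_url = "jdbc:oracle:thin:@//oracle-host:1521/ORCLPDB1"

df = (
    spark.read.format("jdbc")
    .option("url", jdbc_url)
    .option("dbtable", "MYSCHEMA.MY_TABLE")
    .option("user", "etl_user")
    .option("password", dbutils.secrets.get("my-scope", "oracle-pwd"))
    .option("driver", "oracle.jdbc.OracleDriver")
    # Rows fetched per round trip; Oracle drivers default to a small value,
    # so raising this usually helps, at the cost of executor memory.
    .option("fetchsize", 10000)
    # Split the read into 8 parallel queries over a numeric key column,
    # instead of pulling the whole table through a single connection.
    .option("partitionColumn", "ID")
    .option("lowerBound", 1)
    .option("upperBound", 10000000)
    .option("numPartitions", 8)
    .load()
)

# Write snappy-compressed Parquet, partitioned by a hypothetical date column.
(
    df.write.mode("overwrite")
    .option("compression", "snappy")
    .partitionBy("LOAD_DATE")
    .parquet("/mnt/landing/my_table")
)
```

Note that lowerBound and upperBound only control how the read is split into partitions; rows outside that range are still read.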

I would also suggest going to the Spark UI to identify the stage or task that is taking the most time and the operation it is performing, and checking the metrics on the DAG (enable the additional metrics checkbox in the Spark UI).

You can also review the cluster's memory and CPU utilization.

Thanks!
