Data Engineering
Reading BLOB data from Oracle and saving it to S3 from Databricks is slow

RKNutalapati
Valued Contributor

I am trying to import a table from Oracle with around 1.3 million rows, one of whose columns is a BLOB; the total data size on Oracle is 250+ GB. Reading it and saving it to S3 as a Delta table takes around 60 minutes. I tried a parallel read (200 threads) over JDBC, but it is still taking a long time.
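In case a concrete sketch helps: Spark's JDBC reader only parallelises when it is told how to split the table. One way is to pass non-overlapping range predicates to `spark.read.jdbc(..., predicates=...)`, which opens one Oracle connection per predicate. A minimal, hypothetical sketch (pure Python; the column name `ID` and the assumption of a roughly uniform numeric key are mine, not from the thread):

```python
def id_range_predicates(lower, upper, num_partitions, column="ID"):
    """Return non-overlapping WHERE-clause fragments covering [lower, upper].

    Each fragment becomes one JDBC partition, i.e. one parallel
    Oracle connection in spark.read.jdbc(..., predicates=preds).
    """
    stride = (upper - lower + 1) // num_partitions or 1
    preds = []
    start = lower
    for i in range(num_partitions):
        # Last partition absorbs any remainder so the full range is covered.
        end = upper if i == num_partitions - 1 else start + stride - 1
        preds.append(f"{column} >= {start} AND {column} <= {end}")
        start = end + 1
        if start > upper:
            break
    return preds

# Hypothetical usage with Spark (not runnable without an Oracle endpoint):
# preds = id_range_predicates(1, 1_300_000, 200)
# df = spark.read.jdbc(url, "MY_TABLE", predicates=preds,
#                      properties={"user": "...", "password": "...",
#                                  "fetchsize": "1000"})
# df.write.format("delta").save("s3://bucket/path")
```

With LOB columns, a larger `fetchsize` and fewer, larger partitions sometimes behave better than 200 threads, since each Oracle session pays LOB-locator round-trip costs; that is worth benchmarking rather than assuming.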

I would appreciate your suggestions on how to speed up the process.

4 REPLIES

Anonymous
Not applicable

Hello, @Rama Krishna N​ - My name is Piper and I'm one of the community moderators. Thanks for your question. Let's give it a bit longer to see what the community says. Thank you for your patience.

User16829050420
New Contributor III

Can you check the parallel threads and confirm whether the read or the write operation is slower? A slow read can be caused by network issues or by concurrency limits on the database.

Thanks @Ashwinkumar Jayakumar​ for the reply. I tried dataFrame.count(), and it didn't take much time. Please suggest any other good approach for checking whether the read operation is the slow part.

User16829050420
New Contributor III

Hello @Rama Krishna N​ - We will need to check the task in the Spark UI to determine whether the slow operation is the read from the Oracle database or the write to S3.

The task view should show the specific operation in the UI.

Also, the active-threads view in the Spark UI will show whether the operation currently running is a database operation.
