10-16-2021 02:16 AM
I am trying to import a table from Oracle that has around 1.3 million rows, and one of the columns is a BLOB; the total data size in Oracle is around 250+ GB. Reading it and saving it to S3 as a Delta table takes around 60 minutes. I tried a parallel read (200 threads) using JDBC, but it is still taking too long.
Appreciate your valuable suggestions to speed up the process.
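For reference, a minimal sketch of a partitioned JDBC read followed by a Delta write to S3 is below. The connection URL, credentials, table name, partition column, bounds, and S3 path are all placeholders, and the option values (numPartitions, fetchsize) are assumptions to tune for this cluster and the Oracle side, not known-good settings for this workload.

```python
# Hypothetical sketch: partitioned JDBC read from Oracle, then a Delta write to S3.
# All identifiers (URL, credentials, table, partition column, S3 path) are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

jdbc_url = "jdbc:oracle:thin:@//oracle-host:1521/ORCLPDB1"  # placeholder

df = (
    spark.read.format("jdbc")
    .option("url", jdbc_url)
    .option("dbtable", "MYSCHEMA.MY_TABLE")            # placeholder table
    .option("user", "my_user")                         # placeholder credentials
    .option("password", "my_password")
    .option("driver", "oracle.jdbc.OracleDriver")
    # Partition the read so several executors pull rows in parallel.
    # The partition column should be numeric, indexed, and fairly evenly distributed.
    .option("partitionColumn", "ID")                   # placeholder column
    .option("lowerBound", "1")
    .option("upperBound", "1300000")
    .option("numPartitions", "64")
    # fetchsize is how many rows the Oracle driver pulls per round trip;
    # with large BLOB rows a smaller value may be needed to avoid memory pressure.
    .option("fetchsize", "1000")
    .load()
)

# Write straight out as a Delta table on S3 (path is a placeholder).
df.write.format("delta").mode("overwrite").save("s3://my-bucket/path/my_table")
```

With very wide BLOB rows, the bottleneck is often how fast Oracle and the network can stream the LOB data, so adding more partitions alone may not help beyond a point.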
10-17-2021 11:56 AM
Hello, @Rama Krishna N - My name is Piper and I'm one of the community moderators. Thanks for your question. Let's give it a bit longer to see what the community says. Thank you for your patience.
10-20-2021 01:04 AM
Can you check the parallel threads and confirm whether the read or the write operation is slower? Slow reads can be caused by network issues or concurrency limits on the database.
10-20-2021 01:28 AM
Thanks @Ashwinkumar Jayakumar for the reply. I tried dataFrame.count and it didn't take much time. Please suggest if there is any other good approach to check whether the read operation is slow.
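One caveat, plus a hedged sketch of an alternative check (assuming df is the JDBC DataFrame built as in the earlier sketch, and the S3 path is a placeholder): a count() on a JDBC DataFrame can benefit from column pruning, so the heavy BLOB column may never be transferred and the count can look fast even when the full read is slow. On Spark 3.0+, a "noop" write forces every column to be read without paying the S3/Delta write cost, which makes it a reasonable way to time the read side on its own.

```python
import time

# Time the read side only: the noop sink materializes the full DataFrame
# (including the BLOB column) but writes nothing out.
start = time.time()
df.write.format("noop").mode("overwrite").save()
print(f"Full read took {time.time() - start:.1f} s")

# Time read + write together for comparison (placeholder S3 path).
start = time.time()
df.write.format("delta").mode("overwrite").save("s3://my-bucket/path/my_table")
print(f"Read + Delta write took {time.time() - start:.1f} s")
```

If the noop run is already close to 60 minutes, the bottleneck is the Oracle/JDBC read rather than the S3 write.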
10-20-2021 01:36 AM
Hello @Rama Krishna N - We will need to check the tasks on the Spark UI to confirm whether the slow operation is the read from the Oracle database or the write to S3.
The task should show the specific operation on the UI.
Also, the active threads shown on the Spark UI will indicate whether the specific operation is a database operation.
10-16-2024 07:21 AM
Any update on this topic? What would be the best option to read from Oracle and write to ADLS?