
Best practices for initial large-scale ingestion from on-premises Oracle to Databricks

faruko
New Contributor II

Hello everyone,

I am responsible for designing and implementing a Lakehouse architecture at an industrial company.
I am currently facing some challenges with the initial ingestion of data from our on-premises Oracle database into Databricks.

The data comes from production systems and is actively used by several applications. My main concern is that the initial load is very large, and I'm worried about impacting database performance or even causing issues if we extract all the data at once.

For the ongoing ingestion, the data volume will be much smaller and continuous, so that part is not an issue.
However, I would really appreciate advice or best practices on how to safely handle the first large-scale ingestion (initial load) without overloading or disrupting the Oracle database.

What approaches, tools, or patterns would you recommend in this situation?

Thank you in advance for your help.

2 REPLIES

szymon_dybczak
Esteemed Contributor III

Hi @faruko,

You can split the initial load using partitioned reads. We used this approach in one of our projects. So instead of doing something like this:

SELECT * FROM large_table

You can do this:

SELECT *
FROM large_table
WHERE id BETWEEN 0 AND 1000000

With that approach you can even stop and resume the loading process, provided you implement it carefully. Also, the best time for the initial load is at night, when there are fewer active users and queries on the database.
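In Spark you don't have to hand-roll those ranged queries: the JDBC reader's built-in partitioning options generate one ranged query per partition for you. A minimal PySpark sketch, assuming a numeric ID column; the host, schema, table, and credentials below are placeholders, not your real values:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # already provided in Databricks notebooks

df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//oracle-host:1521/ORCLPDB1")  # hypothetical
    .option("dbtable", "SCHEMA.LARGE_TABLE")                         # hypothetical
    .option("user", "readonly_user")
    .option("password", "...")  # use a Databricks secret scope in practice
    .option("driver", "oracle.jdbc.OracleDriver")
    # Spark rewrites the read as numPartitions ranged queries on the
    # partition column instead of one full-table scan.
    .option("partitionColumn", "ID")
    .option("lowerBound", "0")           # roughly MIN(ID)
    .option("upperBound", "100000000")   # roughly MAX(ID)
    .option("numPartitions", "16")
    .load()
)

df.write.format("delta").mode("append").saveAsTable("bronze.large_table")

Each partition opens its own connection to Oracle, so numPartitions is also your throttle on source load; keep it modest.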

faruko
New Contributor II

Thank you for your suggestion.

Unfortunately, we do not have a unique incremental ID. Our data is keyed by multiple tag_ids, with one record per tag per minute, identified by a timestamp.

We initially considered using spark.readStream to load the historical data month by month during low-usage periods (e.g. weekends), but we are not certain whether switching the ingestion to continuous afterwards would remain compatible with checkpointing and state tracking.
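For what it's worth, the same ranged-query idea works with a timestamp instead of a numeric key: drive the backfill month by month with plain batch reads, and keep streaming only for the ongoing ingestion. A hedged sketch, with hypothetical table and column names (SCHEMA.SENSOR_DATA, TS) and an indexed timestamp column assumed:

from datetime import date
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

def month_ranges(first: date, last: date):
    """Yield (start, end) pairs covering each month from first to last inclusive."""
    y, m = first.year, first.month
    while (y, m) <= (last.year, last.month):
        start = date(y, m, 1)
        y, m = (y + 1, 1) if m == 12 else (y, m + 1)
        yield start, date(y, m, 1)

for start, end in month_ranges(date(2020, 1, 1), date(2024, 12, 1)):
    month_query = f"""
        SELECT *
        FROM SCHEMA.SENSOR_DATA
        WHERE TS >= DATE '{start}' AND TS < DATE '{end}'
    """
    month_df = (
        spark.read.format("jdbc")
        .option("url", "jdbc:oracle:thin:@//oracle-host:1521/ORCLPDB1")  # hypothetical
        .option("query", month_query)  # pushes the month filter down to Oracle
        .option("user", "readonly_user")
        .option("password", "...")
        .option("driver", "oracle.jdbc.OracleDriver")
        .load()
    )
    # Append one month at a time; if the job is stopped, rerun from the
    # first missing month -- that is the stop/resume point.
    month_df.write.format("delta").mode("append").saveAsTable("bronze.sensor_data")

One common pattern is to do the whole historical load this way as batch jobs, then start the continuous readStream with a fresh checkpoint from the cut-over timestamp, so the streaming state and checkpoints never have to absorb the backfill.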