Sunday
I am trying to bring Oracle Fusion (SCM, HCM, Finance) data into ADLS Gen2, with Databricks used for data transformation and Power BI used for report visualization.
I have 3 options:
Option 1:
Option 2:
Option 3:
Could someone please help me decide which is the most cost-effective enterprise approach and why? Or is there any other way to achieve this effectively?
Thanks a lot
Monday
My preference is option 1
Delta Sharing is the most efficient and secure integration between Databricks and external systems.
No JDBC bottlenecks (no long-running queries, no network saturation).
Data shared as Delta format, which is natively optimized for Databricks.
Lower operational overhead: Databricks reads the Delta Shares directly.
Good for large volumes (Finance, SCM, HCM typically generate big datasets).
Strong governance and lineage support.
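For illustration, here is a minimal sketch of how a recipient reads a Delta Share from Databricks. The share, schema, and table names below are hypothetical, and it assumes the share has already been granted and (for the first variant) mounted as a Unity Catalog catalog:

```python
# Minimal sketch: reading Delta Sharing data in Databricks.
# All names (catalog, share, schema, table, profile path) are placeholders.
# `spark` is the session predefined in Databricks notebooks.

# Databricks-to-Databricks sharing: the share surfaces as a regular catalog.
df = spark.table("fusion_share_catalog.finance.gl_balances")

# Open sharing via a profile file: works from any Spark environment
# with the delta-sharing Spark connector installed.
df_open = (
    spark.read.format("deltaSharing")
    .load("/path/to/config.share#fusion_share.finance.gl_balances")
)

df.limit(10).show()
```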
Also, I don't like to use JDBC and avoid it unless there is no other option. It is not scalable for large Oracle Fusion workloads.
JDBC pulls are:
- slow
- stateful
- prone to timeouts
- difficult to parallelize
- expensive for large history loads
High latency for production-grade pipelines.
You must manage incremental logic manually (ROWIDs, timestamps, etc.).
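To make that last point concrete, below is a rough sketch of the watermark bookkeeping a JDBC pull forces you to maintain yourself. The connection URL, secret scope, table, and column names are all placeholders, not anything from the original thread:

```python
# Sketch of the manual incremental logic a JDBC pull requires.
# All connection details, tables, and columns are hypothetical.
last_watermark = "2024-01-01 00:00:00"  # you must persist this yourself between runs

query = f"""
    SELECT * FROM fusion.gl_journal_lines
    WHERE last_update_date > TIMESTAMP '{last_watermark}'
"""

df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//host:1521/service")
    .option("query", query)
    .option("user", dbutils.secrets.get("scope", "db-user"))
    .option("password", dbutils.secrets.get("scope", "db-password"))
    .option("fetchsize", 10000)
    .load()
)

df.write.format("delta").mode("append").saveAsTable("bronze.gl_journal_lines")

# After a successful load you must compute and persist the new watermark
# (e.g., max(last_update_date)) before the next run; this is exactly the
# bookkeeping Delta Sharing avoids.
```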
Monday
@Raman_Unifeye this one is for you
Monday
Option-1, using Oracle's bulk extraction utility BICC. It can export the extracted data files (typically CSV) directly to an Oracle Cloud storage destination, and then you could use ADF to copy them over to ADLS.
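As a rough illustration of the last hop, here is a minimal PySpark sketch that picks up the BICC CSV extracts once ADF has landed them in ADLS and loads them into a Delta table. The storage account, container, path, and table name are placeholders, and authentication to ADLS is assumed to be configured already:

```python
# Sketch: loading BICC CSV extracts from ADLS Gen2 into Delta.
# All names below are hypothetical; ADLS auth (e.g., a service principal)
# is assumed to be set up on the cluster or workspace.
landing_path = "abfss://bicc-landing@mystorageacct.dfs.core.windows.net/fusion/gl/"

raw = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")  # fine for a sketch; declare an explicit schema in production
    .load(landing_path)
)

raw.write.format("delta").mode("append").saveAsTable("bronze.fusion_gl")
```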
yesterday - last edited yesterday
BICC is suitable for certain use cases, but it has several limitations and is not particularly user-friendly. BICC uses PVOs, which create a significant operational gap for users:
IT/DW teams: It's a multi-hop (BICC → file system → OCI → ADW → Delta Share) process that is brittle, and fixing any breaks is cumbersome. You'll spend more time fixing things than actually getting value from the data.
BI & data analysts: They try to consume data via PVOs, struggle to understand the schema, and deal with missing or too many fields. Creating dashboards requires back and forth and increases time to data.
Business users: They struggle to find actionable insights in dashboards and try to make the best judgment, leading to inaccurate decisions, unresolved problems, and missed opportunities.
You are also spending a lot of money on OCI, ADW, et al.
Check out BI Connector (https://www.biconnector.com/oracle-fusion-data-warehouse-integration), which offers a more direct and cost-efficient approach to bring your Oracle Fusion data into your Lakehouse/Data Warehouse without any of the above shortcomings. Another advantage is that BI Connector lets you bring Oracle Fusion data directly into Power BI (https://www.biconnector.com/powerbi-oracle-fusion-connector/) without any intermediate DW/Lakehouse. You get a 2-in-1 solution.