<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Data Pipeline for Bringing Data from Oracle Fusion to Azure Databricks in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/data-pipeline-for-bringing-data-from-oracle-fusion-to-azure/m-p/139378#M51180</link>
    <description>&lt;P&gt;My preference is option 1.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;P&gt;&lt;STRONG&gt;Delta Sharing is the most efficient and secure integration&lt;/STRONG&gt; between Databricks and external systems.&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;No JDBC bottlenecks (no long-running queries, no network saturation).&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;Data is shared in &lt;STRONG&gt;Delta format&lt;/STRONG&gt;, which is natively optimized for Databricks.&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;Lower operational overhead — Databricks reads the Delta Shares directly.&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;Good for &lt;STRONG&gt;large volumes&lt;/STRONG&gt; (Finance, SCM, and HCM typically generate big datasets).&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;Strong &lt;STRONG&gt;governance and lineage&lt;/STRONG&gt; support.&lt;/P&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;I also prefer not to use JDBC, and avoid it unless there are no other options:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;P&gt;&lt;STRONG&gt;Not scalable for large Oracle Fusion workloads.&lt;/STRONG&gt;&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;JDBC pulls are:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;P&gt;slow&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;stateful&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;prone to timeouts&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;difficult to parallelize&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;expensive for large history loads&lt;/P&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;High latency for production-grade pipelines.&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;You must manage incremental logic manually (ROWIDs, timestamps, etc.).&lt;/P&gt;&lt;/LI&gt;&lt;/UL&gt;</description>
    <pubDate>Mon, 17 Nov 2025 15:17:07 GMT</pubDate>
    <dc:creator>bianca_unifeye</dc:creator>
    <dc:date>2025-11-17T15:17:07Z</dc:date>
    <item>
      <title>Data Pipeline for Bringing Data from Oracle Fusion to Azure Databricks</title>
      <link>https://community.databricks.com/t5/data-engineering/data-pipeline-for-bringing-data-from-oracle-fusion-to-azure/m-p/139228#M51117</link>
      <description>&lt;P&gt;I am trying to bring Oracle Fusion (SCM, HCM, Finance) data into ADLS Gen2. Databricks is used for data transformation, and Power BI is used for report visualization.&lt;/P&gt;&lt;P&gt;I have 3 options.&lt;/P&gt;&lt;P&gt;Option 1 :&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Option1.png" style="width: 962px;"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/21735iE97B0627E2A890A8/image-size/large?v=v2&amp;amp;px=999" role="button" title="Option1.png" alt="Option1.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;Option 2 :&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Option2.png" style="width: 999px;"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/21736i411EA24084882F15/image-size/large?v=v2&amp;amp;px=999" role="button" title="Option2.png" alt="Option2.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;Option 3 :&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Option3.png" style="width: 990px;"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/21737i66F020B69DB94A33/image-size/large?v=v2&amp;amp;px=999" role="button" title="Option3.png" alt="Option3.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;Could someone&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;please help me decide which is the best, most cost-effective enterprise approach and why&lt;/STRONG&gt;, or suggest any other way to achieve this effectively?&lt;/P&gt;&lt;P&gt;Thanks a lot&lt;/P&gt;</description>
      <pubDate>Sun, 16 Nov 2025 16:19:41 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/data-pipeline-for-bringing-data-from-oracle-fusion-to-azure/m-p/139228#M51117</guid>
      <dc:creator>Pratikmsbsvm</dc:creator>
      <dc:date>2025-11-16T16:19:41Z</dc:date>
    </item>
    <item>
      <title>Re: Data Pipeline for Bringing Data from Oracle Fusion to Azure Databricks</title>
      <link>https://community.databricks.com/t5/data-engineering/data-pipeline-for-bringing-data-from-oracle-fusion-to-azure/m-p/139377#M51179</link>
      <description>&lt;P&gt;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/179607"&gt;@Raman_Unifeye&lt;/a&gt;&amp;nbsp;this one is for you&amp;nbsp;&lt;span class="lia-unicode-emoji" title=":beaming_face_with_smiling_eyes:"&gt;😁&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 17 Nov 2025 15:13:11 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/data-pipeline-for-bringing-data-from-oracle-fusion-to-azure/m-p/139377#M51179</guid>
      <dc:creator>bianca_unifeye</dc:creator>
      <dc:date>2025-11-17T15:13:11Z</dc:date>
    </item>
    <item>
      <title>Re: Data Pipeline for Bringing Data from Oracle Fusion to Azure Databricks</title>
      <link>https://community.databricks.com/t5/data-engineering/data-pipeline-for-bringing-data-from-oracle-fusion-to-azure/m-p/139378#M51180</link>
      <description>&lt;P&gt;My preference is option 1.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;P&gt;&lt;STRONG&gt;Delta Sharing is the most efficient and secure integration&lt;/STRONG&gt; between Databricks and external systems.&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;No JDBC bottlenecks (no long-running queries, no network saturation).&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;Data is shared in &lt;STRONG&gt;Delta format&lt;/STRONG&gt;, which is natively optimized for Databricks.&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;Lower operational overhead — Databricks reads the Delta Shares directly.&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;Good for &lt;STRONG&gt;large volumes&lt;/STRONG&gt; (Finance, SCM, and HCM typically generate big datasets).&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;Strong &lt;STRONG&gt;governance and lineage&lt;/STRONG&gt; support.&lt;/P&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;I also prefer not to use JDBC, and avoid it unless there are no other options:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;P&gt;&lt;STRONG&gt;Not scalable for large Oracle Fusion workloads.&lt;/STRONG&gt;&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;JDBC pulls are:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;P&gt;slow&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;stateful&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;prone to timeouts&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;difficult to parallelize&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;expensive for large history loads&lt;/P&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;High latency for production-grade pipelines.&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;You must manage incremental logic manually (ROWIDs, timestamps, etc.).&lt;/P&gt;&lt;/LI&gt;&lt;/UL&gt;</description>
      <pubDate>Mon, 17 Nov 2025 15:17:07 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/data-pipeline-for-bringing-data-from-oracle-fusion-to-azure/m-p/139378#M51180</guid>
      <dc:creator>bianca_unifeye</dc:creator>
      <dc:date>2025-11-17T15:17:07Z</dc:date>
    </item>
    <item>
      <title>Re: Data Pipeline for Bringing Data from Oracle Fusion to Azure Databricks</title>
      <link>https://community.databricks.com/t5/data-engineering/data-pipeline-for-bringing-data-from-oracle-fusion-to-azure/m-p/139380#M51181</link>
      <description>&lt;P&gt;Option 1, using Oracle's bulk extraction utility BICC.&amp;nbsp;It can export the extracted data files (typically CSV) directly to an Oracle&amp;nbsp;&lt;SPAN&gt;cloud storage destination, and then you could use ADF to copy them over to ADLS.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 17 Nov 2025 15:25:55 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/data-pipeline-for-bringing-data-from-oracle-fusion-to-azure/m-p/139380#M51181</guid>
      <dc:creator>Raman_Unifeye</dc:creator>
      <dc:date>2025-11-17T15:25:55Z</dc:date>
    </item>
    <item>
      <title>Re: Data Pipeline for Bringing Data from Oracle Fusion to Azure Databricks</title>
      <link>https://community.databricks.com/t5/data-engineering/data-pipeline-for-bringing-data-from-oracle-fusion-to-azure/m-p/139847#M51309</link>
      <description>&lt;P&gt;BICC is suitable for certain use cases, but it has several limitations and is not particularly user-friendly.&amp;nbsp; BICC uses PVOs,&amp;nbsp;&lt;SPAN&gt;which cause a huge operational gap among users:&lt;/SPAN&gt;&lt;SPAN&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;IT/DW teams: It's a multi-hop (BICC - file system - OCI - ADW - Delta Share) process that is brittle, and fixing any breaks is cumbersome.&amp;nbsp; You'll spend more time fixing things than actually getting value from the data.&lt;/SPAN&gt;&lt;SPAN&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;BI &amp;amp; data analysts:&amp;nbsp; They try to consume data via PVOs, struggle to understand the schema, and deal with missing fields or too many fields. Creating dashboards requires back-and-forth and increases time to data.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;Business users:&amp;nbsp; They struggle to find actionable insights in dashboards and try to make the best judgment, leading to inaccurate decisions, unresolved problems, and missed opportunities.&lt;BR /&gt;You are also spending a lot of money on OCI, ADW, et al.&amp;nbsp;&lt;BR /&gt;Check out BI Connector (&lt;A href="http://www.biconnector.com" target="_blank" rel="noopener"&gt;https://www.biconnector.com/oracle-fusion-data-warehouse-integration&lt;/A&gt;), which offers a more direct and cost-efficient approach to bringing your Oracle Fusion data into your Lakehouse/Data Warehouse without any of the above shortcomings.&amp;nbsp;Another advantage is that BI Connector allows you to bring Oracle Fusion data directly into Power BI (&lt;A href="https://www.biconnector.com/powerbi-oracle-fusion-connector/" target="_blank"&gt;https://www.biconnector.com/powerbi-oracle-fusion-connector/&lt;/A&gt;) without any intermediate DW/Lakehouse.&amp;nbsp; You get a 2-in-1 solution.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 20 Nov 2025 21:01:41 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/data-pipeline-for-bringing-data-from-oracle-fusion-to-azure/m-p/139847#M51309</guid>
      <dc:creator>Shankar-Raj</dc:creator>
      <dc:date>2025-11-20T21:01:41Z</dc:date>
    </item>
  </channel>
</rss>

