<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Hi @arushigulati, Lakebridge (the Databricks Labs project... in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/lakebridge-transpile-to-translate-from-oracle-to-databricks-sql/m-p/150304#M53345</link>
    <description>Hi @arushigulati, Lakebridge (the Databricks Labs project... in Data Engineering</description>
    <pubDate>Mon, 09 Mar 2026 01:10:39 GMT</pubDate>
    <dc:creator>SteveOstrowski</dc:creator>
    <dc:date>2026-03-09T01:10:39Z</dc:date>
    <item>
      <title>Lakebridge transpile to translate from oracle to databricks sql</title>
      <link>https://community.databricks.com/t5/data-engineering/lakebridge-transpile-to-translate-from-oracle-to-databricks-sql/m-p/145662#M52557</link>
      <description>&lt;P&gt;Hi Community,&lt;/P&gt;&lt;P&gt;I am currently working on a PoC to migrate data from &lt;STRONG&gt;Oracle to Databricks&lt;/STRONG&gt;. As part of this, we are attempting to automate the DDL conversion process.&lt;/P&gt;&lt;P&gt;We are leveraging &lt;STRONG&gt;Databricks Labs Lakebridge&lt;/STRONG&gt; for transpilation, but it is failing to convert a standard Oracle DDL. It appears the transpiler is unable to parse the PRIMARY KEY constraint.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;CLI Command Used:&lt;/STRONG&gt;&lt;/P&gt;&lt;PRE&gt;databricks labs lakebridge transpile \
--source-dialect oracle \
--input-source $INPUT_PATH \
--output-folder $OUTPUT_PATH \
--error-file-path $ERROR_PATH&lt;/PRE&gt;&lt;P&gt;&lt;STRONG&gt;Source Oracle DDL:&lt;/STRONG&gt;&lt;/P&gt;&lt;PRE&gt;CREATE TABLE CUSTOMERS_V1 (
    CUSTOMER_ID NUMBER NOT NULL,
    CUSTOMER_NAME VARCHAR2(100) NOT NULL,
    EMAIL VARCHAR2(100),
    CITY VARCHAR2(50),
    PRIMARY KEY (CUSTOMER_ID)
);&lt;/PRE&gt;&lt;P&gt;Error:&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="arushigulati_0-1769670207329.png" style="width: 400px;"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/23388iB68A4CFB5AB11319/image-size/medium?v=v2&amp;amp;px=400" role="button" title="arushigulati_0-1769670207329.png" alt="arushigulati_0-1769670207329.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="arushigulati_1-1769670261737.png" style="width: 400px;"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/23389i20256138ECC8C6C4/image-size/medium?v=v2&amp;amp;px=400" role="button" title="arushigulati_1-1769670261737.png" alt="arushigulati_1-1769670261737.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;It is unable to parse the simple PRIMARY KEY keyword.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Observations:&lt;/STRONG&gt; If I remove the PRIMARY KEY line, the transpilation proceeds, but this isn't ideal for a large-scale migration where we need to maintain schema integrity (even if Primary Keys are only informational in Unity Catalog).&lt;/P&gt;&lt;P&gt;Has anyone else encountered this limitation with the Lakebridge CLI? Is there a specific configuration or a newer dialect flag I should be using to ensure constraints are handled correctly?&lt;/P&gt;&lt;P&gt;Thanks in advance for your help!&lt;/P&gt;</description>
      <pubDate>Thu, 29 Jan 2026 07:11:03 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/lakebridge-transpile-to-translate-from-oracle-to-databricks-sql/m-p/145662#M52557</guid>
      <dc:creator>arushigulati</dc:creator>
      <dc:date>2026-01-29T07:11:03Z</dc:date>
    </item>
    <item>
      <title>Re: Lakebridge transpile to translate from oracle to databricks sql</title>
      <link>https://community.databricks.com/t5/data-engineering/lakebridge-transpile-to-translate-from-oracle-to-databricks-sql/m-p/145984#M52596</link>
      <description>&lt;P&gt;Hi arushigulati,&lt;/P&gt;&lt;P&gt;I ran into the same issues when attempting a similar PoC between Oracle and Databricks SQL / Python. Out of the box it covered a good ~80% of what I fed through, but it would not always handle the specific SQL flavor or the way the original script author formatted their code.&lt;BR /&gt;&lt;BR /&gt;To get around this I ended up writing a custom configuration for Bladebridge: I inherit the original Oracle base config that the out-of-the-box product uses (base_oracle2databricks_sql.json) and then layer my customizations on top.&lt;BR /&gt;&lt;BR /&gt;For your specific issue, you could follow the section on &lt;A href="https://databrickslabs.github.io/lakebridge/docs/transpile/pluggable_transpilers/bladebridge/bladebridge_configuration/#line_subst" target="_self"&gt;line_subst&lt;/A&gt; to customize how PRIMARY KEY is handled.&lt;/P&gt;&lt;P&gt;Once you are done, re-run the install-transpile command and reference the path of your new config (example from the linked documentation):&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;databricks labs lakebridge install-transpile
Do you want to override the existing installation? (default: no): yes
Specify the config file to override the default[Bladebridge] config - press &amp;lt;enter&amp;gt; for none (default: &amp;lt;none&amp;gt;):
my_custom_config.json&lt;/LI-CODE&gt;</description>
      <pubDate>Fri, 30 Jan 2026 09:43:16 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/lakebridge-transpile-to-translate-from-oracle-to-databricks-sql/m-p/145984#M52596</guid>
      <dc:creator>Marc_Gibson96</dc:creator>
      <dc:date>2026-01-30T09:43:16Z</dc:date>
    </item>
    <item>
      <title>Hi @arushigulati, Lakebridge (the Databricks Labs project...</title>
      <link>https://community.databricks.com/t5/data-engineering/lakebridge-transpile-to-translate-from-oracle-to-databricks-sql/m-p/150304#M53345</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/213090"&gt;@arushigulati&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;Lakebridge (the Databricks Labs project formerly known as Remorph) does support Oracle as a source dialect for transpilation, but the DDL handling, particularly around constraints like PRIMARY KEY, has some gaps depending on the version you are running. Here is a walkthrough of how to approach this.&lt;/P&gt;
&lt;P&gt;UNDERSTANDING THE PRIMARY KEY PARSING ISSUE&lt;/P&gt;
&lt;P&gt;The Oracle dialect parser in Lakebridge may not fully handle PRIMARY KEY constraint definitions in all cases, whether they are declared inline on a column or at table level (as in your DDL). This is a known area where coverage is still being expanded. If your Oracle DDL includes an inline PRIMARY KEY like this:&lt;/P&gt;
&lt;PRE&gt;CREATE TABLE CUSTOMERS_V1 (
  CUSTOMER_ID NUMBER(10) PRIMARY KEY,
  FIRST_NAME  VARCHAR2(50),
  LAST_NAME   VARCHAR2(50)
);&lt;/PRE&gt;
&lt;P&gt;...and the transpiler fails on the PRIMARY KEY clause, there are a few approaches you can use.&lt;/P&gt;
&lt;P&gt;OPTION 1: PRE-PROCESS YOUR DDL TO SEPARATE CONSTRAINTS&lt;/P&gt;
&lt;P&gt;You can strip the PRIMARY KEY constraints out of the CREATE TABLE statements and handle them separately. Remove the constraint clause (keeping a NOT NULL on the key column) before transpiling:&lt;/P&gt;
&lt;PRE&gt;CREATE TABLE CUSTOMERS_V1 (
  CUSTOMER_ID NUMBER(10) NOT NULL,
  FIRST_NAME  VARCHAR2(50),
  LAST_NAME   VARCHAR2(50)
);&lt;/PRE&gt;
&lt;P&gt;Then, after transpilation, add the PRIMARY KEY constraint as a separate ALTER TABLE statement in Databricks SQL:&lt;/P&gt;
&lt;PRE&gt;ALTER TABLE CUSTOMERS_V1 ADD CONSTRAINT pk_customers PRIMARY KEY (CUSTOMER_ID);&lt;/PRE&gt;
&lt;P&gt;Databricks supports informational PRIMARY KEY and FOREIGN KEY constraints on Unity Catalog managed and external tables (Databricks Runtime 11.3 LTS and above). These constraints are not enforced, but they are useful for documentation, query optimization hints, and compatibility with BI tools.&lt;/P&gt;
&lt;P&gt;Documentation: &lt;A href="https://docs.databricks.com/en/sql/language-manual/sql-ref-syntax-ddl-create-table-constraint.html" target="_blank"&gt;https://docs.databricks.com/en/sql/language-manual/sql-ref-syntax-ddl-create-table-constraint.html&lt;/A&gt;&lt;/P&gt;
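&lt;P&gt;This pre-processing step can be automated. Below is a minimal, heuristic sketch in plain Python (regex-based rather than a real SQL parser; it assumes PRIMARY KEY does not appear inside comments or string literals):&lt;/P&gt;

```python
import re

def strip_pk_for_transpile(ddl: str) -> str:
    """Remove PRIMARY KEY clauses from Oracle DDL so the transpiler accepts it."""
    # Drop table-level "[CONSTRAINT name] PRIMARY KEY (cols)" entries together
    # with the preceding comma, so the column list stays syntactically valid.
    ddl = re.sub(r',\s*(?:CONSTRAINT\s+\w+\s+)?PRIMARY\s+KEY\s*\([^)]*\)',
                 '', ddl, flags=re.IGNORECASE)
    # Downgrade any remaining inline "PRIMARY KEY" on a column to NOT NULL,
    # preserving the non-null guarantee the key implied.
    return re.sub(r'\bPRIMARY\s+KEY\b', 'NOT NULL', ddl, flags=re.IGNORECASE)

print(strip_pk_for_transpile("""CREATE TABLE CUSTOMERS_V1 (
  CUSTOMER_ID NUMBER(10) PRIMARY KEY,
  FIRST_NAME  VARCHAR2(50)
);"""))
```

&lt;P&gt;Capture the original constraint definitions first, so they can be re-applied with ALTER TABLE after transpilation.&lt;/P&gt;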
&lt;P&gt;OPTION 2: POST-PROCESS THE TRANSPILED OUTPUT&lt;/P&gt;
&lt;P&gt;Run Lakebridge transpile on your Oracle SQL files, then write a small script to add back any constraints that were dropped during transpilation. For example, you could parse the original Oracle DDL for PRIMARY KEY definitions and generate corresponding ALTER TABLE statements in Databricks SQL syntax.&lt;/P&gt;
&lt;P&gt;A simple Python approach:&lt;/P&gt;
&lt;PRE&gt;import re

# Heuristic only: captures the table name and the first column of each
# CREATE TABLE whose body mentions an inline PRIMARY KEY. It will misfire
# on composite keys and on a PK-less table that precedes one with a PK;
# for anything beyond a quick pass, use a real SQL parser.
with open("original_oracle.sql") as f:
    oracle_ddl = f.read()

pattern = r'CREATE\s+TABLE\s+(\w+)\s*\(.*?(\w+)\s+\w+.*?PRIMARY\s+KEY'
for match in re.finditer(pattern, oracle_ddl, re.DOTALL | re.IGNORECASE):
    table_name = match.group(1)
    col_name = match.group(2)
    print(f"ALTER TABLE {table_name} ADD CONSTRAINT pk_{table_name.lower()} PRIMARY KEY ({col_name});")&lt;/PRE&gt;
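&lt;P&gt;The regex above targets the inline form; the DDL in your original post declares the key at table level (PRIMARY KEY (CUSTOMER_ID)), which needs a slightly different pattern. Another heuristic sketch, with the same caveat that a regex is no substitute for real parsing:&lt;/P&gt;

```python
import re

# Sample DDL in the table-level form from the original question.
ddl = """CREATE TABLE CUSTOMERS_V1 (
    CUSTOMER_ID NUMBER NOT NULL,
    CUSTOMER_NAME VARCHAR2(100) NOT NULL,
    PRIMARY KEY (CUSTOMER_ID)
);"""

# Heuristic: assumes the first "PRIMARY KEY (...)" after the table name
# belongs to that table; a composite key is captured as a column list.
pattern = r'CREATE\s+TABLE\s+(\w+)\s*\(.*?PRIMARY\s+KEY\s*\(([^)]+)\)'
for m in re.finditer(pattern, ddl, re.DOTALL | re.IGNORECASE):
    table, cols = m.group(1), m.group(2).strip()
    print(f"ALTER TABLE {table} ADD CONSTRAINT pk_{table.lower()} PRIMARY KEY ({cols});")
# → ALTER TABLE CUSTOMERS_V1 ADD CONSTRAINT pk_customers_v1 PRIMARY KEY (CUSTOMER_ID);
```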
&lt;P&gt;OPTION 3: FILE A GITHUB ISSUE FOR ORACLE PRIMARY KEY SUPPORT&lt;/P&gt;
&lt;P&gt;Since Lakebridge is an open-source Databricks Labs project, you can file a bug or feature request on the GitHub repository:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/databrickslabs/remorph/issues" target="_blank"&gt;https://github.com/databrickslabs/remorph/issues&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;There are existing issues tracking Oracle dialect improvements (for example, issue #2170 for Oracle outer join syntax, and issue #1801 for Oracle DDL data type conversions). Filing an issue specifically for inline PRIMARY KEY constraint parsing in Oracle DDL will help the maintainers prioritize this.&lt;/P&gt;
&lt;P&gt;OPTION 4: USE THE DATABRICKS CLI TRANSPILE COMMAND WITH THE LATEST VERSION&lt;/P&gt;
&lt;P&gt;Make sure you are running the latest version of Lakebridge, as Oracle dialect support has been actively improved. Lakebridge installs as a Databricks CLI labs project, so install or update it through the CLI:&lt;/P&gt;
&lt;PRE&gt;databricks labs install lakebridge   # first-time install
databricks labs upgrade lakebridge   # update an existing install&lt;/PRE&gt;
&lt;P&gt;Then run the transpile with:&lt;/P&gt;
&lt;PRE&gt;databricks labs lakebridge transpile --source-dialect oracle --input-source /path/to/oracle/sql --output-folder /path/to/output --error-file-path /path/to/errors&lt;/PRE&gt;
&lt;P&gt;Check the output directory for any error reports or partial transpilation results. The tool generates a summary that shows which files transpiled successfully and which had issues.&lt;/P&gt;
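&lt;P&gt;When a run leaves gaps, it can help to diff the input and output folders programmatically. A small standard-library sketch (the flat directory layout and .sql naming are assumptions; adjust the glob patterns to how your transpile run actually writes output):&lt;/P&gt;

```python
from pathlib import Path

def report_untranspiled(input_dir: str, output_dir: str) -> list:
    """Return stems of input .sql files that have no matching output file."""
    inputs = {p.stem for p in Path(input_dir).glob("*.sql")}
    outputs = {p.stem for p in Path(output_dir).rglob("*.sql")}
    return sorted(inputs - outputs)

# Hypothetical usage:
# for name in report_untranspiled("/path/to/oracle/sql", "/path/to/output"):
#     print(f"not transpiled: {name}.sql")
```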
&lt;P&gt;KEY POINTS ABOUT DATABRICKS CONSTRAINTS&lt;/P&gt;
&lt;P&gt;Once your DDL is transpiled, keep in mind:&lt;/P&gt;
&lt;P&gt;1. PRIMARY KEY and FOREIGN KEY constraints in Databricks are informational (not enforced). They are supported on Unity Catalog tables.&lt;/P&gt;
&lt;P&gt;2. You can add constraints during CREATE TABLE or via ALTER TABLE:&lt;/P&gt;
&lt;PRE&gt;CREATE TABLE catalog.schema.customers_v1 (
  customer_id INT NOT NULL,
  first_name STRING,
  last_name STRING,
  CONSTRAINT pk_customers PRIMARY KEY (customer_id)
);&lt;/PRE&gt;
&lt;P&gt;3. These constraints are valuable for downstream BI tools, query optimizers, and documentation even though Databricks does not enforce them at write time.&lt;/P&gt;
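&lt;P&gt;Because the constraint is not enforced, duplicate keys can still land in the table, so a uniqueness check belongs somewhere in your pipeline. In SQL that is a GROUP BY with a HAVING clause on the count; the same idea over already-fetched rows, as a trivial Python illustration:&lt;/P&gt;

```python
from collections import Counter

def duplicate_keys(rows, key):
    """Return key values that appear more than once (informational-PK violations)."""
    counts = Counter(r[key] for r in rows)
    return [k for k, n in counts.items() if n > 1]

rows = [{"customer_id": 1}, {"customer_id": 2}, {"customer_id": 1}]
print(duplicate_keys(rows, "customer_id"))  # → [1]
```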
&lt;P&gt;ADDITIONAL RESOURCES&lt;/P&gt;
&lt;P&gt;Lakebridge documentation: &lt;A href="https://databrickslabs.github.io/lakebridge/" target="_blank"&gt;https://databrickslabs.github.io/lakebridge/&lt;/A&gt;&lt;BR /&gt;
Lakebridge GitHub repository: &lt;A href="https://github.com/databrickslabs/remorph" target="_blank"&gt;https://github.com/databrickslabs/remorph&lt;/A&gt;&lt;BR /&gt;
Databricks SQL constraints reference: &lt;A href="https://docs.databricks.com/en/sql/language-manual/sql-ref-syntax-ddl-create-table-constraint.html" target="_blank"&gt;https://docs.databricks.com/en/sql/language-manual/sql-ref-syntax-ddl-create-table-constraint.html&lt;/A&gt;&lt;BR /&gt;
Migration guide: &lt;A href="https://docs.databricks.com/en/migration/index.html" target="_blank"&gt;https://docs.databricks.com/en/migration/index.html&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;I hope this helps you move forward with your Oracle to Databricks migration. If you can share the specific error message you are seeing, I can provide more targeted guidance.&lt;/P&gt;
&lt;P&gt;* This reply was drafted with an agent system I built, which researches responses from the documentation I have available and previous memory. I personally review each draft for obvious issues, monitor the system for reliability, and update the reply if I detect any drift, but there is still a small chance something is inaccurate, especially if you are experimenting with brand-new features.&lt;/P&gt;
      <pubDate>Mon, 09 Mar 2026 01:10:39 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/lakebridge-transpile-to-translate-from-oracle-to-databricks-sql/m-p/150304#M53345</guid>
      <dc:creator>SteveOstrowski</dc:creator>
      <dc:date>2026-03-09T01:10:39Z</dc:date>
    </item>
  </channel>
</rss>

