The correct solution is not SQL based.
Delta tables are defined by the contents of the _delta_log directory, not by CREATE TABLE statements, which is why SHOW CREATE TABLE cannot reconstruct partitions, properties, or constraints.
The only reliable migration method in a push-only network setup is to replicate the raw Delta folders, _delta_log included.
RECOMMENDED MIGRATION PATTERN (PUSH ONLY)
On-prem: copy full Delta folders to ADLS Gen2
Example logic in Spark:
List tables in the database
For each table, read its storage location using DESCRIBE DETAIL
Recursively copy the entire table folder to ADLS, including the _delta_log folder and the parquet files
This copies the schema, partition layout, table properties, constraints, and the full version history, because all of it lives in _delta_log.
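The copy rule above can be sketched with a local stand-in. This is not the production path (on a real cluster you would copy with dbutils.fs.cp(src, dst, recurse=True) or azcopy against the abfss:// location); it only demonstrates the invariant that matters: the whole folder moves as one unit, _delta_log included. The table name, file names, and log contents below are made up for illustration.

```python
import shutil
import tempfile
from pathlib import Path

# Build a stand-in for an on-prem Delta table folder:
# data files plus the _delta_log directory that defines the table.
src_root = Path(tempfile.mkdtemp()) / "sales"  # hypothetical table folder
(src_root / "_delta_log").mkdir(parents=True)
(src_root / "_delta_log" / "00000000000000000000.json").write_text('{"commitInfo": {}}')
(src_root / "part-00000.snappy.parquet").write_bytes(b"parquet bytes")

# The migration rule: copy the WHOLE folder recursively, never just the data files.
dst_root = Path(tempfile.mkdtemp()) / "sales"
shutil.copytree(src_root, dst_root)

# The copied folder defines the same table: same log, same data files.
assert (dst_root / "_delta_log" / "00000000000000000000.json").exists()
assert (dst_root / "part-00000.snappy.parquet").read_bytes() == b"parquet bytes"
```

If only the parquet files were copied, the destination would hold loose data with no table definition, which is exactly the failure the SQL-based approaches run into.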
Azure Databricks: register tables
Create the schema if missing, then create tables using:
CREATE TABLE schema.table
USING DELTA
LOCATION 'abfss path to the copied folder'
Do NOT use CTAS, INSERT, or recreate DDL.
The table is now identical to the on-prem table.
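To confirm the registration really picked up the full on-prem state, the copied log can be inspected directly from Azure Databricks (schema.table stands for your registered name):

```sql
-- Both commands read from the copied _delta_log, so they should
-- show the on-prem partitioning, properties, and commit history:
DESCRIBE DETAIL schema.table;   -- location, partitionColumns, properties
DESCRIBE HISTORY schema.table;  -- full on-prem version history
```

If DESCRIBE HISTORY shows only a single fresh commit, the table was recreated rather than registered over the copied folder.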
OPTIONAL INCREMENTAL REPLICATION
Use Delta Change Data Feed on-prem to push only changes.
Read the change feed starting from the last migrated version and write it to an ADLS change folder.
On Azure Databricks, MERGE the change data into the target Delta table using the primary key and change type.
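A sketch of the incremental loop, assuming a hypothetical sales table with primary key id and an amount column; the starting version (42) and the abfss change-folder path are placeholders for your last migrated version and staging location:

```sql
-- On-prem: enable the feed once, then export changes since the last
-- migrated version into the ADLS change folder.
ALTER TABLE sales SET TBLPROPERTIES (delta.enableChangeDataFeed = true);
CREATE OR REPLACE TABLE delta.`abfss://changes@account.dfs.core.windows.net/sales`
AS SELECT * FROM table_changes('sales', 42);

-- Azure Databricks: MERGE the pushed changes into the target.
MERGE INTO sales AS t
USING (
  SELECT * FROM delta.`abfss://changes@account.dfs.core.windows.net/sales`
  WHERE _change_type != 'update_preimage'  -- CDF emits pre- and post-images
) AS c
ON t.id = c.id
WHEN MATCHED AND c._change_type = 'delete' THEN DELETE
WHEN MATCHED AND c._change_type = 'update_postimage'
  THEN UPDATE SET t.amount = c.amount
WHEN NOT MATCHED AND c._change_type = 'insert'
  THEN INSERT (id, amount) VALUES (c.id, c.amount);
```

If a key can change more than once within the window, deduplicate the change set to the row with the highest _commit_version per key before merging, then record the last processed version for the next run.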
WHY SQL-BASED MIGRATION FAILS
SHOW CREATE TABLE: returns only partial DDL. Partitioning details, table properties, and constraints live in _delta_log, not the catalog, so the generated statement cannot rebuild them.
CTAS or INSERT: rewrites the data into a brand-new table, discarding version history, time travel, and table properties.
Copying the _delta_log directory together with the parquet files is the only approach that preserves all of this, which is why the migration must be file based.