When creating a data share in my Databricks workspace, in the 'Add data assets' step, one of my Unity Catalog schemas shows 'No data' when expanded. If I expand the same schema in the Unity Catalog explorer, it shows lots of tables and all tables conta...
Hi, I have a DLT pipeline that applies changes from a source table (cdctest_cdc_enriched) to a target table (cdctest) with the following code:

    dlt.apply_changes(
        target = "cdctest",
        source = "cdctest_cdc_enriched",
        keys = ["ID"],
        sequence_by...
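For reference, a minimal sketch of what a complete apply_changes definition along these lines can look like; the sequencing and operation columns (tran_end_time, operation) and the create_streaming_table call are assumptions for illustration, not the poster's actual code:

    import dlt
    from pyspark.sql import functions as F

    # Sketch only: "tran_end_time" and "operation" are assumed column names.
    dlt.create_streaming_table("cdctest")

    dlt.apply_changes(
        target = "cdctest",                    # streaming table that receives the merged rows
        source = "cdctest_cdc_enriched",       # CDC feed defined earlier in the pipeline
        keys = ["ID"],                         # primary key used to match source and target rows
        sequence_by = F.col("tran_end_time"),  # ordering column so late events don't overwrite newer ones
        apply_as_deletes = F.expr("operation = 'DELETE'"),  # rows flagged as deletes remove the target row
        except_column_list = ["operation"]     # keep CDC metadata out of the target table
    )

sequence_by is what lets apply_changes decide which change wins when several source rows share the same key, so the ordering column needs to increase with every change to a given key.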
Hi, Would anyone happen to know whether it's possible to cache a dataframe in memory that is the result of a query on a federated table? I have a notebook that queries a federated table, does some transformations on the dataframe and then writes this data...
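A minimal sketch of that pattern, assuming a Databricks notebook where spark is already defined and using placeholder catalog/table/column names:

    from pyspark.sql import functions as F

    # Placeholder names: federated_catalog.sales.orders and main.curated.orders_snapshot.
    df = spark.table("federated_catalog.sales.orders")   # federated (Lakehouse Federation) table

    transformed = (
        df.filter(F.col("order_date") >= "2024-01-01")
          .withColumn("amount_eur", F.col("amount") * 0.92)
    )

    transformed.cache()    # mark for in-memory (with disk spill) storage
    transformed.count()    # first action materializes the cache; later actions reuse it

    transformed.write.mode("overwrite").saveAsTable("main.curated.orders_snapshot")

The remote system is still queried once to populate the cache, but subsequent actions in the notebook run against the cached copy instead of pushing another query to the federated source.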
Hi, absolute Databricks noob here, but I'm trying to set up a DLT pipeline that processes CDC records from an external SQL Server instance to create a mirrored table in my Databricks delta lakehouse. For this, I need to do some initial one-time backfi...
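One way such a one-time backfill is often done is to read the current state of the source table over JDBC and land it once in the lakehouse before the CDC pipeline takes over. A sketch under assumptions (server name, secret scope/keys and table names are placeholders, and it presumes the SQL Server JDBC driver bundled with the Databricks runtime):

    # Placeholders: server, database, secret scope/keys and table names.
    jdbc_url = "jdbc:sqlserver://myserver.example.com:1433;database=appdb"

    snapshot = (
        spark.read.format("jdbc")
            .option("url", jdbc_url)
            .option("dbtable", "dbo.cdctest")
            .option("user", dbutils.secrets.get("my-scope", "sql-user"))
            .option("password", dbutils.secrets.get("my-scope", "sql-password"))
            .load()
    )

    # Write the snapshot once as the starting version of the mirrored table;
    # from this point on, the DLT pipeline only has to apply CDC changes.
    snapshot.write.format("delta").mode("overwrite").saveAsTable("main.bronze.cdctest_backfill")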
Hi, sorry for the late reply, I went on holiday :-). The reason no tables show up when expanding is that the schema only contains materialized views and streaming tables, neither of which is supported in Delta Sharing (yet). Hopefully this answer can help...
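If you want to verify this for a schema, a quick check along these lines works (catalog and schema names are placeholders); streaming tables and materialized views show up with their own table_type values here, distinct from ordinary managed or external tables:

    spark.sql("""
        select table_name, table_type
        from my_catalog.information_schema.tables
        where table_schema = 'my_schema'
    """).show(truncate=False)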
Hi @Retired_mod, the mail requesting pricing info on support plans has been sent. With regard to the preview release of Delta Lake 3.0's Delta Universal Format (UniForm), I have read the release notes but fail to see how this helps in any way towards mirroring data from...
Hi @Retired_mod, Thanks for that, I actually would love some assistance. When I was at the Databricks Intelligence Day in early April, I asked the guy giving the workshop about mirroring some tables from a SQL Server application database to the delta...
Hi Kaniz, The DLT pipeline runs without errors, and all changes of type 1 and 2 (inserts and deletes) are actually applied correctly. But after checking the target table, it turns out the updates were not reflected in the target. I have since created...
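One thing worth checking (a hedged sketch, not the poster's actual pipeline): SQL Server CDC emits both a before image (__$operation = 3) and an after image (__$operation = 4) for every update, and if both reach apply_changes with the same sequencing value, the stale before-image row can compete with the updated one. Feeding only operations 1, 2 and 4 into apply_changes is one common way to make sure updates land; table and column names below are assumptions:

    import dlt
    from pyspark.sql import functions as F

    # Sketch only: filter out the update before-images (__$operation = 3)
    # so apply_changes only ever sees deletes, inserts and update after-images.
    @dlt.view
    def cdctest_cdc_enriched_filtered():
        return (
            dlt.read_stream("cdctest_cdc_enriched")
               .where(F.col("__$operation").isin(1, 2, 4))
        )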
@daniel_sahal , this is the code snippet:

    lsn_incr_batch = spark.sql(f"""
        select
            start_lsn,
            tran_begin_time,
            tran_end_time,
            tran_id,
            tran_begin_lsn,
            cast('{current_run_ts}' as timestamp) as appended
        from externaldb.cdc.lsn_time_mapping
        where tran_end_time > '...