Hi @john77,
When you have a SQL task creating an ST/MV, it works fine for a few independent tables. You still get incremental refresh, retries, and an auto-created (implicit) pipeline per object. What you miss is that it is harder to mix SQL + Python...
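For context, a SQL task of this kind is just a cell that defines the streaming table declaratively. A minimal sketch (the catalog, schema, and landing path below are hypothetical placeholders, not from the original thread):

```sql
-- Hypothetical names/paths; each such statement gets its own implicit pipeline
CREATE OR REFRESH STREAMING TABLE bronze_orders AS
SELECT *
FROM STREAM read_files(
  '/Volumes/main/raw/orders/',   -- landing location (placeholder)
  format => 'json'
);
```

Because each object is created independently, dependencies between such tables are not resolved for you the way they are inside a single Lakeflow pipeline.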
Hello @john77,
Lakeflow ETL Pipelines give you a managed, declarative engine that understands your tables/flows and runs them with automatic dependency resolution, retries, and incremental semantics. Jobs are the general-purpose orchestrator—they ca...
Hello @saurabh18cs!
You don’t need to choose queues; simply use the “File events” path. When enabled, Databricks uses one managed queue per external location (Unity Catalog), and all your streams that read from that location share it. This avoi...
Hello @jakesippy,
Instead of using any REST APIs, just run the cell below via spark.sql() programmatically, and you will get all the info. The query relies on the pipeline event logs, so it will always give you accurate information.
%sql
DR...
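As a general sketch of the same idea (this is not the truncated query above; the pipeline ID is a placeholder), Databricks exposes pipeline event logs through the `event_log()` table-valued function, which you can query like any table:

```sql
-- '<pipeline-id>' is a placeholder for your pipeline's UUID
SELECT timestamp, event_type, message
FROM event_log('<pipeline-id>')
ORDER BY timestamp DESC;
```

Filtering on `event_type` (for example, update or flow progress events) narrows this down to the run information you need.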
Hello @Travis84,
Below are the answers to your questions:
Where to put the hint? On either one of the two relations that participate in the range join for that specific join block. In simple two-table queries, it doesn’t matter. In multi-join querie...
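To make the placement concrete, here is a minimal sketch of a range-join hint on one of the two relations in the join (the table and column names are hypothetical):

```sql
-- Hint placed on the joined relations; 10 is the bin size (tune per data)
SELECT /*+ RANGE_JOIN(points, 10) */ *
FROM points
JOIN ranges
  ON points.p >= ranges.start
 AND points.p <  ranges.end;
```

The bin size (here 10) should roughly match the typical width of your ranges; in multi-join queries, attach the hint to a relation inside the specific join block you want to optimize.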