Hi, I have several Power BI reports based on Delta Lake tables that are refreshed every 4 hours. The ELT process in Databricks is much cheaper than the refresh of these Power BI reports. My questions are: is the approach described below correct, and is there a better way to work with Power BI on Databricks?
My scenario:
I have several star schemas in the Gold layer of the Delta lake in Databricks. I use an all-purpose cluster for the Power BI connection.
The storage mode in Power BI is set to Import for all tables, so I import all data into the PBIX file.
In a Databricks job I have another cluster that runs the ELT and then runs a notebook with an API call to trigger the Power BI refresh. If you're interested, here is a tutorial.
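For context, the notebook call is roughly like this (the tenant, workspace and dataset IDs and the secret scope are placeholders, and I'm assuming a service principal that has been granted access to the Power BI workspace):

```python
# Minimal sketch: trigger a Power BI dataset refresh from a Databricks notebook.
# All IDs below are placeholders; the client secret comes from an assumed secret scope.
import requests

TENANT_ID = "<tenant-id>"
CLIENT_ID = "<service-principal-client-id>"
CLIENT_SECRET = dbutils.secrets.get(scope="pbi", key="client-secret")
GROUP_ID = "<power-bi-workspace-id>"
DATASET_ID = "<power-bi-dataset-id>"

# Acquire an Azure AD token for the Power BI REST API
token_resp = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": "https://analysis.windows.net/powerbi/api/.default",
    },
)
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# Queue the dataset refresh (a 202 response means it was accepted)
refresh_resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}/datasets/{DATASET_ID}/refreshes",
    headers={"Authorization": f"Bearer {access_token}"},
    json={"notifyOption": "NoNotification"},
)
refresh_resp.raise_for_status()
print(f"Refresh triggered, status code: {refresh_resp.status_code}")
```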
In the Azure portal I can clearly see the costs for the ELT and for the PBI refresh, based on cluster tags.
And the costs for the PBI refresh are much higher than for the ELT.
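(For the cost split I just put custom tags on the clusters; something like this excerpt of the job cluster spec, where the tag names and values are only examples. The tags propagate to the underlying Azure VMs, which is what makes them usable in Azure cost analysis.)

```python
# Excerpt of a job cluster definition with a custom tag for cost tracking.
# The all-purpose cluster that Power BI connects to gets a different tag,
# e.g. "workload": "pbi-refresh", set on the cluster itself.
new_cluster = {
    "spark_version": "13.3.x-scala2.12",
    "node_type_id": "Standard_DS3_v2",
    "num_workers": 2,
    "custom_tags": {"workload": "elt"},
}
```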
I tried to use a Databricks SQL warehouse for the PBI refresh, since the star schema queries are plain SQL, but the costs were even higher, despite the documentation's claim to "Run all SQL and BI applications at scale with up to 12x better price-performance".
I'm now thinking about incremental refresh in Power BI, but this approach has a disadvantage: you can no longer download the PBIX file after this change.
Another way (not tested yet) could be to load the data into Azure SQL and connect Power BI to Azure SQL instead of to the Delta tables. There would be additional costs for loading the data from the Delta tables into Azure SQL after every ELT run.
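If I go that route, the load itself would be something like this, using Spark's built-in JDBC writer as a full overwrite per run (server, database, table and secret names are placeholders):

```python
# Rough sketch: push a Gold Delta table into Azure SQL after each ELT run.
jdbc_url = (
    "jdbc:sqlserver://<server-name>.database.windows.net:1433;"
    "database=<database-name>;encrypt=true;trustServerCertificate=false;"
)

# Hypothetical Gold table name
df = spark.read.table("gold.fact_sales")

(
    df.write.format("jdbc")
    .option("url", jdbc_url)
    .option("dbtable", "dbo.fact_sales")
    .option("user", dbutils.secrets.get(scope="asql", key="user"))
    .option("password", dbutils.secrets.get(scope="asql", key="password"))
    .mode("overwrite")  # full reload; append/merge would be needed for incremental loads
    .save()
)
```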
Or should I connect Power BI in DirectQuery mode? Wouldn't the cluster then have to compute the data after every filter selection?
Any advice on this topic?