Hi!
I have a workflow that runs my medallion architecture with DLT. Currently, I have a separate notebook for refreshing my Power BI semantic model, which works based on the method described in "Refresh a PowerBI dataset from Azure Databricks". However, now that I'm placing that notebook at the end of my workflow, I'm unsure how to set it all up properly. What is the best practice for connecting a Power BI semantic model to a Databricks (medallion) workflow?
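For context, the refresh notebook is essentially a thin wrapper around the Power BI REST API, roughly like the sketch below. The tenant/client IDs, secret scope, workspace ID, and dataset ID are placeholders for my own values:

```python
# Rough sketch of what my refresh notebook does (placeholders, not my real IDs)
import requests

TENANT_ID = "<tenant-id>"                                    # placeholder
CLIENT_ID = "<service-principal-app-id>"                     # placeholder
CLIENT_SECRET = dbutils.secrets.get("pbi", "client-secret")  # my Databricks secret scope
WORKSPACE_ID = "<powerbi-workspace-id>"                      # placeholder
DATASET_ID = "<semantic-model-id>"                           # placeholder

# 1. Get an Azure AD token for the Power BI REST API (client credentials flow)
token_resp = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": "https://analysis.windows.net/powerbi/api/.default",
    },
)
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# 2. Trigger a refresh of the semantic model
refresh_resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}/datasets/{DATASET_ID}/refreshes",
    headers={"Authorization": f"Bearer {access_token}"},
    json={"notifyOption": "NoNotification"},
)
refresh_resp.raise_for_status()  # 202 Accepted means the refresh was queued
```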
When trying this out, I get stuck at the step where I have to select a source in Power BI ("Get Data") and provide a cluster's HTTP path. But since the workflow runs on a serverless cluster, I'm not sure how this fits together... Also, just to confirm: should I select the "Azure Databricks" connector instead of plain "Databricks", since my storage is in an Azure Storage Account rather than Delta Lake?
Additionally, I'm looking into how to manage dataset refreshes of the Power BI semantic model. Is there a way to determine whether a full refresh is needed? Why refresh everything when the update is minimal? In Analysis Services there is a "Process Update" option, whereas a full process refreshes all data. How do I control the type of refresh from Power BI or Databricks?
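From what I can see in the enhanced (asynchronous) refresh options of the same REST API, the request body can apparently carry a refresh type and a list of tables, so in principle the notebook could decide per run how much to refresh. Is this the right approach? A hedged sketch of what I have in mind (the table names are just examples, not my real model):

```python
# Sketch only: using enhanced-refresh options of the same /refreshes endpoint
# to control how much gets refreshed per run. Table names are examples.
import requests

def trigger_refresh(access_token, workspace_id, dataset_id, full_refresh: bool):
    if full_refresh:
        body = {"type": "Full"}                  # reprocess the whole semantic model
    else:
        body = {
            "type": "Automatic",                 # let the service refresh only what it needs
            "objects": [                         # limit the refresh to tables that changed
                {"table": "gold_sales"},
                {"table": "gold_customers"},
            ],
        }
    resp = requests.post(
        f"https://api.powerbi.com/v1.0/myorg/groups/{workspace_id}/datasets/{dataset_id}/refreshes",
        headers={"Authorization": f"Bearer {access_token}"},
        json=body,
    )
    resp.raise_for_status()
```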
Thanks!