What is the preferred way of accessing a UC-enabled SQL Warehouse table from a Databricks Spark cluster?
My requirement is to fetch data from a SQL Warehouse table using complex queries, transform it in a PySpark notebook, and save the results.
To fetch data from a UC-enabled SQL Warehouse table, we can either use a JDBC connection or access the table directly with spark.sql().
Which one is suggested in this scenario? Also note that I need to schedule this notebook to run every 30 minutes.
Note: both the Spark cluster and the SQL Warehouse are UC enabled.
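For context, here is a minimal sketch of the direct spark.sql() approach I am referring to. All catalog, schema, table, and column names below are placeholders, not my actual tables:

```python
# Placeholder query against a UC table (three-level naming:
# catalog.schema.table) -- names here are illustrative only.
query = """
    SELECT c.customer_id, SUM(o.amount) AS total_amount
    FROM main.sales.orders o
    JOIN main.sales.customers c ON o.customer_id = c.customer_id
    GROUP BY c.customer_id
"""

# On a UC-enabled Databricks cluster this reads the table directly
# through the shared metastore, with no JDBC connection string needed:
# df = spark.sql(query)
# df.write.mode("overwrite").saveAsTable("main.sales.customer_totals")
```

The alternative would be spark.read.format("jdbc") pointed at the SQL Warehouse endpoint, which is what I am asking about.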