Hi @Tommy ,
Thanks for your question.
I would encourage you to temporarily verify this with a Pro SQL Warehouse instead of a Serverless SQL Warehouse, given the compute differences between the two: Pro compute resides in your data plane, while Serverless ...
Hi @mangosta , the System Tables docs have been updated with additional context on the `statement_text` here: https://docs.databricks.com/en/admin/system-tables/query-history.html#using-the-query-history-table
Thanks again for letting us know.
Hi @DineshOjha ,
You can reference the task-level parameters using `my_python_var = dbutils.widgets.get('my-task-level-param')`.
More information on task values can be found here: https://docs.databricks.com/en/jobs/task-values.html#set-task-values.
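As a minimal sketch of the retrieval above (the parameter name `my-task-level-param` is just an illustrative placeholder, and the fallback value is hypothetical): `dbutils` is only defined inside a Databricks notebook, so wrapping the call lets the same code run locally with a default.

```python
# Sketch: read a task-level parameter inside a Databricks notebook task.
# "my-task-level-param" and the fallback value are illustrative placeholders.
try:
    # Inside Databricks, dbutils is injected into the notebook's namespace.
    my_python_var = dbutils.widgets.get("my-task-level-param")
except NameError:
    # Outside Databricks (e.g. local testing), dbutils is undefined.
    my_python_var = "default-value"
```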
...
Were you able to review the documentation provided here: https://docs.databricks.com/en/compute/serverless/dependencies.html#install-notebook-dependencies?
Sure, I will presume you are using open-source Apache Spark outside of Azure Databricks. If so, you can use the delta-spark library at these Maven coordinates: https://mvnrepository.com/artifact/io.delta/delta-spark_2.12.
Further information is at the Delta ...
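A minimal sketch of wiring those Maven coordinates into OSS Spark via `spark-submit` (assuming Spark built against Scala 2.12; the Delta version shown and the script name `my_app.py` are placeholders you would adjust to your setup):

```shell
# Pull delta-spark from Maven and enable the Delta SQL extension and catalog.
# Version 3.2.0 and my_app.py are illustrative; match the Delta version to your Spark release.
spark-submit \
  --packages io.delta:delta-spark_2.12:3.2.0 \
  --conf spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension \
  --conf spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog \
  my_app.py
```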