Hi everyone, I'm trying to query data in an Azure Synapse Dedicated SQL Pool with .format("com.databricks.spark.sqldw"), following this documentation page:
Query data in Azure Synapse Analytics
The documentation says that an abfss temporary location (tempDir) is needed.
However, I found that even if I don't specify tempDir, the following code still works on DBR versions above 13.0.
I'd like to know whether there is documentation for this driver/connector, and where it saves its temporary files when I don't specify a location.
df = (spark.read
    .format("com.databricks.spark.sqldw")
    .option("url", url)
    # .option("tempDir", "abfss://tempdir@datalakefordatabricks555.dfs.core.windows.net/")
    # .option("forwardSparkAzureStorageCredentials", "true")
    .option("user", user)
    .option("password", password)
    .option("encrypt", "true")
    .option("trustServerCertificate", "false")
    .option("loginTimeout", "30")
    .option("query", pushdown_query)
    .option("fetchsize", 2000)
    .load()
)
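For reference, this is the pattern I understand the documentation to describe, with tempDir and storage-credential forwarding enabled (the abfss path is just my placeholder container; the other options are the same as above):

# Documented pattern as I understand it: stage data through an abfss tempDir
# and forward the Spark storage credentials to Synapse.
df_with_tempdir = (spark.read
    .format("com.databricks.spark.sqldw")
    .option("url", url)
    .option("tempDir", "abfss://tempdir@datalakefordatabricks555.dfs.core.windows.net/")
    .option("forwardSparkAzureStorageCredentials", "true")
    .option("user", user)
    .option("password", password)
    .option("query", pushdown_query)
    .load()
)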
Should I worry about the following issue if I don't specify any tempDir (which I'd prefer not to do unless necessary)?
"The Azure Synapse connector does not delete the temporary files that it creates in the Azure storage container. Databricks recommends that you periodically delete temporary files under the user-supplied tempDir location."
Temporary data management
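If cleanup does turn out to be necessary, my plan would be a periodic notebook job along these lines. This is only a sketch: it assumes a recent DBR where dbutils.fs.ls returns a modificationTime field (milliseconds since the epoch), and the tempdir container path is a placeholder.

import time

# Placeholder: the same container path I'd pass to the connector as tempDir
temp_dir = "abfss://tempdir@datalakefordatabricks555.dfs.core.windows.net/"

# Delete anything older than the retention window
retention_days = 2
cutoff_ms = (time.time() - retention_days * 24 * 60 * 60) * 1000

for entry in dbutils.fs.ls(temp_dir):
    # entry.modificationTime is in milliseconds since the Unix epoch
    if entry.modificationTime < cutoff_ms:
        # Second argument True = recursive delete of that temp subdirectory
        dbutils.fs.rm(entry.path, True)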