Data Engineering

The default location of temporary file in Azure Synapse Connector(com.databricks.spark.sqldw)

Chengcheng
New Contributor III

Hi everyone, I'm trying to query data in an Azure Synapse Dedicated SQL Pool according to the documentation, using:

.format("com.databricks.spark.sqldw")

 

Query data in Azure Synapse Analytics

It says that an abfss temporary location is needed.

But I found that even if I don't specify the tempDir, the following code still works on DBR versions above 13.0.

I'd like to know whether there is documentation for this driver/connector, and where it saves temporary files when I don't specify a location.

df = (spark.read
    .format("com.databricks.spark.sqldw")
    .option("url", url)
    # .option("tempDir", "abfss://tempdir@datalakefordatabricks555.dfs.core.windows.net/")
    # .option("forwardSparkAzureStorageCredentials", "true")
    .option("user", user)
    .option("password", password)
    .option("encrypt", "true")
    .option("trustServerCertificate", "false")
    .option("loginTimeout", "30")
    .option("query", pushdown_query)
    .option("fetchsize", 2000)
    .load()
)

Should I worry about the following issue if I don't specify any tempDir? (I'd rather not specify one unless it's necessary.)

"The Azure Synapse connector does not delete the temporary files that it creates in the Azure storage container. Databricks recommends that you periodically delete temporary files under the user-supplied tempDir location."

Temporary data management
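If a tempDir is specified, the periodic cleanup the docs recommend could be sketched roughly like this. This is only an illustrative helper, not part of the connector: the paths, container name, and the `find_stale_temp_paths` function are hypothetical, and on Databricks you would build the `entries` list from a `dbutils.fs.ls()` listing and remove paths with `dbutils.fs.rm(path, recurse=True)` (dbutils is only available inside a Databricks runtime).

```python
from datetime import datetime, timedelta, timezone

def find_stale_temp_paths(entries, max_age_days=7, now=None):
    """Return paths whose modification time is older than max_age_days.

    entries: list of (path, modified_at) tuples, e.g. assembled from a
    dbutils.fs.ls() listing of the tempDir container (hypothetical usage).
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [path for path, modified_at in entries if modified_at < cutoff]

# Example with made-up paths: one 10-day-old dir (stale), one 1-day-old (fresh).
now = datetime(2024, 1, 15, tzinfo=timezone.utc)
entries = [
    ("abfss://tempdir@account.dfs.core.windows.net/2024-01-05/",
     datetime(2024, 1, 5, tzinfo=timezone.utc)),
    ("abfss://tempdir@account.dfs.core.windows.net/2024-01-14/",
     datetime(2024, 1, 14, tzinfo=timezone.utc)),
]
stale = find_stale_temp_paths(entries, max_age_days=7, now=now)
# On Databricks, each stale path would then be deleted, e.g.:
#   for path in stale:
#       dbutils.fs.rm(path, recurse=True)
```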

0 REPLIES