02-14-2023 04:19 AM
Hi All,
I just wanted to know if there is any option to reduce the time taken to load a PySpark DataFrame into an Azure Synapse table using Databricks.
For example:
I have a PySpark DataFrame with around 40k records, and loading it into the Azure Synapse table from Databricks takes over 1 hour 10 minutes to write the complete data. I am using save mode 'overwrite' as per requirements.
Please let me know if there is any possible solution to reduce the time.
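For context, the write is presumably something like the following plain JDBC save, where `df` is the DataFrame in question and the server, database, credentials, and table name are all hypothetical placeholders:
```python
# Presumed current approach; all connection details are placeholders.
# A plain JDBC write inserts rows over the wire partition by partition,
# which is slow for anything beyond small tables.
(df.write
    .format("jdbc")
    .option("url", "jdbc:sqlserver://<server>.database.windows.net:1433;database=<db>")
    .option("dbtable", "dbo.target_table")
    .option("user", "<user>")
    .option("password", "<password>")
    .mode("overwrite")
    .save())
```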
Thanks,
Tinendra
02-14-2023 04:22 AM
Hi @Tinendra Kumar,
You can increase the DWUs on the Synapse dedicated SQL pool, and if possible use append mode when saving; that will help reduce the load time.
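On the Databricks side, the append suggestion is just a change of save mode. A sketch, with the same placeholder connection details as above:
```python
# Same write as before, but with append mode; URL, credentials, and
# table name are placeholders.
(df.write
    .format("jdbc")
    .option("url", "jdbc:sqlserver://<server>.database.windows.net:1433;database=<db>")
    .option("dbtable", "dbo.target_table")
    .option("user", "<user>")
    .option("password", "<password>")
    .mode("append")   # append instead of overwrite: no drop/recreate of the table
    .save())
```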
02-14-2023 04:26 AM
Hi @Ajay Pandey,
I don't have any control over the Azure side. Could you please tell me if there is any way to do this on the Spark/Databricks side?
02-14-2023 05:25 AM
Hi @Tinendra Kumar,
There is no option to check your permissions from within Databricks.
02-14-2023 05:04 AM
Have you checked this:
https://learn.microsoft.com/en-us/azure/databricks/archive/azure/synapse-polybase
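For what it's worth, that connector stages the DataFrame in ADLS and bulk-loads it into Synapse via PolyBase/COPY, which is generally much faster than row-by-row JDBC inserts. A minimal sketch, assuming an ADLS Gen2 staging container and SQL authentication (all server, storage, and table names are placeholders):
```python
# Sketch of a write through the Azure Synapse connector; every name
# below is a placeholder.
(df.write
    .format("com.databricks.spark.sqldw")
    .option("url",
            "jdbc:sqlserver://<server>.database.windows.net:1433;"
            "database=<db>;user=<user>;password=<password>")
    # Staging location in ADLS Gen2: the connector writes files here and
    # loads them into Synapse with PolyBase/COPY instead of row inserts.
    .option("tempDir", "abfss://<container>@<account>.dfs.core.windows.net/tmp")
    .option("forwardSparkAzureStorageCredentials", "true")
    .option("dbTable", "dbo.target_table")
    .mode("overwrite")
    .save())
```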
To be honest, I do not use Databricks to load data into Synapse. I write the data as Parquet/Delta Lake on our data lake and use ADF to copy it to Synapse if necessary. This goes pretty fast.
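A sketch of that first step (the abfss path is a placeholder); ADF or an external table can then pick the files up from the lake:
```python
# Land the DataFrame as Parquet (or Delta) on the data lake;
# the path is a placeholder.
(df.write
    .format("parquet")   # or "delta" for a Delta Lake table
    .mode("overwrite")
    .save("abfss://<container>@<account>.dfs.core.windows.net/curated/target_table"))
```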
Another option is to use Synapse Serverless or external tables on the Parquet files themselves.
02-16-2023 09:39 PM
Hi @Tinendra Kumar,
Hope all is well! Just wanted to check in to see if you were able to resolve your issue. If so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help.
We'd love to hear from you.
Thanks!