Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

How can we write a pandas DataFrame to Azure ADLS as an Excel file? When trying to write, it fails with an error like: protocol not known 'abfss'.

Contributor II

Esteemed Contributor

Can you please share the code snippet?

Contributor II

Currently, as I understand it, Databricks has no built-in support for writing Excel files with pandas directly to ABFSS. The suggested workaround is to convert the pandas DataFrame to a Spark DataFrame and then use the Spark Excel connector to write the Excel files. This link explains the details clearly for the same requirement.

With that approach, though, we don't have an option to add a background color, and we aren't able to autofit the rows and columns.

Valued Contributor II


You need to authenticate to ABFSS first.

# Configure authentication (service principal / OAuth)
service_credential = dbutils.secrets.get(scope="<scope>", key="<service-credential-key>")

spark.conf.set("fs.azure.account.auth.type.<storage-account>.dfs.core.windows.net", "OAuth")
spark.conf.set("fs.azure.account.oauth.provider.type.<storage-account>.dfs.core.windows.net", "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
spark.conf.set("fs.azure.account.oauth2.client.id.<storage-account>.dfs.core.windows.net", "<application-id>")
spark.conf.set("fs.azure.account.oauth2.client.secret.<storage-account>.dfs.core.windows.net", service_credential)
spark.conf.set("fs.azure.account.oauth2.client.endpoint.<storage-account>.dfs.core.windows.net", "https://login.microsoftonline.com/<directory-id>/oauth2/token")

You can check out the two links below:

Esteemed Contributor III

Please mount ADLS storage as described here:

Then write the pandas DataFrame as Excel into that directory.



Hi @Hubert Dudek,

The pandas API doesn't support the abfss protocol.

You have three options:

  • If you need to use pandas, write the Excel file to the local file system (DBFS) and then move it to ABFSS (for example, with dbutils)
  • Write as CSV directly to ABFSS with the Spark API (without using pandas)
  • Write the DataFrame as Excel directly to ABFSS with the Spark API, using a library like (without using pandas)


Fernando Arribas

Esteemed Contributor III

But once you mount it, you can write to it, as it is visible as a DBFS directory.

Have you tried writing to the local file system (for example, a path under /databricks/...)?

Anyway, I recommend you try writing with Spark (without pandas). pandas, without additional libraries, doesn't distribute the work, and with high volumes you will run into memory and performance problems...

Esteemed Contributor III

It is enough to use the pandas API on Spark, so it is distributed. Additionally, pandas has the to_excel method, but Spark DataFrames do not.

I'm not sure about that. When you call to_excel, all the data is loaded into the driver (as if you were doing a collect), so the write is not distributed and you can hit the memory and performance problems I mentioned.



Try writing with this library:





In the general scenario this shouldn't be an issue, since an Excel file can only handle a little over a million rows anyway. That said, your suggestion to write to DBFS and use dbutils to move the file to ABFSS should be the accepted answer.

I did that, but with a Spark DataFrame I'm not able to add a background color or cell alignment based on values in the Excel sheet.
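For the styling requirement, one way around the limitation is to stay in pandas and style the sheet through the openpyxl engine after writing. A hedged sketch, assuming openpyxl is installed and using made-up column names and thresholds; openpyxl has no true autofit, so column widths are approximated from the longest value:

```python
# Sketch: write with pandas + openpyxl, then color cells conditionally and
# approximate column autofit. Data, threshold, and path are illustrative.
import pandas as pd
from openpyxl.styles import PatternFill
from openpyxl.utils import get_column_letter

df = pd.DataFrame({"name": ["alpha", "bb"], "score": [40, 95]})

with pd.ExcelWriter("/tmp/styled.xlsx", engine="openpyxl") as writer:
    df.to_excel(writer, index=False, sheet_name="Sheet1")
    ws = writer.sheets["Sheet1"]  # the underlying openpyxl worksheet

    green = PatternFill(start_color="C6EFCE", end_color="C6EFCE", fill_type="solid")
    # Color "score" cells (column 2) green when the value exceeds 90.
    for row in range(2, ws.max_row + 1):  # row 1 holds the header
        cell = ws.cell(row=row, column=2)
        if cell.value > 90:
            cell.fill = green

    # Approximate "autofit": widen each column to its longest string value.
    for col_idx, col_name in enumerate(df.columns, start=1):
        width = max(len(col_name), int(df[col_name].astype(str).str.len().max())) + 2
        ws.column_dimensions[get_column_letter(col_idx)].width = width
```

The file still lands on local disk first; it can then be moved to ABFSS (for example with dbutils, as discussed above).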
