Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Eyespoop
by New Contributor II
  • 31858 Views
  • 4 replies
  • 4 kudos

Resolved! PySpark: Writing Parquet Files to the Azure Blob Storage Container

Currently I am having some issues with writing a parquet file to the Storage Container. My code runs, but whenever the DataFrame writer puts the parquet into blob storage, instead of a parquet file it is created as a f...

Latest Reply
amarv
New Contributor II
  • 4 kudos

This is my approach:

from databricks.sdk.runtime import dbutils
from pyspark.sql import DataFrame  # DataFrame lives in pyspark.sql, not pyspark.sql.types

output_base_url = "abfss://..."

def write_single_parquet_file(df: DataFrame, filename: str):
    print(f"Writing '{filename}.parquet' to ABFS")
    ...

  • 4 kudos
3 More Replies
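The truncated reply above hints at the usual workaround: Spark writes a directory of part files, so to get one named .parquet blob you coalesce to a single partition, write to a temporary directory, and then move the lone part file into place. A minimal local sketch of the rename step, under stated assumptions: on Databricks you would use dbutils.fs.ls / dbutils.fs.mv against the abfss:// path; here os/shutil stand in for those calls, and the part file is simulated rather than produced by Spark.

```python
# Sketch of the "promote the single part file" step. Assumptions:
# - df.coalesce(1).write.parquet(tmp_dir) has already produced one
#   part-*.parquet file in tmp_dir (simulated below with an empty file);
# - local os/shutil stand in for dbutils.fs on Databricks.
import glob
import os
import shutil
import tempfile

def promote_part_file(tmp_dir: str, target_path: str) -> str:
    """Move the single part-*.parquet file out of a Spark output dir."""
    parts = glob.glob(os.path.join(tmp_dir, "part-*.parquet"))
    if len(parts) != 1:
        raise RuntimeError(f"expected exactly one part file, found {len(parts)}")
    shutil.move(parts[0], target_path)
    shutil.rmtree(tmp_dir)  # remove the now-redundant Spark output dir (_SUCCESS etc.)
    return target_path

# Simulate Spark's output layout locally:
workdir = tempfile.mkdtemp()
spark_out = os.path.join(workdir, "df_out")
os.makedirs(spark_out)
open(os.path.join(spark_out, "part-00000-abc.snappy.parquet"), "wb").close()
open(os.path.join(spark_out, "_SUCCESS"), "wb").close()

final = promote_part_file(spark_out, os.path.join(workdir, "data.parquet"))
print(os.path.basename(final))  # -> data.parquet
```

The same pattern works against ABFS by replacing the glob/move/rmtree calls with dbutils.fs.ls, dbutils.fs.mv, and dbutils.fs.rm.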
vroste
by New Contributor III
  • 18496 Views
  • 8 replies
  • 5 kudos

Resolved! Unsupported Azure Scheme: abfss

Using Databricks Runtime 12.0, when attempting to mount an Azure blob storage container, I'm getting the following exception:

`IllegalArgumentException: Unsupported Azure Scheme: abfss`

dbutils.fs.mount(
    source="abfss://container@my-storage-accoun...

Latest Reply
AdamRink
New Contributor III
  • 5 kudos

What configs did you tweak? I'm having the same issue.

  • 5 kudos
7 More Replies
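A common alternative to mounting with the abfss scheme is to skip the mount entirely and configure direct access to ADLS Gen2 with a service principal via the ABFS Hadoop configuration keys. A hedged configuration sketch, runnable only in a Databricks notebook: the storage account name, client id, tenant id, and the secret scope/key are all placeholders, not values from this thread.

```python
# Direct abfss access via an OAuth service principal (Databricks-only
# config fragment; all angle-bracket values and the secret scope/key
# are placeholders).
storage = "<storage-account>"
spark.conf.set(f"fs.azure.account.auth.type.{storage}.dfs.core.windows.net", "OAuth")
spark.conf.set(
    f"fs.azure.account.oauth.provider.type.{storage}.dfs.core.windows.net",
    "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
)
spark.conf.set(f"fs.azure.account.oauth2.client.id.{storage}.dfs.core.windows.net", "<client-id>")
spark.conf.set(
    f"fs.azure.account.oauth2.client.secret.{storage}.dfs.core.windows.net",
    dbutils.secrets.get("<scope>", "<key>"),  # hypothetical secret scope/key
)
spark.conf.set(
    f"fs.azure.account.oauth2.client.endpoint.{storage}.dfs.core.windows.net",
    "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
)
# After this, paths like abfss://container@<storage-account>.dfs.core.windows.net/
# can be read and written directly, with no mount needed.
```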