Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Error writing parquet to specific container in Azure Data Lake

magnus778
New Contributor III

I'm retrieving two files from container1, transforming them and merging them before writing to container2 within the same Storage Account in Azure. I'm mounting container1, then unmounting it and mounting container2 before writing.
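The mount switching looks roughly like this (the storage account name and configs below are placeholders for my actual values):

configs = {...}  # service principal / OAuth settings omitted

# mount container1 and build the dataframe
dbutils.fs.mount(
    source="abfss://container1@<storage-account>.dfs.core.windows.net/",
    mount_point="/mnt/temp",
    extra_configs=configs)

# ... read the two files, transform and merge into df_spark ...

# swap the mount point over to container2
dbutils.fs.unmount("/mnt/temp")
dbutils.fs.mount(
    source="abfss://container2@<storage-account>.dfs.core.windows.net/",
    mount_point="/mnt/temp",
    extra_configs=configs)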

My code for writing the parquet:

spark.conf.set("spark.sql.sources.partitionOverwriteMode","dynamic")
df_spark.coalesce(1).write.option("header",True) \
        .partitionBy('ZMTART') \
        .mode("overwrite") \
        .parquet('/mnt/temp/')

I'm getting the following error when writing to container2:

---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
<command-3769031361803403> in <cell line: 2>()
      1 spark.conf.set("spark.sql.sources.partitionOverwriteMode","dynamic")
----> 2 df_spark.coalesce(1).write.option("header",True) \
      3         .partitionBy('ZMTART') \
      4         .mode("overwrite") \
      5         .parquet('/mnt/temp/')
 
/databricks/spark/python/pyspark/instrumentation_utils.py in wrapper(*args, **kwargs)
     46             start = time.perf_counter()
     47             try:
---> 48                 res = func(*args, **kwargs)
     49                 logger.log_success(
     50                     module_name, class_name, function_name, time.perf_counter() - start, signature
 
/databricks/spark/python/pyspark/sql/readwriter.py in parquet(self, path, mode, partitionBy, compression)
   1138             self.partitionBy(partitionBy)
   1139         self._set_opts(compression=compression)
-> 1140         self._jwrite.parquet(path)
   1141 

The odd thing is that writing the exact same dataframe back to container1 works fine, even with the same write code but a different mount. Generating random data in the script and writing that to container2 also works. Evidently the problem is specific to this dataframe in this container.

I'm fairly new to Databricks, so please let me know if additional information is needed.

1 ACCEPTED SOLUTION

Pat
Honored Contributor III

Hi @Magnus Asperud,

Your flow is:

1. Mount container1.
2. Build the dataframe. Here you should persist the data somewhere: creating a dataframe doesn't mean you have actually read the data from the container, so it won't still be accessible after unmounting. Spark is lazy, and the real read only happens when an action (like the write) runs. Make sure to materialize the merged data first. Not sure if this will work, but try caching and then forcing an action:

df_spark.cache()

df_spark.count()

3. Unmount container1.
4. Mount container2 and write (see the sketch after this list).
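Putting it together, a minimal sketch of the corrected ordering (the storage account name and configs are placeholders, not your actual values):

# build df_spark while container1 is still mounted at /mnt/temp
df_spark = ...  # read the two files, transform, merge

# materialize: count() is an action, so Spark reads the source files
# now and keeps the result in cache
df_spark.cache()
df_spark.count()

# only now swap the mount
dbutils.fs.unmount("/mnt/temp")
dbutils.fs.mount(
    source="abfss://container2@<storage-account>.dfs.core.windows.net/",
    mount_point="/mnt/temp",
    extra_configs=configs)

# the write no longer needs the container1 paths
spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")
df_spark.coalesce(1).write.option("header", True) \
        .partitionBy('ZMTART') \
        .mode("overwrite") \
        .parquet('/mnt/temp/')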


2 REPLIES


magnus778
New Contributor III

.cache() seems to work perfectly, thank you!
