Error writing parquet to specific container in Azure Data Lake

magnus778
New Contributor III

I'm retrieving two files from container1, transforming and merging them before writing to container2 within the same Storage Account in Azure. I'm mounting container1, unmounting it, and then mounting container2 before writing.
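For context, this is roughly how I swap the mounts (the storage account, secret scope, and key names below are placeholders, not my real values):

# Mount container1 and read the source files
account_key = dbutils.secrets.get(scope="my-scope", key="storage-key")
dbutils.fs.mount(
    source="wasbs://container1@mystorageaccount.blob.core.windows.net",
    mount_point="/mnt/temp",
    extra_configs={"fs.azure.account.key.mystorageaccount.blob.core.windows.net": account_key},
)
df1 = spark.read.parquet("/mnt/temp/file1")
df2 = spark.read.parquet("/mnt/temp/file2")
# ... transform and merge into df_spark ...

# Swap the mount to container2 before writing
dbutils.fs.unmount("/mnt/temp")
dbutils.fs.mount(
    source="wasbs://container2@mystorageaccount.blob.core.windows.net",
    mount_point="/mnt/temp",
    extra_configs={"fs.azure.account.key.mystorageaccount.blob.core.windows.net": account_key},
)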

My code for writing the parquet:

spark.conf.set("spark.sql.sources.partitionOverwriteMode","dynamic")
df_spark.coalesce(1).write.option("header",True) \
        .partitionBy('ZMTART') \
        .mode("overwrite") \
        .parquet('/mnt/temp/')

I'm getting the following error when writing to container2:

---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
<command-3769031361803403> in <cell line: 2>()
      1 spark.conf.set("spark.sql.sources.partitionOverwriteMode","dynamic")
----> 2 df_spark.coalesce(1).write.option("header",True) \
      3         .partitionBy('ZMTART') \
      4         .mode("overwrite") \
      5         .parquet('/mnt/temp/')
 
/databricks/spark/python/pyspark/instrumentation_utils.py in wrapper(*args, **kwargs)
     46             start = time.perf_counter()
     47             try:
---> 48                 res = func(*args, **kwargs)
     49                 logger.log_success(
     50                     module_name, class_name, function_name, time.perf_counter() - start, signature
 
/databricks/spark/python/pyspark/sql/readwriter.py in parquet(self, path, mode, partitionBy, compression)
   1138             self.partitionBy(partitionBy)
   1139         self._set_opts(compression=compression)
-> 1140         self._jwrite.parquet(path)
   1141 

The odd thing is that writing the exact same dataframe to container1 works fine, even using the same write code with a different mount. Generating random data in the script and writing it to container2 also works. Evidently, there is a problem with that specific dataframe in that specific container.

I'm fairly new to Databricks, so please let me know if additional information is needed.

1 ACCEPTED SOLUTION

Pat
Honored Contributor III

Hi @Magnus Asperud,

1. Mount container1 and read the files.

2. Persist the data somewhere: creating a DataFrame doesn't mean that you have actually read the data from the container, or that it stays accessible after unmounting. Make sure to store the merged data somewhere first. Not sure if this will work, but try forcing it into the cache (see the sketch after these steps):

df_spark.cache()

df_spark.count()

3. Unmount container1.

4. Mount container2 and write.
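Putting those steps together, a minimal sketch (the merge logic is a placeholder for whatever the notebook actually does; the mount swap is as in the question, and only the cache/count placement is the actual fix):

# Read and merge while container1 is still mounted at /mnt/temp
df_spark = df1.unionByName(df2)  # placeholder for the real transform/merge

# Materialize the result BEFORE unmounting: cache() alone is lazy,
# count() is the action that actually pulls the data from container1
df_spark.cache()
df_spark.count()

# Now it is safe to swap mounts
dbutils.fs.unmount("/mnt/temp")
# ... remount /mnt/temp to container2 as in the question ...

# The write is served from the cache, not from the unmounted container1
spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")
df_spark.coalesce(1).write \
        .partitionBy("ZMTART") \
        .mode("overwrite") \
        .parquet("/mnt/temp/")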


2 REPLIES


magnus778
New Contributor III

.cache() seems to work perfectly, thank you!
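In case it helps others, my understanding of why this fixes it: df_spark is lazy, so without the cache the write action was triggering the read of the source files after /mnt/temp had already been remounted to container2, where those files don't exist.

# Fails: the write action triggers the source read through the remounted path
df_spark.write.mode("overwrite").parquet("/mnt/temp/")

# Works: count() forces the read while container1 is still mounted,
# so the later write is served from the cache
df_spark.cache()
df_spark.count()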
