Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Spark: How to simultaneously read from and write to the same parquet file

olisch
New Contributor

How can I read a DataFrame from a parquet file, do transformations, and write the modified DataFrame back to the same parquet file?

If I attempt to do so, I get an error, understandably, because Spark reads lazily from the source and cannot overwrite it while it is still being read. Let me reproduce the problem:

df = spark.createDataFrame([(1, 10),(2, 20),(3, 30)], ['sex','date'])

# Save as parquet

df.repartition(1).write.format('parquet').mode('overwrite').save('.../temp')

# Load it back

df = spark.read.format('parquet').load('.../temp')

# Save it back - This produces ERROR

df.repartition(1).write.format('parquet').mode('overwrite').save('.../temp')

ERROR:

java.io.FileNotFoundException: Requested file maprfs:///mapr/.../temp/part-00000-f67d5a62-36f2-4dd2-855a-846f422e623f-c000.snappy.parquet does not exist. It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.

One workaround to this problem is to save the DataFrame to a parquet folder with a different name, delete the old parquet folder, and then rename the newly created folder to the old name. But this is a very inefficient way of doing it, especially for DataFrames with billions of rows.
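For what it's worth, a rough sketch of that workaround, assuming a Databricks environment where dbutils.fs is available (the paths are placeholders; on other file systems the equivalent filesystem commands would play the same role):

# sketch of the rename workaround; '/data/temp' and '/data/temp_new' are placeholder paths
df = spark.read.format('parquet').load('/data/temp')
df_transformed = df  # apply transformations here

# write to a new folder first, then swap it in place of the old one
df_transformed.repartition(1).write.format('parquet').mode('overwrite').save('/data/temp_new')
dbutils.fs.rm('/data/temp', recurse=True)
dbutils.fs.mv('/data/temp_new', '/data/temp', recurse=True)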

I did some research and found that people suggest running REFRESH TABLE to refresh the metadata, as can be seen here and here.
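For reference, the refresh being suggested is presumably along these lines; the table name and path below are placeholders, and these calls only invalidate Spark's cached metadata:

spark.sql('REFRESH TABLE my_table')          # SQL form
spark.catalog.refreshTable('my_table')       # equivalent catalog API call
spark.catalog.refreshByPath('/data/temp')    # for path-based reads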

Can anyone suggest how to read and then write back to exactly the same parquet file?

3 REPLIES

RamG
New Contributor II

This is how it was working for me:

import org.apache.spark.sql.functions._
import spark.implicits._ // needed for toDS / toDF on a Seq

val df = Seq((1, 10), (2, 20), (3, 30)).toDS.toDF("sex", "date")
df.show(false)

// save it

df.repartition(1).write.format("parquet").mode("overwrite").save(".../temp")

// read back again
val df1 = spark.read.format("parquet").load(".../temp")
val df2 = df1.withColumn("cleanup", lit("Quick silver want to cleanup"))

The two steps below are important: cache the DataFrame and then force a light action like show(1), without which a FileNotFoundException will be thrown.

df2.cache // cache to avoid FileNotFoundException
df2.show(2) // light action; println(df2.count) also fine

df2.repartition(1).write.format("parquet").mode("overwrite").save(".../temp")

df2.show(false)

It will work for sure; the only thing is that you need to cache and perform a small action like show(1). This is an alternative route I thought of proposing to users who absolutely need it.

Result: the file was read and overwritten.
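For PySpark users, the same cache-then-act pattern might look roughly like this (a sketch; the path is a placeholder):

from pyspark.sql.functions import lit

df1 = spark.read.format('parquet').load('/data/temp')
df2 = df1.withColumn('cleanup', lit('Quick silver want to cleanup'))

df2.cache()   # keep the data in memory so the overwrite does not remove files still needed by the read
df2.count()   # a small action to actually materialize the cache
df2.repartition(1).write.format('parquet').mode('overwrite').save('/data/temp')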


Thanks, it worked.

saravananraju
New Contributor II

Hi,

You can use insertInto instead of save. It will overwrite the target; there is no need to cache or persist your DataFrame.

df.write.mode("overwrite").insertInto("target_table")

~Saravanan
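Note that insertInto writes into a table registered in the metastore rather than a raw file path, and that table has to exist beforehand. A minimal PySpark sketch of this approach, with "target_table" as a placeholder name, might be:

# assumes a table named 'target_table' was created earlier, e.g. with df.write.saveAsTable('target_table')
df = spark.table('target_table')
df2 = df  # apply transformations here
df2.write.insertInto('target_table', overwrite=True)  # INSERT OVERWRITE into the table's existing location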
