Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

How do I prevent _success and _committed files in my write output?

PradeepRavi
New Contributor III

Is there a way to prevent the _SUCCESS and _committed files in my write output? It's tedious to navigate into every partition and delete the files by hand.

Note: the final output is stored in Azure ADLS.

6 REPLIES

AndrewSears
New Contributor III

This was recommended on StackOverflow though I haven't tested with ADLS yet.

sc._jsc.hadoopConfiguration().set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false")

Note it may impact the whole cluster.

You could also add a dbutils.fs.rm step to remove any marker files after the write.

cheers,

Andrew
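On a plain filesystem, the cleanup step Andrew mentions can be sketched with the standard library; on Databricks you would point dbutils.fs.rm at the ADLS path instead. The directory layout and marker-file names below are illustrative:

```python
from pathlib import Path

def remove_marker_files(output_dir: str) -> list[str]:
    """Delete Spark marker files (_SUCCESS, _committed_*, _started_*)
    from every partition under output_dir; return the names removed."""
    removed = []
    for pattern in ("_SUCCESS", "_committed_*", "_started_*"):
        for f in Path(output_dir).rglob(pattern):
            f.unlink()
            removed.append(f.name)
    return sorted(removed)

# Example against a throwaway partitioned layout:
import tempfile
tmp = Path(tempfile.mkdtemp())
(tmp / "part=1").mkdir()
(tmp / "part=1" / "_SUCCESS").touch()
(tmp / "part=1" / "_committed_123").touch()
(tmp / "part=1" / "data.parquet").touch()
print(remove_marker_files(str(tmp)))  # data.parquet is left untouched
```

On Databricks the equivalent would be dbutils.fs.rm on each marker path (or the whole directory with recurse=True, which also deletes data), so a selective walk like the above is safer.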

This solution works in my local IntelliJ setup, but not in a Databricks notebook.

AndrewSears
New Contributor III

Did you try with a new Databricks cluster using initialization scripts?

https://docs.databricks.com/user-guide/clusters/init-scripts.html

DD_Sharma
New Contributor III

A combination of the three properties below disables writing all of the transactional files that start with "_":

  1. We can disable the transaction logs of the Spark parquet write by setting "spark.sql.sources.commitProtocolClass = org.apache.spark.sql.execution.datasources.SQLHadoopMapReduceCommitProtocol". This suppresses the "_committed_<TID>" and "_started_<TID>" files, but the _SUCCESS, _common_metadata, and _metadata files are still generated.
  2. We can disable the _common_metadata and _metadata files with "parquet.enable.summary-metadata=false".
  3. We can disable the _SUCCESS file with "mapreduce.fileoutputcommitter.marksuccessfuljobs=false".
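Taken together, the three settings above can be applied in a notebook cell before the write. A sketch, assuming a running `spark` session; note that whether Hadoop-level properties take effect via spark.conf.set can vary, in which case the sc._jsc.hadoopConfiguration() route from the earlier reply is the alternative:

```python
# 1. Suppress the _committed_<TID> / _started_<TID> transaction files
spark.conf.set(
    "spark.sql.sources.commitProtocolClass",
    "org.apache.spark.sql.execution.datasources.SQLHadoopMapReduceCommitProtocol",
)
# 2. Suppress the _common_metadata / _metadata summary files
spark.conf.set("parquet.enable.summary-metadata", "false")
# 3. Suppress the _SUCCESS marker
spark.conf.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false")
```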

This is very helpful, thanks for the information. Just to add: if somebody wants to disable this at the cluster level for Spark 2.4.5, they can edit the cluster under Advanced Options -> Spark and add the properties above, using "<key> <value>" form like below:

parquet.enable.summary-metadata false

If you want to set it in a Databricks notebook, you can do it like this:

spark.conf.set("parquet.enable.summary-metadata", "false")

shan_chandra
Databricks Employee

Please find below the steps to remove the _SUCCESS, _committed, and _started files:

  1. spark.conf.set("spark.databricks.io.directoryCommit.createSuccessFile", "false") to remove the _SUCCESS file.
  2. Run the VACUUM command, repeating until the _committed and _started files are removed:
     spark.sql("VACUUM '<file-location>' RETAIN 0 HOURS")
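The two steps above can be combined in one notebook cell. A sketch only, assuming a running `spark` session on Databricks; the ADLS path and the DataFrame `df` are placeholders:

```python
# Hypothetical ADLS output location
out_path = "abfss://container@account.dfs.core.windows.net/output"

# 1. Don't create the _SUCCESS marker on commit
spark.conf.set("spark.databricks.io.directoryCommit.createSuccessFile", "false")

# Write the output (df is a placeholder DataFrame)
df.write.mode("overwrite").parquet(out_path)

# 2. Vacuum the directory; per the reply above, this may need to be
#    run more than once before the _committed/_started files are gone
spark.sql(f"VACUUM '{out_path}' RETAIN 0 HOURS")
```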
