How do I prevent _success and _committed files in my write output?
08-01-2018 09:36 PM
Is there a way to prevent the _SUCCESS and _committed files in my output? It's a tedious task to navigate to all the partitions and delete them.
Note: the final output is stored in Azure ADLS.
- Labels: Azure-databricks, Dataframes, Spark, Spark-sql
08-03-2018 04:15 AM
This was recommended on StackOverflow, though I haven't tested it with ADLS yet:
sc._jsc.hadoopConfiguration().set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false")
Note that it may affect the whole cluster.
You could also add a dbutils.fs.rm step to remove any created files.
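For example, a minimal sketch of both suggestions in a PySpark notebook (untested with ADLS; df and output_path are hypothetical placeholders for your own DataFrame and output location, while sc and dbutils are predefined in Databricks notebooks):

# Disable the _SUCCESS marker; this Hadoop setting applies to the whole cluster session.
sc._jsc.hadoopConfiguration().set(
    "mapreduce.fileoutputcommitter.marksuccessfuljobs", "false"
)

df.write.mode("overwrite").parquet(output_path)  # output_path is a placeholder

# Fallback cleanup: delete any marker files that were still written at the top level.
for f in dbutils.fs.ls(output_path):
    if f.name.startswith("_"):
        dbutils.fs.rm(f.path)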
cheers,
Andrew
08-07-2018 04:30 AM
This solution works in my local IntelliJ setup, but not in a Databricks notebook.
08-07-2018 08:46 AM
Did you try a new Databricks cluster configured with initialization scripts?
https://docs.databricks.com/user-guide/clusters/init-scripts.html
01-24-2020 04:53 AM
A combination of the three properties below disables writing all the transactional files that start with "_" (a combined sketch follows this list).
- Setting spark.sql.sources.commitProtocolClass = org.apache.spark.sql.execution.datasources.SQLHadoopMapReduceCommitProtocol disables the transaction logs of the Spark Parquet write. This removes the _committed_<TID> and _started_<TID> files, but the _SUCCESS, _common_metadata and _metadata files are still generated.
- Setting parquet.enable.summary-metadata=false disables the _common_metadata and _metadata files.
- Setting mapreduce.fileoutputcommitter.marksuccessfuljobs=false disables the _SUCCESS file.
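A minimal sketch of all three settings in a PySpark notebook (untested; spark is the predefined session, and the two Hadoop-level properties are set on the Hadoop configuration, mirroring the earlier answer):

# Switch back to the non-transactional commit protocol:
# no more _committed_<TID>/_started_<TID> files.
spark.conf.set(
    "spark.sql.sources.commitProtocolClass",
    "org.apache.spark.sql.execution.datasources.SQLHadoopMapReduceCommitProtocol",
)

# Hadoop-level properties: disable Parquet summary files and the _SUCCESS marker.
hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration()
hadoop_conf.set("parquet.enable.summary-metadata", "false")
hadoop_conf.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false")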
05-12-2020 07:22 PM
This is very helpful, thanks for the information. To add to it: if you want to disable this at cluster level for Spark 2.4.5, edit the cluster under Advanced Options > Spark and add the properties above as <variable> <value> pairs, one per line, like this:
parquet.enable.summary-metadata false
If you want to set it in a Databricks notebook instead, you can do it like this:
spark.conf.set("parquet.enable.summary-metadata", "false")
06-04-2022 11:57 AM
Please find below the steps to remove the _SUCCESS, _committed and _started files (a combined sketch follows this list).
- Set spark.conf.set("spark.databricks.io.directoryCommit.createSuccessFile", "false") to remove the _SUCCESS file.
- Run the VACUUM command multiple times until the _committed and _started files are removed:
spark.sql("VACUUM '<file-location>' RETAIN 0 HOURS")

