Databricks S3A error - java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory not found

77796
New Contributor II

We are getting the below error on runtimes 10.x and 11.x when writing to S3 via the saveAsNewAPIHadoopFile function. The same jobs run fine on runtimes 9.x and 7.x. The difference between 9.x and 10.x is that the former has Hadoop 2.7 bindings with Spark 3.1, whereas the latter has Hadoop 3.2 bindings with Spark 3.2. Is the Databricks runtime missing some jars? Any help is appreciated.

java.lang.RuntimeException: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory not found
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2720)
    at org.apache.hadoop.mapreduce.lib.output.PathOutputCommitterFactory.getCommitterFactory(PathOutputCommitterFactory.java:179)
    at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.getOutputCommitter(FileOutputFormat.java:336)
    at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.setupCommitter(HadoopMapReduceCommitProtocol.scala:116)
    at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.setupJob(HadoopMapReduceCommitProtocol.scala:195)
    at org.apache.spark.internal.io.SparkHadoopWriter$.write(SparkHadoopWriter.scala:83)
    at org.apache.spark.rdd.PairRDDFunctions.$anonfun$saveAsNewAPIHadoopDataset$1(PairRDDFunctions.scala:1078)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:165)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:125)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:411)
    at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:1076)
    at org.apache.spark.rdd.PairRDDFunctions.$anonfun$saveAsNewAPIHadoopFile$2(PairRDDFunctions.scala:995)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:165)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:125)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:411)
    at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopFile(PairRDDFunctions.scala:986)
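
On Hadoop 3.x, FileOutputFormat.getOutputCommitter delegates to PathOutputCommitterFactory, which resolves a committer factory class per filesystem scheme from the Hadoop configuration. Checking what the cluster resolves for s3a may narrow this down (a diagnostic sketch using the standard Hadoop 3.x keys; nothing here is Databricks-specific, and the values it prints will depend on the cluster):

// Diagnostic sketch: print the committer factory settings that
// PathOutputCommitterFactory consults for the s3a scheme.
val hadoopConf = spark.sparkContext.hadoopConfiguration
println(hadoopConf.get("mapreduce.outputcommitter.factory.class"))      // global factory override, if set
println(hadoopConf.get("mapreduce.outputcommitter.factory.scheme.s3a")) // per-scheme factory for s3a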

4 REPLIES

Anonymous
Not applicable

Why are you using that saveAsNewAPI function?

77796
New Contributor II

We have some internal OutputFileFormatter classes for mainframe and fixed-length data formats that support our Data Integration and Data Quality tools. We have been using them for legacy reasons, and they worked up to runtime 9.x.

77796
New Contributor II

We can reproduce the above error on runtimes 10.x and 11.x with the code below in a notebook.

import org.apache.hadoop.io.IntWritable
import org.apache.hadoop.io.Text
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat
import org.apache.spark.rdd.PairRDDFunctions

val l = List((10,"a"),(20,"b"),(30,"c"),(40,"d"))
val rdd = sc.parallelize(l)
val rddWritable = rdd.map(x => (new IntWritable(x._1), new Text(x._2)))
val pairRDD = new PairRDDFunctions(rddWritable)

pairRDD.saveAsNewAPIHadoopFile("s3a://bucket/testout.dat",
  classOf[IntWritable],
  classOf[Text],
  classOf[TextOutputFormat[IntWritable,Text]],
  spark.sparkContext.hadoopConfiguration)
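
If the s3a scheme has to be kept, one thing that might be worth trying (an untested sketch continuing from the snippet above; whether the Databricks runtime honours this override is an assumption) is pointing the s3a scheme back at the stock FileOutputCommitter factory before writing, so the missing S3ACommitterFactory class is never looked up:

// Untested workaround sketch: map the s3a scheme back to the default
// FileOutputCommitter factory. Key and class name are standard Hadoop 3.x.
val hadoopConf = spark.sparkContext.hadoopConfiguration
hadoopConf.set("mapreduce.outputcommitter.factory.scheme.s3a",
  "org.apache.hadoop.mapreduce.lib.output.FileOutputCommitterFactory")
pairRDD.saveAsNewAPIHadoopFile("s3a://bucket/testout.dat",
  classOf[IntWritable],
  classOf[Text],
  classOf[TextOutputFormat[IntWritable,Text]],
  hadoopConf)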

77796
New Contributor II

We have resolved this issue by using the s3 scheme instead of s3a, i.e. pairRDD.saveAsNewAPIHadoopFile("s3://bucket/testout.dat", ...).
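
For reference, this is the call from the reproduction snippet above with only the scheme changed:

pairRDD.saveAsNewAPIHadoopFile("s3://bucket/testout.dat",
  classOf[IntWritable],
  classOf[Text],
  classOf[TextOutputFormat[IntWritable,Text]],
  spark.sparkContext.hadoopConfiguration)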
