<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Databricks S3A error - java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory not found in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/databricks-s3a-error-java-lang-classnotfoundexception-class-org/m-p/33772#M24710</link>
    <description>&lt;P&gt;We can reproduce the above error for runtime 10.x and 11.x using the below code in a notebook. &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;import org.apache.hadoop.io.IntWritable&lt;/P&gt;&lt;P&gt;import org.apache.hadoop.io.Text&lt;/P&gt;&lt;P&gt;import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat&lt;/P&gt;&lt;P&gt;import org.apache.spark.rdd.PairRDDFunctions&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;val l = List((10,"a"),(20,"b"),(30,"c"),(40,"d"))&lt;/P&gt;&lt;P&gt;val rdd = sc.parallelize(l)&lt;/P&gt;&lt;P&gt;val rddWritable = rdd.map(x=&amp;gt; (new IntWritable(x._1), new Text(x._2)))&lt;/P&gt;&lt;P&gt;val pairRDD = new PairRDDFunctions(rddWritable)&lt;/P&gt;&lt;P&gt;pairRDD.saveAsNewAPIHadoopFile("s3a://bucket/testout.dat",&lt;/P&gt;&lt;P&gt;&amp;nbsp;classOf[IntWritable],&lt;/P&gt;&lt;P&gt;&amp;nbsp;classOf[Text],&lt;/P&gt;&lt;P&gt;&amp;nbsp;classOf[TextOutputFormat[IntWritable,Text]],&lt;/P&gt;&lt;P&gt;&amp;nbsp;spark.sparkContext.hadoopConfiguration)&lt;/P&gt;</description>
    <pubDate>Sat, 27 Aug 2022 18:48:12 GMT</pubDate>
    <dc:creator>77796</dc:creator>
    <dc:date>2022-08-27T18:48:12Z</dc:date>
    <item>
      <title>Databricks S3A error - java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory not found</title>
      <link>https://community.databricks.com/t5/data-engineering/databricks-s3a-error-java-lang-classnotfoundexception-class-org/m-p/33769#M24707</link>
      <description>&lt;P&gt;We are getting the below error on runtime 10.x and 11.x when writing to S3 via the saveAsNewAPIHadoopFile function. The same jobs run fine on runtime 9.x and 7.x. The difference between 9.x and 10.x is that the former has Hadoop 2.7 bindings with Spark 3.1, whereas the latter has Hadoop 3.2 bindings with Spark 3.2. Is the Databricks runtime missing some jars? Any help is appreciated.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;java.lang.RuntimeException: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory not found&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2720)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;at org.apache.hadoop.mapreduce.lib.output.PathOutputCommitterFactory.getCommitterFactory(PathOutputCommitterFactory.java:179)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.getOutputCommitter(FileOutputFormat.java:336)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.setupCommitter(HadoopMapReduceCommitProtocol.scala:116)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;at 
org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.setupJob(HadoopMapReduceCommitProtocol.scala:195)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;at org.apache.spark.internal.io.SparkHadoopWriter$.write(SparkHadoopWriter.scala:83)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;at org.apache.spark.rdd.PairRDDFunctions.$anonfun$saveAsNewAPIHadoopDataset$1(PairRDDFunctions.scala:1078)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:165)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:125)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;at 
org.apache.spark.rdd.RDD.withScope(RDD.scala:411)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:1076)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;at org.apache.spark.rdd.PairRDDFunctions.$anonfun$saveAsNewAPIHadoopFile$2(PairRDDFunctions.scala:995)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:165)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:125)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;at org.apache.spark.rdd.RDD.withScope(RDD.scala:411)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;at 
org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopFile(PairRDDFunctions.scala:986)&lt;/P&gt;</description>
      <pubDate>Tue, 23 Aug 2022 16:21:15 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/databricks-s3a-error-java-lang-classnotfoundexception-class-org/m-p/33769#M24707</guid>
      <dc:creator>77796</dc:creator>
      <dc:date>2022-08-23T16:21:15Z</dc:date>
    </item>
    <item>
      <title>Re: Databricks S3A error - java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory not found</title>
      <link>https://community.databricks.com/t5/data-engineering/databricks-s3a-error-java-lang-classnotfoundexception-class-org/m-p/33770#M24708</link>
      <description>&lt;P&gt;Why are you using that saveAsNewAPI function?&lt;/P&gt;</description>
      <pubDate>Tue, 23 Aug 2022 20:48:00 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/databricks-s3a-error-java-lang-classnotfoundexception-class-org/m-p/33770#M24708</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2022-08-23T20:48:00Z</dc:date>
    </item>
    <item>
      <title>Re: Databricks S3A error - java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory not found</title>
      <link>https://community.databricks.com/t5/data-engineering/databricks-s3a-error-java-lang-classnotfoundexception-class-org/m-p/33771#M24709</link>
      <description>&lt;P&gt;We have some internal OutputFileFormatters for mainframe and fixed-length data formats to support our Data Integration and Data Quality tools. We have been using them for legacy reasons, and they were working up to runtime 9.x.&lt;/P&gt;</description>
      <pubDate>Tue, 23 Aug 2022 21:27:50 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/databricks-s3a-error-java-lang-classnotfoundexception-class-org/m-p/33771#M24709</guid>
      <dc:creator>77796</dc:creator>
      <dc:date>2022-08-23T21:27:50Z</dc:date>
    </item>
    <item>
      <title>Re: Databricks S3A error - java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory not found</title>
      <link>https://community.databricks.com/t5/data-engineering/databricks-s3a-error-java-lang-classnotfoundexception-class-org/m-p/33772#M24710</link>
      <description>&lt;P&gt;We can reproduce the above error for runtime 10.x and 11.x using the below code in a notebook. &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;import org.apache.hadoop.io.IntWritable&lt;/P&gt;&lt;P&gt;import org.apache.hadoop.io.Text&lt;/P&gt;&lt;P&gt;import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat&lt;/P&gt;&lt;P&gt;import org.apache.spark.rdd.PairRDDFunctions&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;val l = List((10,"a"),(20,"b"),(30,"c"),(40,"d"))&lt;/P&gt;&lt;P&gt;val rdd = sc.parallelize(l)&lt;/P&gt;&lt;P&gt;val rddWritable = rdd.map(x=&amp;gt; (new IntWritable(x._1), new Text(x._2)))&lt;/P&gt;&lt;P&gt;val pairRDD = new PairRDDFunctions(rddWritable)&lt;/P&gt;&lt;P&gt;pairRDD.saveAsNewAPIHadoopFile("s3a://bucket/testout.dat",&lt;/P&gt;&lt;P&gt;&amp;nbsp;classOf[IntWritable],&lt;/P&gt;&lt;P&gt;&amp;nbsp;classOf[Text],&lt;/P&gt;&lt;P&gt;&amp;nbsp;classOf[TextOutputFormat[IntWritable,Text]],&lt;/P&gt;&lt;P&gt;&amp;nbsp;spark.sparkContext.hadoopConfiguration)&lt;/P&gt;</description>
      <pubDate>Sat, 27 Aug 2022 18:48:12 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/databricks-s3a-error-java-lang-classnotfoundexception-class-org/m-p/33772#M24710</guid>
      <dc:creator>77796</dc:creator>
      <dc:date>2022-08-27T18:48:12Z</dc:date>
    </item>
    <item>
      <title>Re: Databricks S3A error - java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory not found</title>
      <link>https://community.databricks.com/t5/data-engineering/databricks-s3a-error-java-lang-classnotfoundexception-class-org/m-p/33773#M24711</link>
      <description>&lt;P&gt;We have resolved this issue by using the s3 scheme instead of s3a, i.e. pairRDD.saveAsNewAPIHadoopFile("s3://bucket/testout.dat",&lt;/P&gt;</description>
      <pubDate>Sun, 28 Aug 2022 16:25:08 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/databricks-s3a-error-java-lang-classnotfoundexception-class-org/m-p/33773#M24711</guid>
      <dc:creator>77796</dc:creator>
      <dc:date>2022-08-28T16:25:08Z</dc:date>
    </item>
  </channel>
</rss>