<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Issue: NoSuchMethodError in Spark Job While Upgrading to Databricks 15.5 LTS in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/issue-nosuchmethoderror-in-spark-job-while-upgrading-to/m-p/104553#M41796</link>
    <description>&lt;P&gt;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/141200"&gt;@DBonomo&lt;/a&gt;&amp;nbsp;, did you find any workaround for this?&lt;/P&gt;</description>
    <pubDate>Tue, 07 Jan 2025 15:49:11 GMT</pubDate>
    <dc:creator>sahil_s_jain</dc:creator>
    <dc:date>2025-01-07T15:49:11Z</dc:date>
    <item>
      <title>Issue: NoSuchMethodError in Spark Job While Upgrading to Databricks 15.5 LTS</title>
      <link>https://community.databricks.com/t5/data-engineering/issue-nosuchmethoderror-in-spark-job-while-upgrading-to/m-p/104279#M41701</link>
      <description>&lt;H4&gt;&lt;SPAN&gt;Problem Description&lt;/SPAN&gt;&lt;/H4&gt;&lt;P&gt;&lt;SPAN&gt;I am attempting to upgrade my application from Databricks runtime version &lt;/SPAN&gt;&lt;SPAN&gt;&lt;STRONG&gt;12.2 LTS&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;SPAN&gt; to &lt;/SPAN&gt;&lt;SPAN&gt;&lt;STRONG&gt;15.5 LTS&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;SPAN&gt;. During this upgrade, my Spark job fails with the following error:&lt;/SPAN&gt;&lt;/P&gt;&lt;PRE&gt;&lt;SPAN&gt;java.lang.NoSuchMethodError: org.apache.spark.scheduler.SparkListenerApplicationEnd.&amp;lt;init&amp;gt;(J)V&lt;/SPAN&gt;&lt;/PRE&gt;&lt;H4&gt;&lt;SPAN&gt;Root Cause Analysis&lt;/SPAN&gt;&lt;/H4&gt;&lt;UL&gt;&lt;LI&gt;&lt;P&gt;&lt;SPAN&gt;&lt;STRONG&gt;Spark Version in Databricks 15.5 LTS&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;SPAN&gt;: The runtime includes &lt;/SPAN&gt;&lt;SPAN&gt;&lt;STRONG&gt;Apache Spark 3.5.x&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;SPAN&gt;, which defines the &lt;/SPAN&gt;&lt;SPAN&gt;SparkListenerApplicationEnd&lt;/SPAN&gt;&lt;SPAN&gt; constructor as:&lt;/SPAN&gt;&lt;/P&gt;&lt;PRE&gt;&lt;SPAN&gt;public SparkListenerApplicationEnd(long time)&lt;/SPAN&gt;&lt;/PRE&gt;&lt;P&gt;&lt;SPAN&gt;This constructor takes a single &lt;/SPAN&gt;&lt;SPAN&gt;long&lt;/SPAN&gt;&lt;SPAN&gt; parameter.&lt;/SPAN&gt;&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;&lt;SPAN&gt;&lt;STRONG&gt;Conflicting Spark Library in Databricks&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;SPAN&gt;: The error arises due to a conflicting library: &lt;/SPAN&gt;&lt;SPAN&gt;----ws_3_5--core--core-hive-2.3__hadoop-3.2_2.12_deploy.jar&lt;/SPAN&gt;&lt;SPAN&gt;. 
This library includes a different version of the &lt;/SPAN&gt;&lt;SPAN&gt;SparkListenerApplicationEnd&lt;/SPAN&gt;&lt;SPAN&gt; class, which defines the constructor as:&lt;/SPAN&gt;&lt;/P&gt;&lt;PRE&gt;&lt;SPAN&gt;public SparkListenerApplicationEnd(long time, scala.Option&amp;lt;Object&amp;gt; exitCode)&lt;/SPAN&gt;&lt;/PRE&gt;&lt;P&gt;&lt;SPAN&gt;This constructor signature is present in the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;&lt;STRONG&gt;Spark 4.0.0-preview2&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;SPAN&gt; version.&lt;/SPAN&gt;&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;&lt;SPAN&gt;&lt;STRONG&gt;Impact&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;SPAN&gt;: At runtime, the JVM attempts to invoke the single-parameter constructor (&lt;/SPAN&gt;&lt;SPAN&gt;&amp;lt;init&amp;gt;(J)V&lt;/SPAN&gt;&lt;SPAN&gt;) but fails because the class loaded from the conflicting library only provides the two-parameter version. This mismatch leads to the &lt;/SPAN&gt;&lt;SPAN&gt;NoSuchMethodError&lt;/SPAN&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;/P&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&lt;SPAN&gt;Thank you in advance for your support!&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 06 Jan 2025 05:59:50 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/issue-nosuchmethoderror-in-spark-job-while-upgrading-to/m-p/104279#M41701</guid>
      <dc:creator>sahil_s_jain</dc:creator>
      <dc:date>2025-01-06T05:59:50Z</dc:date>
    </item>
    <item>
      <title>Re: Issue: NoSuchMethodError in Spark Job While Upgrading to Databricks 15.5 LTS</title>
      <link>https://community.databricks.com/t5/data-engineering/issue-nosuchmethoderror-in-spark-job-while-upgrading-to/m-p/104329#M41706</link>
      <description>&lt;P class="_1t7bu9h1 paragraph"&gt;&lt;SPAN&gt;The error you are encountering seems to be related to&amp;nbsp;the runtime includes Apache Spark 3.5.x, which defines the constructor as &lt;CODE&gt;public SparkListenerApplicationEnd(long time)&lt;/CODE&gt;, while a conflicting library (&lt;CODE&gt;----ws_3_5--core--core-hive-2.3__hadoop-3.2_2.12_deploy.jar&lt;/CODE&gt;) expects a different version of the constructor: &lt;CODE&gt;public SparkListenerApplicationEnd(long time, scala.Option&amp;lt;Object&amp;gt; exitCode)&lt;/CODE&gt;.&lt;/SPAN&gt;&lt;/P&gt;
&lt;DIV class="_1sijkvt3"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P class="_1t7bu9h1 paragraph"&gt;This issue occurs because the JVM attempts to use the single-parameter constructor but fails due to the conflicting library expecting the two-parameter version, leading to the &lt;CODE&gt;NoSuchMethodError&lt;/CODE&gt;.&lt;/P&gt;
&lt;P class="_1t7bu9h1 paragraph"&gt;To resolve this issue, you can try the following steps:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;
&lt;P class="_1t7bu9h1 paragraph"&gt;&lt;STRONG&gt;Identify and Remove Conflicting Libraries&lt;/STRONG&gt;: Check your dependencies and remove or update the conflicting library that includes the SparkListenerApplicationEnd class with the two-parameter constructor. Ensure that all libraries are compatible with Apache Spark 3.5.x.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P class="_1t7bu9h1 paragraph"&gt;&lt;SPAN&gt;&lt;STRONG&gt;Update Dependencies&lt;/STRONG&gt;: Ensure that all your project dependencies are updated to versions compatible with Databricks runtime 15.5 LTS and Apache Spark 3.5.x.&lt;/SPAN&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;/OL&gt;</description>
      <pubDate>Mon, 06 Jan 2025 12:09:09 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/issue-nosuchmethoderror-in-spark-job-while-upgrading-to/m-p/104329#M41706</guid>
      <dc:creator>Walter_C</dc:creator>
      <dc:date>2025-01-06T12:09:09Z</dc:date>
    </item>
    <item>
      <title>Re: Issue: NoSuchMethodError in Spark Job While Upgrading to Databricks 15.5 LTS</title>
      <link>https://community.databricks.com/t5/data-engineering/issue-nosuchmethoderror-in-spark-job-while-upgrading-to/m-p/104332#M41709</link>
<description>&lt;P&gt;The issue is that Databricks 15.4 LTS&amp;nbsp;includes the&amp;nbsp;&lt;SPAN&gt;ws_3_5--core--core-hive-2.3__hadoop-3.2_2.12_deploy.jar library, which is not compatible with Spark 3.5.x. Spark 3.5.x defines a single-argument&amp;nbsp;SparkListenerApplicationEnd constructor.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Databricks 15.4 LTS includes Spark 3.5.x, so the&amp;nbsp;ws_3_5--core--core-hive-2.3__hadoop-3.2_2.12_deploy.jar library should be compatible with Spark 3.5.x. But it is not.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 06 Jan 2025 12:19:33 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/issue-nosuchmethoderror-in-spark-job-while-upgrading-to/m-p/104332#M41709</guid>
      <dc:creator>sahil_s_jain</dc:creator>
      <dc:date>2025-01-06T12:19:33Z</dc:date>
    </item>
    <item>
      <title>Re: Issue: NoSuchMethodError in Spark Job While Upgrading to Databricks 15.5 LTS</title>
      <link>https://community.databricks.com/t5/data-engineering/issue-nosuchmethoderror-in-spark-job-while-upgrading-to/m-p/104500#M41770</link>
<description>&lt;P&gt;I am trying to initialize the class org.apache.spark.scheduler.SparkListenerApplicationEnd on Databricks 15.4 LTS.&lt;/P&gt;&lt;P&gt;Spark 3.5.0 expects a single-argument constructor: org.apache.spark.scheduler.SparkListenerApplicationEnd(long time)&lt;/P&gt;&lt;P&gt;Whereas the class packaged in the Databricks jar "----ws_3_5--core--core-hive-2.3__hadoop-3.2_2.12_deploy.jar" expects a 2-argument constructor, i.e.&lt;/P&gt;&lt;P&gt;org.apache.spark.scheduler.SparkListenerApplicationEnd(long time, scala.Option&amp;lt;Object&amp;gt; exitCode)&lt;/P&gt;&lt;P&gt;This 2-argument constructor is in line with the Spark 4.0.0-preview2 version and NOT with Spark 3.5.0.&lt;/P&gt;&lt;P&gt;This is causing a conflict. Can you please check this version issue in the Databricks cluster binaries?&lt;/P&gt;</description>
      <pubDate>Tue, 07 Jan 2025 12:03:48 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/issue-nosuchmethoderror-in-spark-job-while-upgrading-to/m-p/104500#M41770</guid>
      <dc:creator>sahil_s_jain</dc:creator>
      <dc:date>2025-01-07T12:03:48Z</dc:date>
    </item>
    <item>
      <title>Re: Issue: NoSuchMethodError in Spark Job While Upgrading to Databricks 15.5 LTS</title>
      <link>https://community.databricks.com/t5/data-engineering/issue-nosuchmethoderror-in-spark-job-while-upgrading-to/m-p/104549#M41794</link>
<description>&lt;P&gt;I can attest to this being the case as well. I ran into this issue trying to implement an updated form of the&amp;nbsp;&lt;/P&gt;&lt;DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&lt;SPAN&gt;com.microsoft.sqlserver.jdbc.spark connector, and found that the implementation in DBR 15.4 LTS is actually mapped to master (the current Spark 4.0 working branch).&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="DBonomo_0-1736264319353.png" style="width: 400px;"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/13912i70EB1D5C14EB78AC/image-size/medium?v=v2&amp;amp;px=400" role="button" title="DBonomo_0-1736264319353.png" alt="DBonomo_0-1736264319353.png" /&gt;&lt;/span&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="DBonomo_1-1736264341599.png" style="width: 400px;"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/13913iAD337EC569A8EA8B/image-size/medium?v=v2&amp;amp;px=400" role="button" title="DBonomo_1-1736264341599.png" alt="DBonomo_1-1736264341599.png" /&gt;&lt;/span&gt;&lt;P&gt;You can reference the 3.5 implementation&amp;nbsp;&lt;A href="https://github.com/apache/spark/blob/branch-3.5/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala" target="_self"&gt;here&lt;/A&gt;&amp;nbsp;compared to the master branch version&amp;nbsp;&lt;A href="https://github.com/apache/spark/blob/204c6729811789cd271627f5d45dfda92176e119/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala#L271" target="_self"&gt;here&lt;/A&gt;.&lt;/P&gt;&lt;/DIV&gt;&lt;/DIV&gt;</description>
      <pubDate>Tue, 07 Jan 2025 15:39:53 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/issue-nosuchmethoderror-in-spark-job-while-upgrading-to/m-p/104549#M41794</guid>
      <dc:creator>DBonomo</dc:creator>
      <dc:date>2025-01-07T15:39:53Z</dc:date>
    </item>
    <item>
      <title>Re: Issue: NoSuchMethodError in Spark Job While Upgrading to Databricks 15.5 LTS</title>
      <link>https://community.databricks.com/t5/data-engineering/issue-nosuchmethoderror-in-spark-job-while-upgrading-to/m-p/104553#M41796</link>
      <description>&lt;P&gt;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/141200"&gt;@DBonomo&lt;/a&gt;&amp;nbsp;, did you find any workaround for this?&lt;/P&gt;</description>
      <pubDate>Tue, 07 Jan 2025 15:49:11 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/issue-nosuchmethoderror-in-spark-job-while-upgrading-to/m-p/104553#M41796</guid>
      <dc:creator>sahil_s_jain</dc:creator>
      <dc:date>2025-01-07T15:49:11Z</dc:date>
    </item>
    <item>
      <title>Re: Issue: NoSuchMethodError in Spark Job While Upgrading to Databricks 15.5 LTS</title>
      <link>https://community.databricks.com/t5/data-engineering/issue-nosuchmethoderror-in-spark-job-while-upgrading-to/m-p/104577#M41802</link>
<description>&lt;P&gt;No, I am currently downgrading to an older DBR (13.3) and running these jobs specifically on that version. That brings its own suite of problems though.&lt;/P&gt;</description>
      <pubDate>Tue, 07 Jan 2025 17:20:08 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/issue-nosuchmethoderror-in-spark-job-while-upgrading-to/m-p/104577#M41802</guid>
      <dc:creator>DBonomo</dc:creator>
      <dc:date>2025-01-07T17:20:08Z</dc:date>
    </item>
    <item>
      <title>Re: Issue: NoSuchMethodError in Spark Job While Upgrading to Databricks 15.5 LTS</title>
      <link>https://community.databricks.com/t5/data-engineering/issue-nosuchmethoderror-in-spark-job-while-upgrading-to/m-p/128909#M48370</link>
<description>&lt;P&gt;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/141200"&gt;@DBonomo&lt;/a&gt;&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/136821"&gt;@sahil_s_jain&lt;/a&gt;&amp;nbsp;We can write a separate getSchema method inside the BulkCopyUtils.scala file and call that method instead of referencing it from Spark. You can add the function below to the&amp;nbsp;BulkCopyUtils.scala file and build it locally. You can then call it as --&amp;gt; `val tableCols = BulkCopyJdbcUtils.getSchema(rs, JdbcDialects.get(url))`&lt;/P&gt;
&lt;LI-CODE lang="ruby"&gt;import org.apache.spark.sql.jdbc.JdbcDialect


/**
* Utility object containing getSchema implementation for Spark 3.5
* This replaces the call to org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils.getSchema
* to avoid method signature conflicts in DBR 15.4
*/
object BulkCopyJdbcUtils {

/**
* Takes a [[ResultSet]] and returns its Catalyst schema.
* This is the Spark 3.5 version with 3 parameters.
*
* @param resultSet The ResultSet to extract schema from
* @param dialect The JDBC dialect to use for type mapping
* @param alwaysNullable If true, all the columns are nullable.
* @return A [[StructType]] giving the Catalyst schema.
* @throws SQLException if the schema contains an unsupported type.
*/
def getSchema(
resultSet: ResultSet,
dialect: JdbcDialect,
alwaysNullable: Boolean = false): StructType = {
val rsmd = resultSet.getMetaData
val ncols = rsmd.getColumnCount
val fields = new Array[StructField](ncols)
var i = 0
while (i &amp;lt; ncols) {
val columnName = rsmd.getColumnLabel(i + 1)
val dataType = rsmd.getColumnType(i + 1)
val typeName = rsmd.getColumnTypeName(i + 1)
val fieldSize = rsmd.getPrecision(i + 1)
val fieldScale = rsmd.getScale(i + 1)
val isSigned = {
try {
rsmd.isSigned(i + 1)
} catch {
// Workaround for HIVE-14684:
case e: SQLException if
e.getMessage == "Method not supported" &amp;amp;&amp;amp;
rsmd.getClass.getName == "org.apache.hive.jdbc.HiveResultSetMetaData" =&amp;gt; true
}
}
val nullable = if (alwaysNullable) {
true
} else {
rsmd.isNullable(i + 1) != ResultSetMetaData.columnNoNulls
}
val metadata = new MetadataBuilder()
.putString("name", columnName)
.putLong("scale", fieldScale)
.build()

val columnType = getCatalystType(dataType, typeName, fieldSize, fieldScale, isSigned)
fields(i) = StructField(columnName, columnType, nullable, metadata)
i = i + 1
}
new StructType(fields)
}

/**
* Maps a JDBC type to a Catalyst type using Spark 3.5 logic.
* Fixed DecimalType.bounded compatibility issue.
*/
private def getCatalystType(
sqlType: Int,
typeName: String,
precision: Int,
scale: Int,
signed: Boolean): DataType = {

val answer = sqlType match {
// scalastyle:off
case java.sql.Types.ARRAY =&amp;gt; null
case java.sql.Types.BIGINT =&amp;gt; if (signed) { LongType } else { DecimalType(20,0) }
case java.sql.Types.BINARY =&amp;gt; BinaryType
case java.sql.Types.BIT =&amp;gt; BooleanType // @see JdbcDialect for quirks
case java.sql.Types.BLOB =&amp;gt; BinaryType
case java.sql.Types.BOOLEAN =&amp;gt; BooleanType
case java.sql.Types.CHAR =&amp;gt; StringType
case java.sql.Types.CLOB =&amp;gt; StringType
case java.sql.Types.DATALINK =&amp;gt; null
case java.sql.Types.DATE =&amp;gt; DateType
case java.sql.Types.DECIMAL
if precision != 0 || scale != 0 =&amp;gt; createDecimalType(precision, scale)
case java.sql.Types.DECIMAL =&amp;gt; DecimalType.SYSTEM_DEFAULT
case java.sql.Types.DISTINCT =&amp;gt; null
case java.sql.Types.DOUBLE =&amp;gt; DoubleType
case java.sql.Types.FLOAT =&amp;gt; FloatType
case java.sql.Types.INTEGER =&amp;gt; if (signed) { IntegerType } else { LongType }
case java.sql.Types.JAVA_OBJECT =&amp;gt; null
case java.sql.Types.LONGNVARCHAR =&amp;gt; StringType
case java.sql.Types.LONGVARBINARY =&amp;gt; BinaryType
case java.sql.Types.LONGVARCHAR =&amp;gt; StringType
case java.sql.Types.NCHAR =&amp;gt; StringType
case java.sql.Types.NCLOB =&amp;gt; StringType
case java.sql.Types.NULL =&amp;gt; NullType
case java.sql.Types.NUMERIC
if precision != 0 || scale != 0 =&amp;gt; createDecimalType(precision, scale)
case java.sql.Types.NUMERIC =&amp;gt; DecimalType.SYSTEM_DEFAULT
case java.sql.Types.NVARCHAR =&amp;gt; StringType
case java.sql.Types.OTHER =&amp;gt; null
case java.sql.Types.REAL =&amp;gt; DoubleType
case java.sql.Types.REF =&amp;gt; StringType
case java.sql.Types.REF_CURSOR =&amp;gt; null
case java.sql.Types.ROWID =&amp;gt; LongType
case java.sql.Types.SMALLINT =&amp;gt; IntegerType
case java.sql.Types.SQLXML =&amp;gt; StringType
case java.sql.Types.STRUCT =&amp;gt; StringType
case java.sql.Types.TIME =&amp;gt; TimestampType
case java.sql.Types.TIME_WITH_TIMEZONE =&amp;gt; null
case java.sql.Types.TIMESTAMP =&amp;gt; TimestampType
case java.sql.Types.TIMESTAMP_WITH_TIMEZONE =&amp;gt; null
case java.sql.Types.TINYINT =&amp;gt; IntegerType
case java.sql.Types.VARBINARY =&amp;gt; BinaryType
case java.sql.Types.VARCHAR =&amp;gt; StringType
case _ =&amp;gt;
throw new SQLException("Unrecognized SQL type " + sqlType)
// scalastyle:on
}

if (answer == null) {
throw new SQLException("Unsupported type " + sqlType)
}
answer
}

/**
* Helper method to create DecimalType with proper bounds checking
* This replaces DecimalType.bounded which may not be accessible
*/
private def createDecimalType(precision: Int, scale: Int): DecimalType = {
// Ensure precision and scale are within valid bounds
val validPrecision = math.min(math.max(precision, 1), DecimalType.MAX_PRECISION)
val validScale = math.min(math.max(scale, 0), validPrecision)

try {
// Try the standard constructor first
DecimalType(validPrecision, validScale)
} catch {
case _: Exception =&amp;gt;
// Fallback to system default if constructor fails
DecimalType.SYSTEM_DEFAULT
}
}
}&lt;/LI-CODE&gt;
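&lt;P&gt;A hypothetical call site for the helper above, assuming an open JDBC connection; the connection URL and table name are placeholders, not taken from the connector source:&lt;/P&gt;

```scala
import java.sql.DriverManager

import org.apache.spark.sql.jdbc.JdbcDialects

// Hypothetical usage sketch: `url` and the table name below are placeholders
// for your SQL Server connection details.
val url = "jdbc:sqlserver://myserver:1433;databaseName=mydb"
val conn = DriverManager.getConnection(url)
val stmt = conn.createStatement()
// WHERE 1 = 0 returns no rows but still exposes the result-set metadata,
// which is all getSchema needs.
val rs = stmt.executeQuery("SELECT * FROM dbo.my_table WHERE 1 = 0")
val tableCols = BulkCopyJdbcUtils.getSchema(rs, JdbcDialects.get(url))
```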
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 19 Aug 2025 19:24:10 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/issue-nosuchmethoderror-in-spark-job-while-upgrading-to/m-p/128909#M48370</guid>
      <dc:creator>ameerafi</dc:creator>
      <dc:date>2025-08-19T19:24:10Z</dc:date>
    </item>
  </channel>
</rss>

