
NoClassDefFoundError: scala/Product$class

YSDPrasad
New Contributor III

// Truncate the target table through the azure-sqldb-spark connector
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._
import com.microsoft.azure.sqldb.spark.query._

val query = "Truncate table tablename"

val config = Config(Map(
  "url"          -> dbutils.secrets.get(scope = "azurekeyvault-scope", key = "DW-URL"),
  "databaseName" -> dbutils.secrets.get(scope = "azurekeyvault-scope", key = "DW-DBName"),
  "user"         -> dbutils.secrets.get(scope = "azurekeyvault-scope", key = "DW-Username"),
  "password"     -> dbutils.secrets.get(scope = "azurekeyvault-scope", key = "DW-Password"),
  "queryCustom"  -> query
))

sqlContext.sqlDBQuery(config)

While executing the above code on a cluster with Databricks Runtime 12.0, I am getting a NoClassDefFoundError: scala/Product$class.

1 ACCEPTED SOLUTION


YSDPrasad
New Contributor III

Yes @Suteja, adding the additional JAR files resolved this issue, but I then hit an error while running read and write operations against the SQL DB through the connector. I found different code to execute the truncate command, and now it is working fine.
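The post does not show the "different code" that was used for the truncate. One common alternative is to issue the TRUNCATE over plain JDBC instead of the legacy connector; the snippet below is only an illustrative sketch, not necessarily the code the poster ended up with, and it assumes the Microsoft SQL Server JDBC driver bundled with the Databricks runtime, the same secret scope and keys as in the question, and the default port 1433.

// Hypothetical alternative: run the TRUNCATE over plain JDBC, bypassing azure-sqldb-spark.
// Assumes the SQL Server JDBC driver shipped with the Databricks runtime and the same
// Key Vault-backed secrets as in the question; "tablename" is the same placeholder as above.
import java.sql.DriverManager

val server   = dbutils.secrets.get(scope = "azurekeyvault-scope", key = "DW-URL")      // the connector's "url" is typically just the server name
val database = dbutils.secrets.get(scope = "azurekeyvault-scope", key = "DW-DBName")
val user     = dbutils.secrets.get(scope = "azurekeyvault-scope", key = "DW-Username")
val password = dbutils.secrets.get(scope = "azurekeyvault-scope", key = "DW-Password")

val jdbcUrl = s"jdbc:sqlserver://$server:1433;databaseName=$database"

val conn = DriverManager.getConnection(jdbcUrl, user, password)
try {
  conn.createStatement().execute("TRUNCATE TABLE tablename")
} finally {
  conn.close()
}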


3 REPLIES

Anonymous
Not applicable

@Someswara Durga Prasad Yaralgadda:

A NoClassDefFoundError occurs when a class that was available at compile time is not available at runtime. This can happen for a few reasons, including a missing dependency or an incompatible version of a dependency.

In your case, the error points to the Azure SQL DB Spark connector library: scala.Product$class exists only in the Scala 2.11 standard library, so a connector JAR built for Scala 2.11 cannot load on the Scala 2.12 runtime that Databricks Runtime 12.0 uses. You can try the following steps to resolve the issue:

  1. Check that the necessary Azure SQL DB Spark connector library is actually available in your cluster's runtime environment, for example by installing the connector JAR on the cluster as a library and attaching it.
  2. Make sure that the version of the Azure SQL DB Spark connector library is compatible with your cluster's Spark and Scala versions (a quick version check is sketched below).
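A quick way to confirm the versions mentioned in point 2 is to print them from a notebook cell attached to the cluster; the following is a small sketch assuming a Scala notebook.

// Print the cluster's Spark and Scala versions to compare against the connector build.
// On Databricks Runtime 12.0 this should report Spark 3.3.x and Scala 2.12.x.
println(s"Spark version: ${spark.version}")
println(s"Scala version: ${util.Properties.versionString}")

If the connector was built against a different Scala major version, the class will be missing at runtime exactly as in the error above.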


Lraghav
New Contributor II

Hi @YSDPrasad, could you please let me know which additional JAR files need to be installed to resolve this?
