
NoClassDefFoundError: scala/Product$class

YSDPrasad
New Contributor III

// Legacy Azure SQL DB Spark connector (azure-sqldb-spark)
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._
import com.microsoft.azure.sqldb.spark.query._

val query = "Truncate table tablename"

// Connection details from an Azure Key Vault-backed secret scope
val config = Config(Map(
  "url"          -> dbutils.secrets.get(scope = "azurekeyvault-scope", key = "DW-URL"),
  "databaseName" -> dbutils.secrets.get(scope = "azurekeyvault-scope", key = "DW-DBName"),
  "user"         -> dbutils.secrets.get(scope = "azurekeyvault-scope", key = "DW-Username"),
  "password"     -> dbutils.secrets.get(scope = "azurekeyvault-scope", key = "DW-Password"),
  "queryCustom"  -> query
))

// Run the custom query against the database
sqlContext.sqlDBQuery(config)

While executing the above command on a cluster with Databricks Runtime 12.0, I am getting a NoClassDefFoundError: scala/Product$class.

1 ACCEPTED SOLUTION

YSDPrasad
New Contributor III

Yes @Suteja, adding the additional JAR files resolved this issue, but I then faced errors while running read and write operations against the SQL DB. I found different code to execute the truncate command, and now it is working fine.
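The replacement code isn't shown in the thread. One common pattern that avoids the legacy connector entirely is issuing the TRUNCATE over plain JDBC with the Microsoft SQL Server driver bundled in Databricks Runtime. A minimal sketch, assuming the secret scope and keys from the question and that DW-URL holds the server hostname:

import java.sql.DriverManager

// Connection details from the Azure Key Vault-backed secret scope used in the question
val serverName   = dbutils.secrets.get(scope = "azurekeyvault-scope", key = "DW-URL")
val databaseName = dbutils.secrets.get(scope = "azurekeyvault-scope", key = "DW-DBName")
val user         = dbutils.secrets.get(scope = "azurekeyvault-scope", key = "DW-Username")
val password     = dbutils.secrets.get(scope = "azurekeyvault-scope", key = "DW-Password")

// Standard SQL Server JDBC URL; adjust if DW-URL already contains a full JDBC URL
val jdbcUrl = s"jdbc:sqlserver://$serverName:1433;database=$databaseName"

val conn = DriverManager.getConnection(jdbcUrl, user, password)
try {
  val stmt = conn.createStatement()
  try stmt.executeUpdate("TRUNCATE TABLE tablename") // placeholder table name from the question
  finally stmt.close()
} finally conn.close()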


REPLIES

Anonymous
Not applicable

@Someswara Durga Prasad Yaralgadda:

A NoClassDefFoundError occurs when a class that was available at compile time is not available at runtime. This can happen for a few reasons, including a missing dependency or an incompatible version of a dependency.

In your case, the error points at the Azure SQL DB Spark connector library. scala/Product$class is a Scala 2.11 trait implementation class that no longer exists in Scala 2.12, and Databricks Runtime 12.0 runs Scala 2.12, so a connector JAR built against Scala 2.11 fails in exactly this way. You can try the following steps to resolve the issue:

  1. Check that the necessary Azure SQL DB Spark connector library is installed on your cluster. In Databricks you can attach it explicitly as a cluster library (a Maven coordinate or an uploaded JAR) rather than relying on it being present in the runtime.
  2. Make sure that the version of the Azure SQL DB Spark connector library is compatible with your cluster's runtime version, including its Scala version; a quick check is shown after this list.
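As a quick sanity check (a minimal sketch, not from the original thread), the runtime's Spark and Scala versions can be printed from a Scala notebook cell:

println(spark.version)                        // Spark version, e.g. 3.3.x on Databricks Runtime 12.0
println(scala.util.Properties.versionString)  // Scala version, e.g. "version 2.12.x"

The legacy azure-sqldb-spark connector is published only for Scala 2.11, so on a Scala 2.12 runtime it fails with exactly this NoClassDefFoundError.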


Lraghav
New Contributor II

Hi @YSDPrasad, could you please let me know which additional JAR files need to be installed to resolve this?

Kaniz
Community Manager

Hi @Someswara Durga Prasad Yaralgadda, we haven't heard from you since the last response from @Suteja Kanuri, and I was checking back to see if her suggestions helped you.

Otherwise, if you have found a solution, please share it with the community, as it can be helpful to others.

Also, please don't forget to click the "Select As Best" button whenever the information provided helps resolve your question.
