Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

java.lang.NoClassDefFoundError: org/apache/spark/internal/Logging$class error when writing a DataFrame to a SQL DB in Azure

Sha_1890
New Contributor III

I have to write data extracted from XML to a SQL DB. I am using a DataFrame for the transformation and trying to load it into the DB.

I have installed these libraries,

com.databricks:spark-xml_2.12:0.15.0

com.microsoft.azure:spark-mssql-connector_2.11_2.4:1.0.2

and my cell has the code below:

df_Driver.write \
    .format("com.microsoft.sqlserver.jdbc.spark") \
    .option("url", jdbcUrl) \
    .mode("overwrite") \
    .option("dbtable", "table_name") \
    .option("user", jdbcUsername) \
    .option("password", jdbcPassword) \
    .save()

but it throws the following error:

java.lang.NoClassDefFoundError: org/apache/spark/internal/Logging$class

Py4JJavaError: An error occurred while calling o1055.save.

: java.lang.NoClassDefFoundError: org/apache/spark/internal/Logging$class

at com.microsoft.sqlserver.jdbc.spark.DefaultSource.<init>(DefaultSource.scala:32)

at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)

at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

at java.lang.reflect.Constructor.newInstance(Constructor.java:423)

at java.lang.Class.newInstance(Class.java:442)

at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSourceV2(DataSource.scala:809)

at org.apache.spark.sql.DataFrameWriter.lookupV2Provider(DataFrameWriter.scala:983)

at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:293)

at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:258)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:498)

at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)

at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380)

at py4j.Gateway.invoke(Gateway.java:295)

at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)

at py4j.commands.CallCommand.execute(CallCommand.java:79)

at py4j.GatewayConnection.run(GatewayConnection.java:251)

at java.lang.Thread.run(Thread.java:748)

Caused by: java.lang.ClassNotFoundException: org.apache.spark.internal.Logging$class

at java.net.URLClassLoader.findClass(URLClassLoader.java:382)

at java.lang.ClassLoader.loadClass(ClassLoader.java:419)

at com.databricks.backend.daemon.driver.ClassLoaders$LibraryClassLoader.loadClass(ClassLoaders.scala:151)

at java.lang.ClassLoader.loadClass(ClassLoader.java:352)

... 21 more

Please help me with this issue. TIA

5 REPLIES

Hubert-Dudek
Esteemed Contributor III

To write to Azure SQL you don't need to install the driver; just use:

jdbcUrl = "jdbc:sqlserver://{0}:{1};database={2};user={3};password={4}".format(jdbcHostname, jdbcPort, jdbcDatabase, username, password)
 
(df
.write
.jdbc(jdbcUrl, "table_name"))
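
For completeness, the same built-in JDBC write can also take the credentials and the write mode as explicit arguments instead of embedding them in the URL. A minimal sketch, assuming the variables from the original post (jdbcHostname, jdbcPort, jdbcDatabase, jdbcUsername, jdbcPassword) are already defined in the notebook:

jdbcUrl = "jdbc:sqlserver://{0}:{1};database={2}".format(jdbcHostname, jdbcPort, jdbcDatabase)
connectionProperties = {"user": jdbcUsername, "password": jdbcPassword}

# mode and connection properties passed explicitly to DataFrameWriter.jdbc
(df
 .write
 .jdbc(jdbcUrl, "table_name", mode="overwrite", properties=connectionProperties))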

Thanks, it's working now. But I have another issue: the data in the DataFrame is saved as a new table instead of being inserted into the existing table in the DB.

df.write \
  .format("jdbc") \
  .option("url", jdbcUrl) \
  .mode("overwrite") \
  .option("dbtable", "Driver_DVCSD") \
  .option("user", jdbcUsername) \
  .option("password", jdbcPassword) \
  .save()

The DataFrame holds the XML data, and the table has audit columns created_date and created_by, which are dropped when the data is written from the DataFrame to the table.
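
As a minimal sketch (not from the thread itself): with the JDBC data source, mode("overwrite") drops and recreates the target table from the DataFrame's schema, which would explain why the audit columns disappear. Assuming Driver_DVCSD already exists in the DB, two common ways to keep its definition are mode("append"), or overwrite combined with the truncate option:

# Append rows to the existing table (keeps the table definition and its columns)
(df.write
   .format("jdbc")
   .option("url", jdbcUrl)
   .option("dbtable", "Driver_DVCSD")
   .option("user", jdbcUsername)
   .option("password", jdbcPassword)
   .mode("append")
   .save())

# Or: replace the rows but keep the table definition (TRUNCATE instead of DROP/CREATE)
(df.write
   .format("jdbc")
   .option("url", jdbcUrl)
   .option("dbtable", "Driver_DVCSD")
   .option("user", jdbcUsername)
   .option("password", jdbcPassword)
   .option("truncate", "true")
   .mode("overwrite")
   .save())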

Hubert-Dudek
Esteemed Contributor III

Just add these audit columns as standard columns in the DataFrame using .withColumn.
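
A minimal sketch of that suggestion (the value used for created_by is just a placeholder):

from pyspark.sql import functions as F

# Add the audit columns to the DataFrame before writing so they are not lost
df_with_audit = (df
    .withColumn("created_date", F.current_timestamp())
    .withColumn("created_by", F.lit("adb_notebook")))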

Noopur_Nigam
Databricks Employee

Hi @shafana Roohi Jahubar, could you please check the table name in your write command? If it is the same, overwrite mode should overwrite that table.

Vidula
Honored Contributor

Hi @shafana Roohi Jahubar,

Hope all is well! Just wanted to check in to see if you were able to resolve your issue. If so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help.

We'd love to hear from you.

Thanks!
