
Change to the StreamingWrite API in DBR 13.1 may lead to incompatibility with Spark 3.4

lpf
New Contributor

I'm using the StarRocks Connector [2] to ingest data into StarRocks on Databricks Runtime 13.1 (powered by Spark 3.4.0). The connector runs fine on community Spark 3.4 but fails on DBR. The reason is the following (the full stack trace is attached):

java.lang.IncompatibleClassChangeError: Conflicting default methods: org/apache/spark/sql/connector/write/BatchWrite.useCommitCoordinator org/apache/spark/sql/connector/write/streaming/StreamingWrite.useCommitCoordinator
at com.starrocks.connector.spark.sql.write.StarRocksWrite.useCommitCoordinator(StarRocksWrite.java)
at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2(WriteToDataSourceV2Exec.scala:431)
at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2$(WriteToDataSourceV2Exec.scala:416)

StarRocksWrite implements both the `BatchWrite` and `StreamingWrite` interfaces and does not override the default method `useCommitCoordinator`, because in Spark 3.4 only `BatchWrite` declares that method. From the DBR 13.1 release note [1], I found that it pulls in an improvement, https://issues.apache.org/jira/browse/SPARK-42968, which also adds a default `useCommitCoordinator` method to `StreamingWrite`, so there will be a default-method conflict if StarRocksWrite does not override the method. That improvement only lands in Spark 3.5, but DBR 13.1 (powered by Spark 3.4.0) introduces it in advance, and that's why the connector works well on community Spark but fails on DBR.
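To make the failure mode concrete, here is a minimal, self-contained Java sketch of the same JVM behavior. The names BatchSide, StreamingSide, BothSides, and Repro are made up for illustration; they are not Spark or connector classes.

// Step 1 (the Spark 3.4 situation): compile all of this together; it builds and runs.
interface BatchSide {
    default boolean useCommitCoordinator() { return true; }
}

interface StreamingSide {
    // No useCommitCoordinator default yet.
}

class BothSides implements BatchSide, StreamingSide {
    // No override needed: only one inherited default exists.
}

public class Repro {
    public static void main(String[] args) {
        System.out.println(new BothSides().useCommitCoordinator()); // prints true
    }
}

// Step 2 (the DBR 13.1 situation): add
//     default boolean useCommitCoordinator() { return true; }
// to StreamingSide, recompile only the interfaces, and rerun with the old
// BothSides.class. The call above now fails at run time with:
//     java.lang.IncompatibleClassChangeError: Conflicting default methods:
//     BatchSide.useCommitCoordinator StreamingSide.useCommitCoordinator

Note that recompiling BothSides at step 2 would fail at compile time instead ("inherits unrelated defaults"); the runtime error appears precisely because the connector was compiled against the old interface and loaded against the new one.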
Is this a compatibility issue between community Spark and DBR? What is the right way to fix it?

[1] DBR 13.1 release note 

[2] StarRocks Connector


1 REPLY

Kaniz_Fatma
Community Manager

Hi @lpf, based on the information provided, there appears to be a compatibility issue between the StarRocks Connector and Databricks Runtime 13.1 (powered by Spark 3.4.0). The problem arises because the StarRocksWrite class implements both the BatchWrite and StreamingWrite interfaces but does not override the default method useCommitCoordinator. That worked while only BatchWrite declared the method, but DBR 13.1 backports SPARK-42968, which adds a default useCommitCoordinator to the StreamingWrite interface as well, so the class now inherits conflicting defaults.

To fix this compatibility issue, you can try the following steps:

1. Upgrade to a newer Databricks Runtime together with a StarRocks Connector build that targets Spark 3.5. Since the change that adds useCommitCoordinator to StreamingWrite lands in Spark 3.5, a DBR based on Spark 3.5 or higher paired with a connector compiled against that API avoids the mismatch. Note that upgrading the runtime alone does not help: a connector that never overrides the method hits the same conflict on stock Spark 3.5.

2. Alternatively, you can modify the StarRocksWrite class to override the useCommitCoordinator method explicitly. Because the class inherits conflicting defaults from BatchWrite and StreamingWrite, a single override in the implementing class resolves the conflict and makes the connector compatible with DBR 13.1, as sketched below.
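For reference, here is a sketch of what that override could look like. This is illustrative only, not the connector's actual code; everything except useCommitCoordinator is stubbed so the class compiles on its own.

import org.apache.spark.sql.connector.write.BatchWrite;
import org.apache.spark.sql.connector.write.DataWriterFactory;
import org.apache.spark.sql.connector.write.PhysicalWriteInfo;
import org.apache.spark.sql.connector.write.WriterCommitMessage;
import org.apache.spark.sql.connector.write.streaming.StreamingDataWriterFactory;
import org.apache.spark.sql.connector.write.streaming.StreamingWrite;

public class StarRocksWriteSketch implements BatchWrite, StreamingWrite {

    // The explicit override resolves the conflicting inherited defaults.
    // Java requires picking one via an interface-qualified super call;
    // BatchWrite.super keeps the pre-existing Spark 3.4 behavior.
    @Override
    public boolean useCommitCoordinator() {
        return BatchWrite.super.useCommitCoordinator();
    }

    // Stubs so this sketch compiles; the real connector has its own logic.
    @Override
    public DataWriterFactory createBatchWriterFactory(PhysicalWriteInfo info) {
        throw new UnsupportedOperationException("stub");
    }

    @Override
    public StreamingDataWriterFactory createStreamingWriterFactory(PhysicalWriteInfo info) {
        throw new UnsupportedOperationException("stub");
    }

    @Override
    public void commit(WriterCommitMessage[] messages) { }

    @Override
    public void abort(WriterCommitMessage[] messages) { }

    @Override
    public void commit(long epochId, WriterCommitMessage[] messages) { }

    @Override
    public void abort(long epochId, WriterCommitMessage[] messages) { }
}

The one-line override is backward compatible: it compiles against Spark 3.4 (where only BatchWrite declares the method) and against Spark 3.5 or DBR 13.1 (where both interfaces do).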

Please note that the exact steps to upgrade Databricks Runtime or modify the StarRocksWrite class may depend on your environment's setup and configuration.
