Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Is there a Databricks Spark connector for Java?

I-am-Biplab
New Contributor II

Is there a Databricks Spark connector for Java, like the one Snowflake provides? (Reference: Snowflake Spark connector - https://docs.snowflake.com/en/user-guide/spark-connector-use)

Essentially, the use case is to transfer data from S3 to a Databricks table. In the current implementation, I am using Spark to read data from S3 and JDBC to write data to Databricks. But I want to use Spark instead to write data to Databricks.

4 REPLIES

BigRoux
Databricks Employee
Databricks does not offer a Java-specific Spark connector comparable to the Snowflake Spark connector you linked. However, Databricks supports writing data directly to Databricks tables using Spark APIs. For your use case of transferring data from S3 to a Databricks table, you can do this entirely with Spark, without relying on JDBC.
Here’s a streamlined approach to replace the JDBC write operation with Spark-based writes:

1. Reading data from S3: use the Spark read function with the appropriate format for your data (e.g., csv, parquet, etc.) and specify the S3 path:

```scala
val data = spark.read.format("parquet").load("s3://bucket-name/folder-name")
```

Ensure you configure your AWS credentials for accessing S3.

2. Writing data to a Databricks table: use the Delta format or another supported format to write data directly to a Databricks table:

```scala
data.write.format("delta").save("/mnt/databricks-table-path")
```

If the table is pre-defined, you can use the saveAsTable method instead:

```scala
data.write.format("delta").mode("overwrite").saveAsTable("database.table_name")
```
This approach eliminates the need for JDBC and integrates seamlessly with Databricks' native capabilities. If Java compatibility is a requirement, these same Spark APIs can be invoked through the Java bindings provided by Apache Spark: DataFrameReader and DataFrameWriter in Java mirror their Scala equivalents.
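For reference, here is a minimal Java sketch of the same flow using Spark's Java bindings; the bucket, mount path, and table name are placeholders, and AWS credentials for S3 still need to be configured on the cluster:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class S3ToDeltaJob {
    public static void main(String[] args) {
        // On a Databricks cluster a SparkSession already exists; getOrCreate() reuses it.
        SparkSession spark = SparkSession.builder()
                .appName("s3-to-delta")
                .getOrCreate();

        // Read Parquet files from S3 (placeholder path; switch the format to csv/json as needed).
        Dataset<Row> data = spark.read()
                .format("parquet")
                .load("s3://bucket-name/folder-name");

        // Write to a Delta path, mirroring the Scala example above...
        data.write()
                .format("delta")
                .mode("overwrite")
                .save("/mnt/databricks-table-path");

        // ...or write into a pre-defined table by name.
        data.write()
                .format("delta")
                .mode("overwrite")
                .saveAsTable("database.table_name");
    }
}
```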
 
Hope this helps, Lou.

I-am-Biplab
New Contributor II

Thanks @BigRoux 

Just to clarify my use case: I want to run the code locally, but the data insertion should happen in a remote Databricks workspace. I tried JDBC, but its write performance is poor even after tuning the batch size and the number of partitions.

Is there any alternative for my use case? Also, I am using Java for the current implementation.

BigRoux
Databricks Employee

We have native connectivity with VSCode. Check it out here: https://docs.databricks.com/aws/en/dev-tools/vscode-ext/

You may also want to dig into Databricks Connect, which lets code running on your local machine execute against a remote Databricks cluster. Check it out here: https://docs.databricks.com/aws/en/release-notes/dbconnect/
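As a rough idea of how that could look from local Java code, here is a hedged sketch. It assumes the Databricks Connect JVM artifact's com.databricks.connect.DatabricksSession is called from Java (per the Databricks Connect docs linked above) and that connection details come from environment variables or a Databricks config profile; the S3 path and table name are placeholders:

```java
import com.databricks.connect.DatabricksSession;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class LocalWriteToWorkspace {
    public static void main(String[] args) {
        // Databricks Connect builds a SparkSession whose driver runs locally while
        // execution happens on a remote Databricks cluster. Host, token, and cluster id
        // are picked up from environment variables or a .databrickscfg profile.
        SparkSession spark = DatabricksSession.builder().getOrCreate();

        // Read from S3 and land the data in a table in the remote workspace,
        // replacing the JDBC-based write.
        Dataset<Row> data = spark.read()
                .format("parquet")
                .load("s3://bucket-name/folder-name");

        data.write()
                .format("delta")
                .mode("append")
                .saveAsTable("my_catalog.my_schema.my_table");
    }
}
```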

sandeepmankikar
Contributor

You don't need a separate Spark connector; Databricks natively supports writing to Delta tables using standard Spark APIs. Instead of going through JDBC, you can use df.write().format("delta") to write data from S3 to Databricks tables efficiently.
