<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Is there a Sample Java Program using Databricks Connect Library to query a table In the Free Edi in Warehousing &amp; Analytics</title>
    <link>https://community.databricks.com/t5/warehousing-analytics/is-there-a-sample-java-program-using-databricks-connect-library/m-p/153725#M2553</link>
    <description>Re: Is there a Sample Java Program using Databricks Connect Library to query a table In the Free Edi in Warehousing &amp; Analytics</description>
    <pubDate>Wed, 08 Apr 2026 11:04:19 GMT</pubDate>
    <dc:creator>anuj_lathi</dc:creator>
    <dc:date>2026-04-08T11:04:19Z</dc:date>
    <item>
      <title>Is there a Sample Java Program using Databricks Connect Library to query a table In the Free Editio?</title>
      <link>https://community.databricks.com/t5/warehousing-analytics/is-there-a-sample-java-program-using-databricks-connect-library/m-p/153650#M2551</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp; I was wondering if there was sample code indicating how a Java program might leverage Databricks Connect to query a table in the Free Edition of Databricks?&lt;BR /&gt;&lt;BR /&gt;&amp;nbsp; &amp;nbsp;I would like to use Connect as I am trying to avoid JDBC and its overhead, and thought I might do better by creating DataFrames and then leveraging Connect to write them to Databricks as parquet-based files.&amp;nbsp; I note that Databricks Connect claims to support Java in some places, but the documentation focuses on... Python, R, and Scala.&lt;BR /&gt;&lt;BR /&gt;&lt;A href="https://docs.databricks.com/aws/en/dev-tools/databricks-connect/" target="_blank"&gt;https://docs.databricks.com/aws/en/dev-tools/databricks-connect/&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&amp;nbsp; I saw there used to be a standalone... which is what I believe I wanted, but it looks like it is to be deprecated.&lt;BR /&gt;&amp;nbsp; I am new to Databricks and its concepts, but am familiar with Iceberg (in which I'd simply use the Iceberg Java APIs, leveraging a file appender and then the Catalog API to register my parquet files with the manifest).&amp;nbsp; &amp;nbsp;What is the equivalent here to write out parquet directly in parallel and then register it?&amp;nbsp; (presumably leveraging their Spark compute to do it)&lt;/P&gt;</description>
      <pubDate>Tue, 07 Apr 2026 19:24:11 GMT</pubDate>
      <guid>https://community.databricks.com/t5/warehousing-analytics/is-there-a-sample-java-program-using-databricks-connect-library/m-p/153650#M2551</guid>
      <dc:creator>ShawnRR</dc:creator>
      <dc:date>2026-04-07T19:24:11Z</dc:date>
    </item>
    <item>
      <title>Re: Is there a Sample Java Program using Databricks Connect Library to query a table In the Free Edi</title>
      <link>https://community.databricks.com/t5/warehousing-analytics/is-there-a-sample-java-program-using-databricks-connect-library/m-p/153725#M2553</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Hi, and welcome to Databricks! Unfortunately, &lt;/SPAN&gt;&lt;STRONG&gt;Databricks Connect v2 (DBR 13.3+) does not support Java&lt;/STRONG&gt;&lt;SPAN&gt;; it only supports Python, Scala, and R. The legacy v1 did support Java, but it has been deprecated and has reached end of support.&lt;/SPAN&gt;&lt;/P&gt;</description>
&lt;P&gt;&lt;SPAN&gt;That said, here are your options as a Java developer:&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Option 1: Use Scala with Databricks Connect (JVM interop)&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN&gt;Since Scala runs on the JVM, you can call the Databricks Connect Scala APIs from Java. This gives you full DataFrame read/write support:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;// Scala — callable from Java via JVM interop&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;import com.databricks.connect.DatabricksSession&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;import org.apache.spark.sql.types._&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;import org.apache.spark.sql.Row&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;val spark = DatabricksSession.builder().getOrCreate()&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;// Read a table and show a few rows&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;val df = spark.read.table("samples.nyctaxi.trips")&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;df.limit(5).show()&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;// Create and write your own DataFrame&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;val schema = StructType(Seq(&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;StructField("id", IntegerType, false),&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;StructField("name", StringType, false)&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;))&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;// Spark Connect exposes no SparkContext, so build from a java.util.List instead of an RDD&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;val data = java.util.Arrays.asList(Row(1, "Alice"), Row(2, "Bob"))&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;val df2 = spark.createDataFrame(data, schema)&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;df2.write.saveAsTable("my_catalog.my_schema.my_table")&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
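&lt;P&gt;&lt;SPAN&gt;For reference, here is roughly what the same read looks like written in Java itself. This is a minimal sketch: it assumes the Scala object's static forwarder methods (so DatabricksSession.builder() is callable from Java, the usual case for Scala objects), and otherwise uses only the standard Spark Java APIs:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;// Java: a sketch of the same query via JVM interop&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;import com.databricks.connect.DatabricksSession;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;import org.apache.spark.sql.Dataset;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;import org.apache.spark.sql.Row;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;import org.apache.spark.sql.SparkSession;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;public class QueryTable {&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;public static void main(String[] args) {&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;// Picks up host/token/cluster from ~/.databrickscfg or environment variables&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;SparkSession spark = DatabricksSession.builder().getOrCreate();&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Dataset&amp;lt;Row&amp;gt; df = spark.read().table("samples.nyctaxi.trips");&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;df.limit(5).show();&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;}&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;}&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;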
&lt;P&gt;&lt;SPAN&gt;Add the Maven dependency:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;lt;dependency&amp;gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;lt;groupId&amp;gt;com.databricks&amp;lt;/groupId&amp;gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;lt;artifactId&amp;gt;databricks-connect&amp;lt;/artifactId&amp;gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;lt;version&amp;gt;15.4.0&amp;lt;/version&amp;gt; &amp;lt;!-- match your DBR version --&amp;gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;lt;/dependency&amp;gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;See: &lt;/SPAN&gt;&lt;A href="https://docs.databricks.com/aws/en/dev-tools/databricks-connect/scala/examples" target="_blank"&gt;&lt;SPAN&gt;Databricks Connect Scala Examples&lt;/SPAN&gt;&lt;/A&gt;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Option 2: Databricks SDK for Java + SQL (Pure Java, no Spark dependency)&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN&gt;If you want to stay in pure Java, the &lt;/SPAN&gt;&lt;A href="https://docs.databricks.com/aws/en/dev-tools/sdk-java" target="_blank"&gt;&lt;SPAN&gt;Databricks SDK for Java&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt; lets you:&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;STRONG&gt;Upload parquet files&lt;/STRONG&gt;&lt;SPAN&gt; to Unity Catalog Volumes via the Files API&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;STRONG&gt;Execute SQL&lt;/STRONG&gt;&lt;SPAN&gt; via the Statement Execution API to register/query tables&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN&gt;This is closer to the Iceberg pattern you described (write files, then register):&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;import com.databricks.sdk.WorkspaceClient;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;import java.io.FileInputStream;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;import java.io.InputStream;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;WorkspaceClient w = new WorkspaceClient();&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;// Upload a local parquet file to a Unity Catalog Volume&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;InputStream inputStream = new FileInputStream("data.parquet");&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;w.files().upload("/Volumes/my_catalog/my_schema/my_volume/data.parquet", inputStream);&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;// Then run SQL to create a table from the file&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;// (via Statement Execution API or JDBC for the SQL part)&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
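&lt;P&gt;&lt;SPAN&gt;A sketch of that registration step using the SDK's Statement Execution API is below. Treat the warehouse ID and the table/volume names as placeholders, and check the SDK javadoc for the exact request/response types in your SDK version:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;// Java: register the uploaded file as a table (reusing the WorkspaceClient w from above)&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;import com.databricks.sdk.service.sql.ExecuteStatementRequest;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;var resp = w.statementExecution().executeStatement(&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;new ExecuteStatementRequest()&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;.setWarehouseId("&amp;lt;your-sql-warehouse-id&amp;gt;")&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;.setStatement("CREATE TABLE IF NOT EXISTS my_catalog.my_schema.my_table AS "&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;+ "SELECT * FROM parquet.`/Volumes/my_catalog/my_schema/my_volume/data.parquet`"));&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;System.out.println(resp.getStatus().getState());&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;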
&lt;P&gt;&lt;SPAN&gt;Maven dependency:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;lt;dependency&amp;gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;lt;groupId&amp;gt;com.databricks&amp;lt;/groupId&amp;gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;lt;artifactId&amp;gt;databricks-sdk-java&amp;lt;/artifactId&amp;gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;lt;version&amp;gt;0.2.0&amp;lt;/version&amp;gt; &amp;lt;!-- use latest from Maven Central --&amp;gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;lt;/dependency&amp;gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Option 3: JDBC with Bulk Ingestion&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN&gt;I know you want to avoid JDBC, but it's worth noting that the Databricks JDBC driver supports &lt;/SPAN&gt;&lt;STRONG&gt;Arrow-based bulk ingestion&lt;/STRONG&gt;&lt;SPAN&gt;, which significantly reduces overhead compared to traditional row-by-row JDBC inserts. It may be faster than you expect.&lt;/SPAN&gt;&lt;/P&gt;
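&lt;P&gt;&lt;SPAN&gt;If you do go this route, a plain query over JDBC looks like the sketch below; the host, HTTP path, and token are placeholders you would take from your SQL warehouse's connection details:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;// Java: query a SQL warehouse over JDBC (placeholders marked with &amp;lt;...&amp;gt;)&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;import java.sql.Connection;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;import java.sql.DriverManager;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;import java.sql.ResultSet;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;import java.sql.Statement;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;String url = "jdbc:databricks://&amp;lt;workspace-host&amp;gt;:443/default;"&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;+ "transportMode=http;ssl=1;AuthMech=3;"&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;+ "httpPath=/sql/1.0/warehouses/&amp;lt;warehouse-id&amp;gt;;"&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;+ "UID=token;PWD=&amp;lt;personal-access-token&amp;gt;";&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;try (Connection conn = DriverManager.getConnection(url);&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Statement stmt = conn.createStatement();&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;ResultSet rs = stmt.executeQuery("SELECT * FROM samples.nyctaxi.trips LIMIT 5")) {&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;while (rs.next()) {&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;System.out.println(rs.getString(1));&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;}&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;}&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;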
&lt;H3&gt;&lt;STRONG&gt;A Note on Free Edition&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN&gt;Databricks Connect requires a cluster or serverless compute with Spark Connect enabled. The Free Edition has limited compute options, so Databricks Connect may not work there. The SDK + SQL approach (Option 2) and JDBC (Option 3) are more likely to work on the free tier.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Docs:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;A href="https://docs.databricks.com/aws/en/dev-tools/databricks-connect/" target="_blank"&gt;&lt;SPAN&gt;Databricks Connect&lt;/SPAN&gt;&lt;/A&gt;&lt;/LI&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;A href="https://docs.databricks.com/aws/en/dev-tools/databricks-connect/scala/examples" target="_blank"&gt;&lt;SPAN&gt;Databricks Connect Scala Examples&lt;/SPAN&gt;&lt;/A&gt;&lt;/LI&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;A href="https://docs.databricks.com/aws/en/dev-tools/sdk-java" target="_blank"&gt;&lt;SPAN&gt;Databricks SDK for Java&lt;/SPAN&gt;&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN&gt;Hope that helps point you in the right direction!&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 08 Apr 2026 11:04:19 GMT</pubDate>
      <guid>https://community.databricks.com/t5/warehousing-analytics/is-there-a-sample-java-program-using-databricks-connect-library/m-p/153725#M2553</guid>
      <dc:creator>anuj_lathi</dc:creator>
      <dc:date>2026-04-08T11:04:19Z</dc:date>
    </item>
  </channel>
</rss>

