<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Databricks JDBC &amp; Remote Write in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/databricks-jdbc-remote-write/m-p/29540#M21263</link>
    <description>&lt;P&gt;Writing to a Delta table from a remote Spark session through the Simba Spark JDBC driver fails with java.sql.SQLFeatureNotSupportedException ([Simba][JDBC](10220) Driver does not support this optional feature), although reads work. Full details in the first item below.&lt;/P&gt;</description>
    <pubDate>Fri, 04 Feb 2022 20:07:41 GMT</pubDate>
    <dc:creator>Optum</dc:creator>
    <dc:date>2022-02-04T20:07:41Z</dc:date>
    <item>
      <title>Databricks JDBC &amp; Remote Write</title>
      <link>https://community.databricks.com/t5/data-engineering/databricks-jdbc-remote-write/m-p/29540#M21263</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I'm trying to write to a Delta table in my Databricks instance from a remote Spark session on a different cluster, using the Simba Spark driver. Reads work, but when I attempt a write, I get the following error:&lt;/P&gt;&lt;P&gt;{&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;df.write.format("jdbc").mode(SaveMode.Append).options(Map(&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;"url" -&amp;gt; "jdbc:spark://adb-&amp;lt;host_id&amp;gt;.azuredatabricks.net:443/default;transportMode=http;ssl=1;httpPath=&amp;lt;http_path&amp;gt;;AuthMech=3;UID=token;PWD=&amp;lt;token&amp;gt;",&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;"dbtable" -&amp;gt; "testtable",&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;"driver" -&amp;gt; "com.simba.spark.jdbc.Driver"&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;)).save()&lt;/P&gt;&lt;P&gt;}&lt;/P&gt;&lt;P&gt;java.sql.SQLFeatureNotSupportedException: [Simba][JDBC](10220) Driver does not support this optional feature.&lt;/P&gt;&lt;P&gt;at com.simba.spark.exceptions.ExceptionConverter.toSQLException(Unknown Source)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;at com.simba.spark.jdbc.common.SPreparedStatement.checkTypeSupported(Unknown Source)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;at com.simba.spark.jdbc.common.SPreparedStatement.setNull(Unknown Source)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.savePartition(JdbcUtils.scala:677)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$saveTable$1(JdbcUtils.scala:856)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$saveTable$1$adapted(JdbcUtils.scala:854)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;at org.apache.spark.rdd.RDD.$anonfun$foreachPartition$2(RDD.scala:1020)&lt;/P&gt;&lt;P&gt;......&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I'm currently connecting to my "Data Science &amp;amp; Engineering" section's cluster, using the JDBC/ODBC details from its Advanced options section. Some guides say to use a SQL endpoint for this, and that might be my problem, but I do not have permission to create one at this time. Some posts, for example on Stack Overflow, suggest the issue is that the Simba Spark driver does not support the autocommit feature, but I'm not sure, and I couldn't find a Spark or driver option to turn it off.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Also, all the documentation for spark.write seems to assume you are operating in a notebook on the Databricks side; I can't find any remote-connection examples that use the driver. Am I missing the documentation page for that?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Remote Spark Instance&lt;/P&gt;&lt;P&gt;-------------------------------&lt;/P&gt;&lt;P&gt;Spark Version: 3.1.1&lt;/P&gt;&lt;P&gt;Scala Version: 2.12.10&lt;/P&gt;&lt;P&gt;Spark Simba JDBC Driver from Databricks: 2.6.22&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Databricks Cluster Settings&lt;/P&gt;&lt;P&gt;---------------------&lt;/P&gt;&lt;P&gt;Cloud System: Azure&lt;/P&gt;&lt;P&gt;Policy: Unrestricted&lt;/P&gt;&lt;P&gt;Cluster Mode: Standard&lt;/P&gt;&lt;P&gt;Autoscaling: Enabled&lt;/P&gt;&lt;P&gt;Databricks Runtime Version: 9.1 LTS (includes Apache Spark 3.1.2, Scala 2.12)&lt;/P&gt;&lt;P&gt;Worker &amp;amp; Driver Type: Standard_DS3_v2&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Please let me know if you need any other information to help me address my issue.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thank you,&lt;/P&gt;&lt;P&gt;Kai&lt;/P&gt;</description>
      <pubDate>Fri, 04 Feb 2022 20:07:41 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/databricks-jdbc-remote-write/m-p/29540#M21263</guid>
      <dc:creator>Optum</dc:creator>
      <dc:date>2022-02-04T20:07:41Z</dc:date>
    </item>
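The stack trace in the post above points at two JDBC calls: SPreparedStatement.setNull, where the exception is actually raised, and the transaction handling in JdbcUtils.savePartition. A rough paraphrase of what Spark's JDBC writer does per partition, reconstructed from the stack trace rather than the actual Spark source, shows where a driver without these features fails:

```scala
// Illustrative sketch of Spark's per-partition JDBC write path, reconstructed
// from the stack trace above. NOT the actual JdbcUtils.savePartition source;
// names (insertSql, sqlTypes) are placeholders for this sketch.
def savePartitionSketch(
    conn: java.sql.Connection,
    insertSql: String,
    rows: Iterator[Seq[Any]],
    sqlTypes: Seq[Int]): Unit = {
  // Spark disables autocommit when the driver reports transaction support.
  conn.setAutoCommit(false)
  val stmt = conn.prepareStatement(insertSql)
  rows.foreach { row =>
    row.zipWithIndex.foreach { case (v, i) =>
      if (v == null)
        stmt.setNull(i + 1, sqlTypes(i)) // Simba driver raises error 10220 here
      else
        stmt.setObject(i + 1, v)
    }
    stmt.addBatch()
  }
  stmt.executeBatch()
  conn.commit() // also a problem for drivers without transaction support
}
```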
    <item>
      <title>Re: Databricks JDBC &amp; Remote Write</title>
      <link>https://community.databricks.com/t5/data-engineering/databricks-jdbc-remote-write/m-p/29542#M21265</link>
      <description>&lt;P&gt;Hi Kaniz,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Any update on a resolution? I am experiencing the same issue: I can read from my Databricks table through the JDBC driver, but I cannot write to it.&lt;/P&gt;</description>
      <pubDate>Thu, 10 Feb 2022 15:42:40 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/databricks-jdbc-remote-write/m-p/29542#M21265</guid>
      <dc:creator>dblevins</dc:creator>
      <dc:date>2022-02-10T15:42:40Z</dc:date>
    </item>
    <item>
      <title>Re: Databricks JDBC &amp; Remote Write</title>
      <link>https://community.databricks.com/t5/data-engineering/databricks-jdbc-remote-write/m-p/29544#M21267</link>
      <description>&lt;P&gt;It's not clear to me how to set autocommit off/false. Is this a setting I'm supposed to make in my Spark configuration when I launch spark-shell or spark-submit? Or is there some standard JDBC config file somewhere where these kinds of settings get turned off? I ask because the only documentation I see for turning autocommit off uses the java.sql.Connection class directly, but I am trying to write with df.write.format("jdbc").options(Map( ... )).save(), which throws an error if you attempt to set an autocommit option. I also do not see an autocommit-related option in the Simba JDBC documentation's list of driver options that can be set via the options(Map( ... )) call. I can't use a Connection object, as that only passes direct SQL commands and won't write out a DataFrame; I could loop over the rows and inject the inserts, but that would be too clunky and slow.&lt;/P&gt;</description>
      <pubDate>Wed, 16 Feb 2022 21:27:25 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/databricks-jdbc-remote-write/m-p/29544#M21267</guid>
      <dc:creator>Optum</dc:creator>
      <dc:date>2022-02-16T21:27:25Z</dc:date>
    </item>
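For reference, a minimal sketch of the Connection-based fallback dismissed above as clunky (looping over rows and injecting inserts). Untested; it assumes the Simba driver is on the classpath, and HOST_ID, HTTP_PATH, and TOKEN stand in for the placeholders in the original post:

```scala
// Sketch of the plain java.sql fallback: issuing INSERTs through a Statement
// avoids the PreparedStatement.setNull and setAutoCommit calls that the
// Simba Spark driver rejects. Slow for anything beyond small tables.
import java.sql.DriverManager

val url = "jdbc:spark://adb-HOST_ID.azuredatabricks.net:443/default;" +
  "transportMode=http;ssl=1;httpPath=HTTP_PATH;AuthMech=3;UID=token;PWD=TOKEN"

val conn = DriverManager.getConnection(url)
try {
  val stmt = conn.createStatement()
  // df is the DataFrame from the original post; collect() is only reasonable
  // for small tables. For larger ones, iterate per partition instead.
  df.collect().foreach { row =>
    // Naive value rendering, illustrative only: real code must handle
    // dates, binary types, and SQL injection properly.
    val values = row.toSeq.map {
      case null      => "NULL"
      case s: String => "'" + s.replace("'", "''") + "'"
      case v         => v.toString
    }.mkString(", ")
    stmt.executeUpdate("INSERT INTO testtable VALUES (" + values + ")")
  }
} finally conn.close()
```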
    <item>
      <title>Re: Databricks JDBC &amp; Remote Write</title>
      <link>https://community.databricks.com/t5/data-engineering/databricks-jdbc-remote-write/m-p/29545#M21268</link>
      <description>&lt;P&gt;@Kai McNeely​&amp;nbsp; Could you provide more details on your client? For example, which version of Spark do you run your code on, and do you provide any extra Spark conf beyond the defaults? I tested with Spark 3.x and the default Spark conf, and it works for me.&lt;/P&gt;</description>
      <pubDate>Thu, 17 Feb 2022 18:17:51 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/databricks-jdbc-remote-write/m-p/29545#M21268</guid>
      <dc:creator>User16752239289</dc:creator>
      <dc:date>2022-02-17T18:17:51Z</dc:date>
    </item>
    <item>
      <title>Re: Databricks JDBC &amp; Remote Write</title>
      <link>https://community.databricks.com/t5/data-engineering/databricks-jdbc-remote-write/m-p/29546#M21269</link>
      <description>&lt;P&gt;I am using Spark 3.1.2 (listed as 3.1.1 in the initial post; the deployment originally listed 3.1.1), initializing with: spark-shell -target:jvm-1.8&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I am bringing in the following packages (some currently used, some not):&lt;/P&gt;&lt;P&gt;--conf "spark.driver.extraClassPath=~/jars/SparkJDBC42.jar" \&lt;/P&gt;&lt;P&gt;--conf "spark.jars.repositories=path/to/repos" \&lt;/P&gt;&lt;P&gt;--packages "com.microsoft.sqlserver:mssql-jdbc:6.4.0.jre8,net.sourceforge.jtds:jtds:1.3.1,com.oracle.database.jdbc:ojdbc8:21.1.0.0,com.databricks:spark-avro_2.10:4.0.0"&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Following are the de-identified settings that come implicitly from my /opt/spark/conf/spark-defaults.conf:&lt;/P&gt;&lt;P&gt;spark.master=k8s://https://kubernetes.default.svc.cluster.local:443&lt;/P&gt;&lt;P&gt;spark.app.name=sp-worker&lt;/P&gt;&lt;P&gt;spark.eventLog.enabled true&lt;/P&gt;&lt;P&gt;spark.eventLog.dir file:///spark/logs&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;spark.kubernetes.driver.limit.cores=2&lt;/P&gt;&lt;P&gt;spark.kubernetes.executor.limit.cores=4&lt;/P&gt;&lt;P&gt;spark.executor.instances=2&lt;/P&gt;&lt;P&gt;spark.executor.memory=4G&lt;/P&gt;&lt;P&gt;spark.kubernetes.executor.limit.cores=4&lt;/P&gt;&lt;P&gt;spark.kubernetes.executor.request.cores=250m&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;spark.kubernetes.driver.label.app=spark-jlab&lt;/P&gt;&lt;P&gt;spark.kubernetes.executor.label.app=spark-jlab&lt;/P&gt;&lt;P&gt;spark.kubernetes.executor.label.deployment=jlab&lt;/P&gt;&lt;P&gt;spark.kubernetes.local.dirs.tmpfs=true&lt;/P&gt;&lt;P&gt;spark.kubernetes.container.image.pullPolicy=IfNotPresent&amp;nbsp;&lt;/P&gt;&lt;P&gt;spark.kubernetes.container.image=docker.someurl.com/path/to/image/repo&lt;/P&gt;&lt;P&gt;# Home for Python 
libraries&lt;/P&gt;&lt;P&gt;spark.kubernetes.driver.volumes.persistentVolumeClaim.home.mount.path=/home&lt;/P&gt;&lt;P&gt;spark.kubernetes.driver.volumes.persistentVolumeClaim.home.mount.readOnly=false&lt;/P&gt;&lt;P&gt;spark.kubernetes.driver.volumes.persistentVolumeClaim.home.options.claimName=jlab&lt;/P&gt;&lt;P&gt;spark.kubernetes.executor.volumes.persistentVolumeClaim.home.mount.path=/home&lt;/P&gt;&lt;P&gt;spark.kubernetes.executor.volumes.persistentVolumeClaim.home.mount.readOnly=false&lt;/P&gt;&lt;P&gt;spark.kubernetes.executor.volumes.persistentVolumeClaim.home.options.claimName=jlab&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;# #Logs&lt;/P&gt;&lt;P&gt;spark.kubernetes.driver.volumes.persistentVolumeClaim.logs.mount.path=/logs&lt;/P&gt;&lt;P&gt;spark.kubernetes.driver.volumes.persistentVolumeClaim.logs.mount.readOnly=false&lt;/P&gt;&lt;P&gt;spark.kubernetes.driver.volumes.persistentVolumeClaim.logs.options.claimName=k8s-sparkhistorylogs&lt;/P&gt;&lt;P&gt;spark.kubernetes.executor.volumes.persistentVolumeClaim.logs.mount.path=/logs&lt;/P&gt;&lt;P&gt;spark.kubernetes.executor.volumes.persistentVolumeClaim.logs.mount.readOnly=false&lt;/P&gt;&lt;P&gt;spark.kubernetes.executor.volumes.persistentVolumeClaim.logs.options.claimName=k8s-sparkhistorylogs&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;spark.kubernetes.namespace=namespace-00000&lt;/P&gt;&lt;P&gt;spark.kubernetes.authenticate.driver.serviceAccountName=svcactnm-00000&lt;/P&gt;&lt;P&gt;spark.kubernetes.authenticate.oauthToken=token&lt;/P&gt;&lt;P&gt;spark.kubernetes.authenticate.caCertFile=/opt/osficerts/ctc.crt&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;spark.ui.xContentTypeOptions.enabled=true&lt;/P&gt;&lt;P&gt;spark.kubernetes.pyspark.pythonVersion=3&lt;/P&gt;</description>
      <pubDate>Thu, 17 Feb 2022 19:56:51 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/databricks-jdbc-remote-write/m-p/29546#M21269</guid>
      <dc:creator>Optum</dc:creator>
      <dc:date>2022-02-17T19:56:51Z</dc:date>
    </item>
    <item>
      <title>Re: Databricks JDBC &amp; Remote Write</title>
      <link>https://community.databricks.com/t5/data-engineering/databricks-jdbc-remote-write/m-p/29547#M21270</link>
      <description>&lt;P&gt;Could you try setting the flag to ignore transactions? I’m not sure what the exact flag is, but there should be more details in the JDBC manual on how to do this.&lt;/P&gt;</description>
      <pubDate>Wed, 16 Mar 2022 05:24:51 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/databricks-jdbc-remote-write/m-p/29547#M21270</guid>
      <dc:creator>Atanu</dc:creator>
      <dc:date>2022-03-16T05:24:51Z</dc:date>
    </item>
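The flag referred to here appears in the Simba Spark JDBC driver manual as IgnoreTransactions, passed as a connection-URL property. A sketch of the URL from the original post with it appended (an assumption based on the driver manual, not verified against a live cluster, and a later reply in this thread reports it did not resolve the issue):

```scala
// Assumption: IgnoreTransactions=1 tells the driver to ignore transaction-
// related calls (setAutoCommit, commit, rollback) instead of throwing.
// Consult the Simba Spark JDBC driver manual for the authoritative option list.
val url = "jdbc:spark://adb-HOST_ID.azuredatabricks.net:443/default;" +
  "transportMode=http;ssl=1;httpPath=HTTP_PATH;AuthMech=3;" +
  "UID=token;PWD=TOKEN;IgnoreTransactions=1"
```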
    <item>
      <title>Re: Databricks JDBC &amp; Remote Write</title>
      <link>https://community.databricks.com/t5/data-engineering/databricks-jdbc-remote-write/m-p/29548#M21271</link>
      <description>&lt;P&gt;Hi @Kai McNeely​&amp;nbsp;, were you able to resolve this? I am also facing the same issue using exact same spark and JDBC driver versions.&lt;/P&gt;</description>
      <pubDate>Tue, 10 Jan 2023 05:35:05 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/databricks-jdbc-remote-write/m-p/29548#M21271</guid>
      <dc:creator>pulkitm</dc:creator>
      <dc:date>2023-01-10T05:35:05Z</dc:date>
    </item>
    <item>
      <title>Re: Databricks JDBC &amp; Remote Write</title>
      <link>https://community.databricks.com/t5/data-engineering/databricks-jdbc-remote-write/m-p/29549#M21272</link>
      <description>&lt;P&gt;@Kaniz Fatma​&amp;nbsp;The resolution is to add the JDBC driver's class name to the list of configured drivers which have autoCommit turned off. How can we manage this list on a Windows machine? Can you summarize the steps? It would be really appreciated.&lt;/P&gt;</description>
      <pubDate>Tue, 10 Jan 2023 07:37:39 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/databricks-jdbc-remote-write/m-p/29549#M21272</guid>
      <dc:creator>pulkitm</dc:creator>
      <dc:date>2023-01-10T07:37:39Z</dc:date>
    </item>
    <item>
      <title>Re: Databricks JDBC &amp; Remote Write</title>
      <link>https://community.databricks.com/t5/data-engineering/databricks-jdbc-remote-write/m-p/29550#M21273</link>
      <description>&lt;P&gt;I tried setting the IgnoreTransactions flag to 1, but it still didn't resolve the issue.&lt;/P&gt;</description>
      <pubDate>Tue, 10 Jan 2023 07:42:02 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/databricks-jdbc-remote-write/m-p/29550#M21273</guid>
      <dc:creator>pulkitm</dc:creator>
      <dc:date>2023-01-10T07:42:02Z</dc:date>
    </item>
    <item>
      <title>Re: Databricks JDBC &amp; Remote Write</title>
      <link>https://community.databricks.com/t5/data-engineering/databricks-jdbc-remote-write/m-p/118726#M45697</link>
      <description>&lt;P&gt;Any update on the issue?&lt;/P&gt;</description>
      <pubDate>Sat, 10 May 2025 01:03:07 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/databricks-jdbc-remote-write/m-p/118726#M45697</guid>
      <dc:creator>RoK1</dc:creator>
      <dc:date>2025-05-10T01:03:07Z</dc:date>
    </item>
  </channel>
</rss>

