<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Why does Spark SaveMode "overwrite" always drop the table although "truncate" is true? in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/why-spark-save-modes-quot-overwrite-quot-always-drops-table/m-p/12751#M7516</link>
    <description>&lt;P&gt;Hi @Akif Cakir,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;You are correct, this is the expected behavior when using the JDBC connector. Docs &lt;A href="https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html" alt="https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html" target="_blank"&gt;here&lt;/A&gt;.&lt;/P&gt;&lt;P&gt;Have you tried using the "exasol" connector instead of JDBC? Do you also get the same behavior?&lt;/P&gt;</description>
    <pubDate>Sat, 13 Nov 2021 00:35:59 GMT</pubDate>
    <dc:creator>jose_gonzalez</dc:creator>
    <dc:date>2021-11-13T00:35:59Z</dc:date>
    <item>
      <title>Why does Spark SaveMode "overwrite" always drop the table although "truncate" is true?</title>
      <link>https://community.databricks.com/t5/data-engineering/why-spark-save-modes-quot-overwrite-quot-always-drops-table/m-p/12749#M7514</link>
      <description>&lt;P&gt;Hi Dear Team,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I am trying to import data from Databricks into Exasol DB.&lt;/P&gt;&lt;P&gt;I am using the following code, with Spark version 3.0.1:&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;dfw.write \
    .format("jdbc") \
    .option("driver", exa_driver) \
    .option("url", exa_url) \
    .option("dbtable", "table") \
    .option("user", username) \
    .option("password", exa_password) \
    .option("truncate", "true") \
    .option("numPartitions", "1") \
    .option("fetchsize", "100000") \
    .mode("overwrite") \
    .save()&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The problem is that when the mode is "overwrite", it always drops the target table in the Exasol DB, although the Spark documentation (&lt;A href="https://spark.apache.org/docs/3.0.1/sql-data-sources-jdbc.html#content" alt="https://spark.apache.org/docs/3.0.1/sql-data-sources-jdbc.html#content" target="_blank"&gt;https://spark.apache.org/docs/3.0.1/sql-data-sources-jdbc.html#content&lt;/A&gt;) says of the "truncate" option:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;truncate --&amp;gt; This is a JDBC writer related option. When SaveMode.Overwrite is enabled, this option causes Spark to truncate an existing table instead of dropping and recreating it. This can be more efficient, and prevents the table metadata (e.g., indices) from being removed. However, it will not work in some cases, such as when the new data has a different schema. It defaults to false. This option applies only to writing.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;According to this explanation, I would expect that with option("truncate", "true") it should not drop but truncate the table. Nevertheless, it drops the table even in that case. Note: we could run a separate truncate command and then use append mode, but I do not want an extra second command; I would like to solve it in a single command, as suggested in the Exasol documentation here (&lt;A href="https://github.com/exasol/spark-exasol-connector/blob/main/doc/user_guide/user_guide.md#spark-save-modes" alt="https://github.com/exasol/spark-exasol-connector/blob/main/doc/user_guide/user_guide.md#spark-save-modes" target="_blank"&gt;https://github.com/exasol/spark-exasol-connector/blob/main/doc/user_guide/user_guide.md#spark-save-modes&lt;/A&gt;) as well.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Am I missing something, or do you have a resolution?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 21 Oct 2021 14:56:24 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/why-spark-save-modes-quot-overwrite-quot-always-drops-table/m-p/12749#M7514</guid>
      <dc:creator>AkifCakir</dc:creator>
      <dc:date>2021-10-21T14:56:24Z</dc:date>
    </item>
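The question above already names the standard workaround: run a separate truncate first, then write with mode("append") so the table is never dropped. A minimal stand-in using Python's sqlite3 (table and index names are hypothetical; Exasol and Spark are not involved) illustrates why this keeps table metadata such as indexes intact:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER, v TEXT)")
conn.execute("CREATE INDEX idx_v ON target (v)")  # metadata we want to keep
conn.executemany("INSERT INTO target VALUES (?, ?)", [(1, "old"), (2, "old")])

# Step 1: truncate (SQLite has no TRUNCATE; DELETE without WHERE is the analogue).
conn.execute("DELETE FROM target")

# Step 2: append the new batch -- the table definition and its index survive,
# which is exactly what drop-and-recreate would have destroyed.
conn.executemany("INSERT INTO target VALUES (?, ?)", [(10, "new")])

rows = conn.execute("SELECT id, v FROM target").fetchall()
indexes = [r[1] for r in conn.execute("PRAGMA index_list(target)")]
```

In the Spark setting this corresponds to issuing the TRUNCATE over a plain JDBC connection, then calling .mode("append") instead of .mode("overwrite"); the cost the questioner objects to is precisely that extra first statement.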
    <item>
      <title>Re: Why does Spark SaveMode "overwrite" always drop the table although "truncate" is true?</title>
      <link>https://community.databricks.com/t5/data-engineering/why-spark-save-modes-quot-overwrite-quot-always-drops-table/m-p/12751#M7516</link>
      <description>&lt;P&gt;Hi @Akif Cakir,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;You are correct, this is the expected behavior when using the JDBC connector. Docs &lt;A href="https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html" alt="https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html" target="_blank"&gt;here&lt;/A&gt;.&lt;/P&gt;&lt;P&gt;Have you tried using the "exasol" connector instead of JDBC? Do you also get the same behavior?&lt;/P&gt;</description>
      <pubDate>Sat, 13 Nov 2021 00:35:59 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/why-spark-save-modes-quot-overwrite-quot-always-drops-table/m-p/12751#M7516</guid>
      <dc:creator>jose_gonzalez</dc:creator>
      <dc:date>2021-11-13T00:35:59Z</dc:date>
    </item>
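The "expected behavior" in the reply above can be made concrete. As I understand Spark's JDBC writer (a paraphrase of its internal logic, offered as an assumption rather than a documentation quote), in SaveMode.Overwrite it only issues TRUNCATE when the JDBC dialect positively reports that TRUNCATE does not cascade; for a generic or unknown dialect, such as an Exasol URL over plain JDBC, that answer is unknown, so the writer falls back to drop-and-recreate even with truncate=true:

```python
def overwrite_action(truncate_option, is_cascading_truncate):
    """Model which path a JDBC writer takes for SaveMode.Overwrite.

    is_cascading_truncate models the dialect's answer to "does TRUNCATE
    cascade?": True, False, or None when the dialect is unknown.
    This is a sketch of the decision, not Spark source code.
    """
    if truncate_option and is_cascading_truncate is False:
        return "truncate"        # dialect is known safe: truncate=true is honored
    return "drop_and_recreate"   # default path, also taken for unknown dialects

# A built-in dialect that answers False gets the truncate path; a generic
# JDBC URL answers None and the table is dropped despite truncate=true.
```

This would explain the thread: the option is not ignored, but it is only honored when the dialect is recognized, which is also why the dedicated "exasol" connector suggested above may behave differently.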
    <item>
      <title>Re: Why does Spark SaveMode "overwrite" always drop the table although "truncate" is true?</title>
      <link>https://community.databricks.com/t5/data-engineering/why-spark-save-modes-quot-overwrite-quot-always-drops-table/m-p/12752#M7517</link>
      <description>&lt;P&gt;Facing the same problem, I used the following:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;sfOptions = {
  "sfURL" : "&amp;lt;account&amp;gt;.snowflakecomputing.com",
  "sfAccount" : "&amp;lt;account&amp;gt;",
  "sfUser" : "&amp;lt;user&amp;gt;",
  "sfPassword" : "***",
  "sfDatabase" : "&amp;lt;database&amp;gt;",
  "sfSchema" : "&amp;lt;schema&amp;gt;",
  "sfWarehouse" : "&amp;lt;warehouse&amp;gt;",
  "truncate_table" : "ON",      # these two options keep the table from
  "usestagingtable" : "OFF",    # being dropped on overwrite
}&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;A href="https://community.snowflake.com/s/article/How-to-Load-Data-in-Spark-with-Overwrite-mode-without-Changing-table-Structure" target="_blank"&gt;https://community.snowflake.com/s/article/How-to-Load-Data-in-Spark-with-Overwrite-mode-without-Changing-table-Structure&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 12 Aug 2022 16:06:04 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/why-spark-save-modes-quot-overwrite-quot-always-drops-table/m-p/12752#M7517</guid>
      <dc:creator>mick042</dc:creator>
      <dc:date>2022-08-12T16:06:04Z</dc:date>
    </item>
    <item>
      <title>Re: Why does Spark SaveMode "overwrite" always drop the table although "truncate" is true?</title>
      <link>https://community.databricks.com/t5/data-engineering/why-spark-save-modes-quot-overwrite-quot-always-drops-table/m-p/54234#M30023</link>
      <description>&lt;P&gt;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/15685"&gt;@AkifCakir&lt;/a&gt;, were you able to find a way to truncate without dropping the table using the .write function? I am facing the same issue as well.&lt;/P&gt;</description>
      <pubDate>Wed, 29 Nov 2023 17:39:00 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/why-spark-save-modes-quot-overwrite-quot-always-drops-table/m-p/54234#M30023</guid>
      <dc:creator>Gembo</dc:creator>
      <dc:date>2023-11-29T17:39:00Z</dc:date>
    </item>
  </channel>
</rss>

