<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: How to restart snowflake connector? in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/how-to-restart-snowflake-connector/m-p/9049#M4544</link>
    <description>&lt;P&gt;Yes, that would work. However, it is a longish Snowflake query that produces a number of tables, all of which are read by the Databricks notebook, so it would require quite a few changes. I'll use that alternative if I automate the process.&lt;/P&gt;&lt;P&gt;That said, I think this is a serious issue that deserves a warning from Databricks about using the Snowflake connector. One implicitly trusts that the connection will work, and there is no reason to expect that programmers will limit their Snowflake changes to the particular ongoing connection.&lt;/P&gt;&lt;P&gt;In any case, I imagine that under the hood a connection engine has been created that could be closed and reopened. Maybe one could access that engine with standard Snowflake SQLAlchemy commands from the notebook?&lt;/P&gt;</description>
    <pubDate>Sat, 25 Feb 2023 00:51:40 GMT</pubDate>
    <dc:creator>DavidMayer-Foul</dc:creator>
    <dc:date>2023-02-25T00:51:40Z</dc:date>
    <item>
      <title>How to restart snowflake connector?</title>
      <link>https://community.databricks.com/t5/data-engineering/how-to-restart-snowflake-connector/m-p/9047#M4542</link>
      <description>&lt;P&gt;After using spark.read.format("snowflake").options(**options).option("dbtable", "table_name").load() to read a table from Snowflake, when I then change the table in Snowflake and read it again, I get the first version of the table. I have worked around the problem by restarting the cluster. Is there a better way? Maybe restarting the Snowflake connector, or configuring it differently?&lt;/P&gt;</description>
      <pubDate>Tue, 21 Feb 2023 02:51:16 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/how-to-restart-snowflake-connector/m-p/9047#M4542</guid>
      <dc:creator>DavidMayer-Foul</dc:creator>
      <dc:date>2023-02-21T02:51:16Z</dc:date>
    </item>
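The question above can be sketched in code. This is a hypothetical workaround, not a confirmed fix from the thread: it assumes the stale result comes from Spark-side caching of the previously loaded relation, and that the `options` dict (Snowflake URL, warehouse, credentials, etc.) already exists as in the original post.

```python
# Hypothetical sketch: force a fresh read from Snowflake without restarting
# the cluster, on the assumption that Spark-side caching is returning the
# old version of the table. `spark` is the notebook's SparkSession and
# `options` is the connector options dict from the original post.
spark.catalog.clearCache()  # drop any cached DataFrame / relation data

df = (
    spark.read.format("snowflake")
    .options(**options)              # account, user, warehouse, etc.
    .option("dbtable", "table_name")
    .load()
)
```

If the staleness instead comes from the connector's own session state, clearing the Spark cache alone may not help, which is why the reply below speculates about closing and reopening the underlying connection.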
    <item>
      <title>Re: How to restart snowflake connector?</title>
      <link>https://community.databricks.com/t5/data-engineering/how-to-restart-snowflake-connector/m-p/9049#M4544</link>
      <description>&lt;P&gt;Yes, that would work. However, it is a longish Snowflake query that produces a number of tables, all of which are read by the Databricks notebook, so it would require quite a few changes. I'll use that alternative if I automate the process.&lt;/P&gt;&lt;P&gt;That said, I think this is a serious issue that deserves a warning from Databricks about using the Snowflake connector. One implicitly trusts that the connection will work, and there is no reason to expect that programmers will limit their Snowflake changes to the particular ongoing connection.&lt;/P&gt;&lt;P&gt;In any case, I imagine that under the hood a connection engine has been created that could be closed and reopened. Maybe one could access that engine with standard Snowflake SQLAlchemy commands from the notebook?&lt;/P&gt;</description>
      <pubDate>Sat, 25 Feb 2023 00:51:40 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/how-to-restart-snowflake-connector/m-p/9049#M4544</guid>
      <dc:creator>DavidMayer-Foul</dc:creator>
      <dc:date>2023-02-25T00:51:40Z</dc:date>
    </item>
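The reply's closing suggestion, managing a Snowflake connection explicitly from the notebook with SQLAlchemy, could look roughly like this. It is a minimal sketch assuming the snowflake-sqlalchemy dialect is installed; all credentials and identifiers are placeholders, not values from the thread.

```python
# Hypothetical sketch of the reply's idea: open a Snowflake connection via
# SQLAlchemy (snowflake-sqlalchemy dialect) so it can be closed and reopened
# on demand. USER/PASSWORD/ACCOUNT/DB/SCHEMA are placeholders.
from sqlalchemy import create_engine, text

engine = create_engine(
    "snowflake://{user}:{password}@{account}/{db}/{schema}".format(
        user="USER", password="PASSWORD", account="ACCOUNT",
        db="DB", schema="SCHEMA",
    )
)

with engine.connect() as conn:
    rows = conn.execute(text("select current_version()")).fetchall()

engine.dispose()  # close pooled connections; the next connect() starts fresh
```

Note that this engine is separate from the Spark Snowflake connector's own session, so disposing of it would not by itself restart the connector; it only gives the notebook a connection whose lifecycle it fully controls.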
  </channel>
</rss>