<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Clarification on Data Privacy with ai_query Models in Administration &amp; Architecture</title>
    <link>https://community.databricks.com/t5/administration-architecture/clarification-on-data-privacy-with-ai-query-models/m-p/128979#M3908</link>
    <description>&lt;P&gt;Hi everyone,&lt;/P&gt;&lt;P&gt;We've had a client ask about the use of the Claude 3.7 Sonnet model (and others) in the Databricks SQL editor via the ai_query function. Specifically, they want to confirm whether any data passed to these models is ringfenced — i.e., not shared outside their environment and not used to train foundation models or improve services for other customers.&lt;/P&gt;&lt;P&gt;From the &lt;A href="https://learn.microsoft.com/en-us/azure/databricks/databricks-ai/databricks-ai-trust" target="_self"&gt;Databricks AI Trust documentation&lt;/A&gt;, it looks like models respect Unity Catalog permissions and do not share customer data externally. However, our client would appreciate official confirmation of Databricks' position on data isolation and privacy when using these models.&lt;/P&gt;&lt;P&gt;Could someone from Databricks confirm this?&lt;/P&gt;&lt;P&gt;I appreciate any help you can provide.&lt;/P&gt;</description>
    <pubDate>Wed, 20 Aug 2025 13:05:38 GMT</pubDate>
    <dc:creator>boitumelodikoko</dc:creator>
    <dc:date>2025-08-20T13:05:38Z</dc:date>
    <item>
      <title>Clarification on Data Privacy with ai_query Models</title>
      <link>https://community.databricks.com/t5/administration-architecture/clarification-on-data-privacy-with-ai-query-models/m-p/128979#M3908</link>
      <description>&lt;P&gt;Hi everyone,&lt;/P&gt;&lt;P&gt;We've had a client ask about the use of the Claude 3.7 Sonnet model (and others) in the Databricks SQL editor via the ai_query function. Specifically, they want to confirm whether any data passed to these models is ringfenced — i.e., not shared outside their environment and not used to train foundation models or improve services for other customers.&lt;/P&gt;&lt;P&gt;From the &lt;A href="https://learn.microsoft.com/en-us/azure/databricks/databricks-ai/databricks-ai-trust" target="_self"&gt;Databricks AI Trust documentation&lt;/A&gt;, it looks like models respect Unity Catalog permissions and do not share customer data externally. However, our client would appreciate official confirmation of Databricks' position on data isolation and privacy when using these models.&lt;/P&gt;&lt;P&gt;Could someone from Databricks confirm this?&lt;/P&gt;&lt;P&gt;I appreciate any help you can provide.&lt;/P&gt;</description>
      <pubDate>Wed, 20 Aug 2025 13:05:38 GMT</pubDate>
      <guid>https://community.databricks.com/t5/administration-architecture/clarification-on-data-privacy-with-ai-query-models/m-p/128979#M3908</guid>
      <dc:creator>boitumelodikoko</dc:creator>
      <dc:date>2025-08-20T13:05:38Z</dc:date>
    </item>
    <item>
      <title>Re: Clarification on Data Privacy with ai_query Models</title>
      <link>https://community.databricks.com/t5/administration-architecture/clarification-on-data-privacy-with-ai-query-models/m-p/128986#M3909</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/99008"&gt;@boitumelodikoko&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;The documentation you've linked is &lt;STRONG&gt;official confirmation&lt;/STRONG&gt; from Databricks (otherwise they wouldn't publish it in their public documentation in the first place). Every customer using AI functions within Databricks can expect each of the following points to hold:&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="szymon_dybczak_0-1755698068512.png" style="width: 400px;"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/19213iA194D10FDF64851C/image-size/medium?v=v2&amp;amp;px=400" role="button" title="szymon_dybczak_0-1755698068512.png" alt="szymon_dybczak_0-1755698068512.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;One thing to consider, though: many of the AI features Databricks provides are governed by partner-powered AI, i.e. &lt;STRONG&gt;Azure OpenAI&lt;/STRONG&gt;. So the question is whether your client trusts Microsoft, or any other large corporation, not to use their data for training models. Personally, I wouldn't trust them on that, but that's just my opinion &lt;span class="lia-unicode-emoji" title=":grinning_face_with_smiling_eyes:"&gt;😄&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 20 Aug 2025 13:57:13 GMT</pubDate>
      <guid>https://community.databricks.com/t5/administration-architecture/clarification-on-data-privacy-with-ai-query-models/m-p/128986#M3909</guid>
      <dc:creator>szymon_dybczak</dc:creator>
      <dc:date>2025-08-20T13:57:13Z</dc:date>
    </item>
  </channel>
</rss>

