<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Sharepoint Connector Site Limitation in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/sharepoint-connector-site-limitation/m-p/155014#M54174</link>
    <description>&lt;P&gt;Hi Scott,&lt;/P&gt;
&lt;P&gt;I've just asked our product team the question. By the root-level site, do you mean content that is stored on the root-level site? Or do you mean everything across your root tenant, i.e. you want to ingest all files across your tenant in a single pipeline? If it's the root tenant, they will be building support for this in the managed connector, hopefully launching next quarter.&lt;/P&gt;
&lt;P&gt;Thanks,&lt;BR /&gt;&lt;BR /&gt;Emma&lt;/P&gt;</description>
    <pubDate>Tue, 21 Apr 2026 08:09:23 GMT</pubDate>
    <dc:creator>emma_s</dc:creator>
    <dc:date>2026-04-21T08:09:23Z</dc:date>
    <item>
      <title>Sharepoint Connector Site Limitation</title>
      <link>https://community.databricks.com/t5/data-engineering/sharepoint-connector-site-limitation/m-p/154828#M54146</link>
      <description>&lt;P&gt;Hey All!&lt;/P&gt;&lt;P&gt;We are trying out the Beta connector for &lt;A href="https://learn.microsoft.com/en-us/azure/databricks/ingestion/sharepoint" target="_blank" rel="noopener"&gt;SharePoint&lt;/A&gt; and found that the connector will not work at the root-level site. Is there a reason for this limitation? It is unfortunately a hard blocker for us to use the native connector.&lt;/P&gt;&lt;DIV class=""&gt;&lt;H2 id="must_start_with_sites"&gt;MUST_START_WITH_SITES&lt;/H2&gt;&lt;/DIV&gt;&lt;P&gt;Path must start with '/sites/' or '/teams/', got 'pathComponent'.&lt;BR /&gt;&lt;BR /&gt;Does anyone know if this is just a beta limitation and, if so, whether it will be removed in the future?&lt;/P&gt;&lt;P&gt;Thanks,&lt;BR /&gt;Scott&lt;/P&gt;</description>
      <pubDate>Fri, 17 Apr 2026 16:12:04 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/sharepoint-connector-site-limitation/m-p/154828#M54146</guid>
      <dc:creator>TX-Aggie-00</dc:creator>
      <dc:date>2026-04-17T16:12:04Z</dc:date>
    </item>
    <item>
      <title>Re: Sharepoint Connector Site Limitation</title>
      <link>https://community.databricks.com/t5/data-engineering/sharepoint-connector-site-limitation/m-p/154914#M54154</link>
      <description>&lt;P&gt;How have you made the connection? The reason I am asking is that we have two separate tenants (SharePoint in one tenant and Databricks set up in a different tenant), and at the moment we are using a Logic App to bring the data into the platform. I know we need a principal that has access to the SharePoint tenant, but in our case the IT teams are not willing to grant the access required to pull the data, so we are looking for ways to bring the data in and for possible solutions around this. Thanks.&lt;/P&gt;</description>
      <pubDate>Mon, 20 Apr 2026 07:58:54 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/sharepoint-connector-site-limitation/m-p/154914#M54154</guid>
      <dc:creator>saravjeet</dc:creator>
      <dc:date>2026-04-20T07:58:54Z</dc:date>
    </item>
    <item>
      <title>Re: Sharepoint Connector Site Limitation</title>
      <link>https://community.databricks.com/t5/data-engineering/sharepoint-connector-site-limitation/m-p/154960#M54161</link>
      <description>&lt;P&gt;Hey Saravjeet, we are currently working in the same tenant. Not sure whether the connector has an issue with tenant-to-tenant, but we had to revert to the Graph APIs to sync. We are still using Logic Apps to capture deletes and add those to a storage queue. We have race-condition logic to accommodate both processes, and that would still be the case if we used the connector, since deletes are not captured (as far as I can tell).&lt;/P&gt;&lt;P&gt;Thanks&lt;BR /&gt;Scott&lt;/P&gt;</description>
      <pubDate>Mon, 20 Apr 2026 14:28:20 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/sharepoint-connector-site-limitation/m-p/154960#M54161</guid>
      <dc:creator>TX-Aggie-00</dc:creator>
      <dc:date>2026-04-20T14:28:20Z</dc:date>
    </item>
    <item>
      <title>Re: Sharepoint Connector Site Limitation</title>
      <link>https://community.databricks.com/t5/data-engineering/sharepoint-connector-site-limitation/m-p/155014#M54174</link>
      <description>&lt;P&gt;Hi Scott,&lt;/P&gt;
&lt;P&gt;I've just asked our product team the question. By the root-level site, do you mean content that is stored on the root-level site? Or do you mean everything across your root tenant, i.e. you want to ingest all files across your tenant in a single pipeline? If it's the root tenant, they will be building support for this in the managed connector, hopefully launching next quarter.&lt;/P&gt;
&lt;P&gt;Thanks,&lt;BR /&gt;&lt;BR /&gt;Emma&lt;/P&gt;</description>
      <pubDate>Tue, 21 Apr 2026 08:09:23 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/sharepoint-connector-site-limitation/m-p/155014#M54174</guid>
      <dc:creator>emma_s</dc:creator>
      <dc:date>2026-04-21T08:09:23Z</dc:date>
    </item>
  </channel>
</rss>