<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Vector Index Creation for external embedding model takes a lot of time in Machine Learning</title>
    <link>https://community.databricks.com/t5/machine-learning/vector-index-creation-for-external-embedding-model-takes-a-lot/m-p/137947#M4411</link>
    <description>&lt;P class="my-2 [&amp;amp;+p]:mt-4 [&amp;amp;_strong:has(+br)]:inline-block [&amp;amp;_strong:has(+br)]:pb-2"&gt;The main reason your Hugging Face embedding model endpoint is taking much longer than Databricks’ own large_bge_en model to build a vector search index is likely due to differences in operational architecture and performance optimizations between external custom endpoints and native Databricks-managed models.&lt;/P&gt;
&lt;H2 class="mb-2 mt-4 font-display font-semimedium text-base first:mt-0"&gt;Key Factors Impacting Index Creation Time&lt;/H2&gt;
&lt;UL class="marker:text-quiet list-disc"&gt;
&lt;LI class="py-0 my-0 prose-p:pt-0 prose-p:mb-2 prose-p:my-0 [&amp;amp;&amp;gt;p]:pt-0 [&amp;amp;&amp;gt;p]:mb-2 [&amp;amp;&amp;gt;p]:my-0"&gt;
&lt;P class="my-2 [&amp;amp;+p]:mt-4 [&amp;amp;_strong:has(+br)]:inline-block [&amp;amp;_strong:has(+br)]:pb-2"&gt;&lt;STRONG&gt;API/Network Overhead&lt;/STRONG&gt;: Using an external model (even if Hugging Face-hosted) involves network latency for every embedding call, which adds significant overhead, especially for large-scale batch operations.​&lt;/P&gt;
&lt;/LI&gt;
&lt;LI class="py-0 my-0 prose-p:pt-0 prose-p:mb-2 prose-p:my-0 [&amp;amp;&amp;gt;p]:pt-0 [&amp;amp;&amp;gt;p]:mb-2 [&amp;amp;&amp;gt;p]:my-0"&gt;
&lt;P class="my-2 [&amp;amp;+p]:mt-4 [&amp;amp;_strong:has(+br)]:inline-block [&amp;amp;_strong:has(+br)]:pb-2"&gt;&lt;STRONG&gt;Endpoint Scaling and Cold Starts&lt;/STRONG&gt;: If your Hugging Face endpoint is set to scale to zero when idle, cold starts can add minutes to your first requests. Databricks managed models are optimized to avoid such cold start penalties.​&lt;/P&gt;
&lt;/LI&gt;
&lt;LI class="py-0 my-0 prose-p:pt-0 prose-p:mb-2 prose-p:my-0 [&amp;amp;&amp;gt;p]:pt-0 [&amp;amp;&amp;gt;p]:mb-2 [&amp;amp;&amp;gt;p]:my-0"&gt;
&lt;P class="my-2 [&amp;amp;+p]:mt-4 [&amp;amp;_strong:has(+br)]:inline-block [&amp;amp;_strong:has(+br)]:pb-2"&gt;&lt;STRONG&gt;Batching and Throughput&lt;/STRONG&gt;: Databricks models are tightly integrated and can leverage optimized hardware accelerators, efficient batching, and parallelization. Hugging Face endpoints may have lower throughput limits, especially on public or lightly provisioned infrastructure.​&lt;/P&gt;
&lt;/LI&gt;
&lt;LI class="py-0 my-0 prose-p:pt-0 prose-p:mb-2 prose-p:my-0 [&amp;amp;&amp;gt;p]:pt-0 [&amp;amp;&amp;gt;p]:mb-2 [&amp;amp;&amp;gt;p]:my-0"&gt;
&lt;P class="my-2 [&amp;amp;+p]:mt-4 [&amp;amp;_strong:has(+br)]:inline-block [&amp;amp;_strong:has(+br)]:pb-2"&gt;&lt;STRONG&gt;Embedding Dimension Checks and Data Structure&lt;/STRONG&gt;: Mismatches between the embedding size your model outputs and what the index expects can cause extra validation or conversion work, slowing the indexing pipeline.​&lt;/P&gt;
&lt;/LI&gt;
&lt;LI class="py-0 my-0 prose-p:pt-0 prose-p:mb-2 prose-p:my-0 [&amp;amp;&amp;gt;p]:pt-0 [&amp;amp;&amp;gt;p]:mb-2 [&amp;amp;&amp;gt;p]:my-0"&gt;
&lt;P class="my-2 [&amp;amp;+p]:mt-4 [&amp;amp;_strong:has(+br)]:inline-block [&amp;amp;_strong:has(+br)]:pb-2"&gt;&lt;STRONG&gt;Serialization and Format&lt;/STRONG&gt;: If your external endpoint returns embeddings in a different format or requires additional deserialization, this can also introduce latency compared to Databricks’ direct-integration models.&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
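&lt;P&gt;To make the batching point concrete: amortizing the fixed per-request network cost over many texts is mostly a matter of chunking on the client side. The sketch below uses a hypothetical &lt;CODE&gt;embed_batch&lt;/CODE&gt; callable as a stand-in for whatever client your endpoint exposes; it is not a specific Databricks or Hugging Face API.&lt;/P&gt;

```python
from typing import Callable, Iterable, List

def chunked(items: List[str], size: int) -> Iterable[List[str]]:
    """Yield successive batches of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def embed_all(texts: List[str],
              embed_batch: Callable[[List[str]], List[List[float]]],
              batch_size: int = 64) -> List[List[float]]:
    """Embed all texts, sending `batch_size` inputs per request so the
    fixed network latency is paid once per batch, not once per text."""
    vectors: List[List[float]] = []
    for batch in chunked(texts, batch_size):
        vectors.extend(embed_batch(batch))  # one round trip per batch
    return vectors
```

&lt;P&gt;With a batch size of 64, embedding 150 documents costs 3 round trips instead of 150, so a 100&amp;nbsp;ms network overhead per call drops from ~15&amp;nbsp;s of pure latency to well under a second.&lt;/P&gt;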
&lt;H2 class="mb-2 mt-4 font-display font-semimedium text-base first:mt-0"&gt;Best Practices and Suggestions&lt;/H2&gt;
&lt;UL class="marker:text-quiet list-disc"&gt;
&lt;LI class="py-0 my-0 prose-p:pt-0 prose-p:mb-2 prose-p:my-0 [&amp;amp;&amp;gt;p]:pt-0 [&amp;amp;&amp;gt;p]:mb-2 [&amp;amp;&amp;gt;p]:my-0"&gt;
&lt;P class="my-2 [&amp;amp;+p]:mt-4 [&amp;amp;_strong:has(+br)]:inline-block [&amp;amp;_strong:has(+br)]:pb-2"&gt;&lt;STRONG&gt;Precompute Embeddings&lt;/STRONG&gt;: Rather than calling the external endpoint live during indexing, precompute and store embeddings for your dataset, then build the index from this static data (self-managed embeddings). This is the fastest approach and is the method Databricks benchmarks rely on.​&lt;/P&gt;
&lt;/LI&gt;
&lt;LI class="py-0 my-0 prose-p:pt-0 prose-p:mb-2 prose-p:my-0 [&amp;amp;&amp;gt;p]:pt-0 [&amp;amp;&amp;gt;p]:mb-2 [&amp;amp;&amp;gt;p]:my-0"&gt;
&lt;P class="my-2 [&amp;amp;+p]:mt-4 [&amp;amp;_strong:has(+br)]:inline-block [&amp;amp;_strong:has(+br)]:pb-2"&gt;&lt;STRONG&gt;Optimize Endpoint Provisioning&lt;/STRONG&gt;: Ensure your Hugging Face endpoint has adequate resources and does not scale to zero. If possible, provision for high concurrency and throughput to reduce latency.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI class="py-0 my-0 prose-p:pt-0 prose-p:mb-2 prose-p:my-0 [&amp;amp;&amp;gt;p]:pt-0 [&amp;amp;&amp;gt;p]:mb-2 [&amp;amp;&amp;gt;p]:my-0"&gt;
&lt;P class="my-2 [&amp;amp;+p]:mt-4 [&amp;amp;_strong:has(+br)]:inline-block [&amp;amp;_strong:has(+br)]:pb-2"&gt;&lt;STRONG&gt;Batch Requests&lt;/STRONG&gt;: If your endpoint supports batching, maximize batch sizes to reduce per-request overhead and make more efficient use of resources.​&lt;/P&gt;
&lt;/LI&gt;
&lt;LI class="py-0 my-0 prose-p:pt-0 prose-p:mb-2 prose-p:my-0 [&amp;amp;&amp;gt;p]:pt-0 [&amp;amp;&amp;gt;p]:mb-2 [&amp;amp;&amp;gt;p]:my-0"&gt;
&lt;P class="my-2 [&amp;amp;+p]:mt-4 [&amp;amp;_strong:has(+br)]:inline-block [&amp;amp;_strong:has(+br)]:pb-2"&gt;&lt;STRONG&gt;Monitor and Benchmark&lt;/STRONG&gt;: Regularly profile the performance of both embedding generation and index building. Look for bottlenecks in network, serialization, or dimension mismatches.​&lt;/P&gt;
&lt;/LI&gt;
&lt;LI class="py-0 my-0 prose-p:pt-0 prose-p:mb-2 prose-p:my-0 [&amp;amp;&amp;gt;p]:pt-0 [&amp;amp;&amp;gt;p]:mb-2 [&amp;amp;&amp;gt;p]:my-0"&gt;
&lt;P class="my-2 [&amp;amp;+p]:mt-4 [&amp;amp;_strong:has(+br)]:inline-block [&amp;amp;_strong:has(+br)]:pb-2"&gt;&lt;STRONG&gt;Consider Edge Models or Hosting&lt;/STRONG&gt;: When feasible, host the embedding model closer to your data, perhaps within Databricks itself, so you have greater control and minimize network latency.​&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
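&lt;P&gt;A minimal sketch of the precompute-then-index route, assuming hypothetical catalog, table, endpoint, and index names. The vectors are written to a column of the source Delta table first, so the index build reads static data; the &lt;CODE&gt;create_delta_sync_index&lt;/CODE&gt; call (from the databricks-vectorsearch client) is shown commented out because it requires a live workspace.&lt;/P&gt;

```python
# Self-managed embeddings: the source table already holds a vector column,
# so index creation never calls the embedding endpoint.
# All names below are placeholders for illustration.
index_spec = {
    "endpoint_name": "my_vs_endpoint",                      # hypothetical
    "index_name": "main.docs.docs_index",                   # hypothetical
    "source_table_name": "main.docs.docs_with_embeddings",  # hypothetical
    "pipeline_type": "TRIGGERED",
    "primary_key": "id",
    "embedding_vector_column": "embedding",  # precomputed vectors
    "embedding_dimension": 1024,  # must match your model's output size
}

# In a workspace with the databricks-vectorsearch package installed:
# from databricks.vector_search.client import VectorSearchClient
# VectorSearchClient().create_delta_sync_index(**index_spec)
```

&lt;P&gt;Because the dimension is declared up front, a mismatch with the model's actual output surfaces when the vectors are written, rather than partway through a slow index build.&lt;/P&gt;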
&lt;P class="my-2 [&amp;amp;+p]:mt-4 [&amp;amp;_strong:has(+br)]:inline-block [&amp;amp;_strong:has(+br)]:pb-2"&gt;In summary, the main bottleneck is the extra latency introduced by the external Hugging Face endpoint, which is avoided by Databricks’ optimized, tightly integrated offering. Moving to a precomputed/self-managed embedding workflow and tuning your endpoint can dramatically improve performance.​&lt;/P&gt;</description>
    <pubDate>Thu, 06 Nov 2025 11:53:39 GMT</pubDate>
    <dc:creator>mark_ott</dc:creator>
    <dc:date>2025-11-06T11:53:39Z</dc:date>
    <item>
      <title>Vector Index Creation for external embedding model takes a lot of time</title>
      <link>https://community.databricks.com/t5/machine-learning/vector-index-creation-for-external-embedding-model-takes-a-lot/m-p/110545#M3969</link>
      <description>&lt;P&gt;I have an embedding model endpoint created and served. It is a Hugging Face model that Databricks doesn&#8217;t provide. I am using this model to create a vector search index; however, the index takes a long time to build. I observed that when I use the Databricks-provided embedding model (large_bge_en), it takes only seconds. Any suggestions on what could be going wrong in my case?&lt;/P&gt;</description>
      <pubDate>Wed, 19 Feb 2025 03:36:32 GMT</pubDate>
      <guid>https://community.databricks.com/t5/machine-learning/vector-index-creation-for-external-embedding-model-takes-a-lot/m-p/110545#M3969</guid>
      <dc:creator>rjain</dc:creator>
      <dc:date>2025-02-19T03:36:32Z</dc:date>
    </item>
    <item>
      <title>Re: Vector Index Creation for external embedding model takes a lot of time</title>
      <link>https://community.databricks.com/t5/machine-learning/vector-index-creation-for-external-embedding-model-takes-a-lot/m-p/137947#M4411</link>
      <description>&lt;P class="my-2 [&amp;amp;+p]:mt-4 [&amp;amp;_strong:has(+br)]:inline-block [&amp;amp;_strong:has(+br)]:pb-2"&gt;The main reason your Hugging Face embedding model endpoint is taking much longer than Databricks’ own large_bge_en model to build a vector search index is likely due to differences in operational architecture and performance optimizations between external custom endpoints and native Databricks-managed models.&lt;/P&gt;
&lt;H2 class="mb-2 mt-4 font-display font-semimedium text-base first:mt-0"&gt;Key Factors Impacting Index Creation Time&lt;/H2&gt;
&lt;UL class="marker:text-quiet list-disc"&gt;
&lt;LI class="py-0 my-0 prose-p:pt-0 prose-p:mb-2 prose-p:my-0 [&amp;amp;&amp;gt;p]:pt-0 [&amp;amp;&amp;gt;p]:mb-2 [&amp;amp;&amp;gt;p]:my-0"&gt;
&lt;P class="my-2 [&amp;amp;+p]:mt-4 [&amp;amp;_strong:has(+br)]:inline-block [&amp;amp;_strong:has(+br)]:pb-2"&gt;&lt;STRONG&gt;API/Network Overhead&lt;/STRONG&gt;: Using an external model (even if Hugging Face-hosted) involves network latency for every embedding call, which adds significant overhead, especially for large-scale batch operations.​&lt;/P&gt;
&lt;/LI&gt;
&lt;LI class="py-0 my-0 prose-p:pt-0 prose-p:mb-2 prose-p:my-0 [&amp;amp;&amp;gt;p]:pt-0 [&amp;amp;&amp;gt;p]:mb-2 [&amp;amp;&amp;gt;p]:my-0"&gt;
&lt;P class="my-2 [&amp;amp;+p]:mt-4 [&amp;amp;_strong:has(+br)]:inline-block [&amp;amp;_strong:has(+br)]:pb-2"&gt;&lt;STRONG&gt;Endpoint Scaling and Cold Starts&lt;/STRONG&gt;: If your Hugging Face endpoint is set to scale to zero when idle, cold starts can add minutes to your first requests. Databricks managed models are optimized to avoid such cold start penalties.​&lt;/P&gt;
&lt;/LI&gt;
&lt;LI class="py-0 my-0 prose-p:pt-0 prose-p:mb-2 prose-p:my-0 [&amp;amp;&amp;gt;p]:pt-0 [&amp;amp;&amp;gt;p]:mb-2 [&amp;amp;&amp;gt;p]:my-0"&gt;
&lt;P class="my-2 [&amp;amp;+p]:mt-4 [&amp;amp;_strong:has(+br)]:inline-block [&amp;amp;_strong:has(+br)]:pb-2"&gt;&lt;STRONG&gt;Batching and Throughput&lt;/STRONG&gt;: Databricks models are tightly integrated and can leverage optimized hardware accelerators, efficient batching, and parallelization. Hugging Face endpoints may have lower throughput limits, especially on public or lightly provisioned infrastructure.​&lt;/P&gt;
&lt;/LI&gt;
&lt;LI class="py-0 my-0 prose-p:pt-0 prose-p:mb-2 prose-p:my-0 [&amp;amp;&amp;gt;p]:pt-0 [&amp;amp;&amp;gt;p]:mb-2 [&amp;amp;&amp;gt;p]:my-0"&gt;
&lt;P class="my-2 [&amp;amp;+p]:mt-4 [&amp;amp;_strong:has(+br)]:inline-block [&amp;amp;_strong:has(+br)]:pb-2"&gt;&lt;STRONG&gt;Embedding Dimension Checks and Data Structure&lt;/STRONG&gt;: Mismatches between the embedding size your model outputs and what the index expects can cause extra validation or conversion work, slowing the indexing pipeline.​&lt;/P&gt;
&lt;/LI&gt;
&lt;LI class="py-0 my-0 prose-p:pt-0 prose-p:mb-2 prose-p:my-0 [&amp;amp;&amp;gt;p]:pt-0 [&amp;amp;&amp;gt;p]:mb-2 [&amp;amp;&amp;gt;p]:my-0"&gt;
&lt;P class="my-2 [&amp;amp;+p]:mt-4 [&amp;amp;_strong:has(+br)]:inline-block [&amp;amp;_strong:has(+br)]:pb-2"&gt;&lt;STRONG&gt;Serialization and Format&lt;/STRONG&gt;: If your external endpoint returns embeddings in a different format or requires additional deserialization, this can also introduce latency compared to Databricks’ direct-integration models.&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
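&lt;P&gt;To make the batching point concrete: amortizing the fixed per-request network cost over many texts is mostly a matter of chunking on the client side. The sketch below uses a hypothetical &lt;CODE&gt;embed_batch&lt;/CODE&gt; callable as a stand-in for whatever client your endpoint exposes; it is not a specific Databricks or Hugging Face API.&lt;/P&gt;

```python
from typing import Callable, Iterable, List

def chunked(items: List[str], size: int) -> Iterable[List[str]]:
    """Yield successive batches of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def embed_all(texts: List[str],
              embed_batch: Callable[[List[str]], List[List[float]]],
              batch_size: int = 64) -> List[List[float]]:
    """Embed all texts, sending `batch_size` inputs per request so the
    fixed network latency is paid once per batch, not once per text."""
    vectors: List[List[float]] = []
    for batch in chunked(texts, batch_size):
        vectors.extend(embed_batch(batch))  # one round trip per batch
    return vectors
```

&lt;P&gt;With a batch size of 64, embedding 150 documents costs 3 round trips instead of 150, so a 100&amp;nbsp;ms network overhead per call drops from ~15&amp;nbsp;s of pure latency to well under a second.&lt;/P&gt;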
&lt;H2 class="mb-2 mt-4 font-display font-semimedium text-base first:mt-0"&gt;Best Practices and Suggestions&lt;/H2&gt;
&lt;UL class="marker:text-quiet list-disc"&gt;
&lt;LI class="py-0 my-0 prose-p:pt-0 prose-p:mb-2 prose-p:my-0 [&amp;amp;&amp;gt;p]:pt-0 [&amp;amp;&amp;gt;p]:mb-2 [&amp;amp;&amp;gt;p]:my-0"&gt;
&lt;P class="my-2 [&amp;amp;+p]:mt-4 [&amp;amp;_strong:has(+br)]:inline-block [&amp;amp;_strong:has(+br)]:pb-2"&gt;&lt;STRONG&gt;Precompute Embeddings&lt;/STRONG&gt;: Rather than calling the external endpoint live during indexing, precompute and store embeddings for your dataset, then build the index from this static data (self-managed embeddings). This is the fastest approach and is the method Databricks benchmarks rely on.​&lt;/P&gt;
&lt;/LI&gt;
&lt;LI class="py-0 my-0 prose-p:pt-0 prose-p:mb-2 prose-p:my-0 [&amp;amp;&amp;gt;p]:pt-0 [&amp;amp;&amp;gt;p]:mb-2 [&amp;amp;&amp;gt;p]:my-0"&gt;
&lt;P class="my-2 [&amp;amp;+p]:mt-4 [&amp;amp;_strong:has(+br)]:inline-block [&amp;amp;_strong:has(+br)]:pb-2"&gt;&lt;STRONG&gt;Optimize Endpoint Provisioning&lt;/STRONG&gt;: Ensure your Hugging Face endpoint has adequate resources and does not scale to zero. If possible, provision for high concurrency and throughput to reduce latency.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI class="py-0 my-0 prose-p:pt-0 prose-p:mb-2 prose-p:my-0 [&amp;amp;&amp;gt;p]:pt-0 [&amp;amp;&amp;gt;p]:mb-2 [&amp;amp;&amp;gt;p]:my-0"&gt;
&lt;P class="my-2 [&amp;amp;+p]:mt-4 [&amp;amp;_strong:has(+br)]:inline-block [&amp;amp;_strong:has(+br)]:pb-2"&gt;&lt;STRONG&gt;Batch Requests&lt;/STRONG&gt;: If your endpoint supports batching, maximize batch sizes to reduce per-request overhead and make more efficient use of resources.​&lt;/P&gt;
&lt;/LI&gt;
&lt;LI class="py-0 my-0 prose-p:pt-0 prose-p:mb-2 prose-p:my-0 [&amp;amp;&amp;gt;p]:pt-0 [&amp;amp;&amp;gt;p]:mb-2 [&amp;amp;&amp;gt;p]:my-0"&gt;
&lt;P class="my-2 [&amp;amp;+p]:mt-4 [&amp;amp;_strong:has(+br)]:inline-block [&amp;amp;_strong:has(+br)]:pb-2"&gt;&lt;STRONG&gt;Monitor and Benchmark&lt;/STRONG&gt;: Regularly profile the performance of both embedding generation and index building. Look for bottlenecks in network, serialization, or dimension mismatches.​&lt;/P&gt;
&lt;/LI&gt;
&lt;LI class="py-0 my-0 prose-p:pt-0 prose-p:mb-2 prose-p:my-0 [&amp;amp;&amp;gt;p]:pt-0 [&amp;amp;&amp;gt;p]:mb-2 [&amp;amp;&amp;gt;p]:my-0"&gt;
&lt;P class="my-2 [&amp;amp;+p]:mt-4 [&amp;amp;_strong:has(+br)]:inline-block [&amp;amp;_strong:has(+br)]:pb-2"&gt;&lt;STRONG&gt;Consider Edge Models or Hosting&lt;/STRONG&gt;: When feasible, host the embedding model closer to your data, perhaps within Databricks itself, so you have greater control and minimize network latency.​&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
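&lt;P&gt;A minimal sketch of the precompute-then-index route, assuming hypothetical catalog, table, endpoint, and index names. The vectors are written to a column of the source Delta table first, so the index build reads static data; the &lt;CODE&gt;create_delta_sync_index&lt;/CODE&gt; call (from the databricks-vectorsearch client) is shown commented out because it requires a live workspace.&lt;/P&gt;

```python
# Self-managed embeddings: the source table already holds a vector column,
# so index creation never calls the embedding endpoint.
# All names below are placeholders for illustration.
index_spec = {
    "endpoint_name": "my_vs_endpoint",                      # hypothetical
    "index_name": "main.docs.docs_index",                   # hypothetical
    "source_table_name": "main.docs.docs_with_embeddings",  # hypothetical
    "pipeline_type": "TRIGGERED",
    "primary_key": "id",
    "embedding_vector_column": "embedding",  # precomputed vectors
    "embedding_dimension": 1024,  # must match your model's output size
}

# In a workspace with the databricks-vectorsearch package installed:
# from databricks.vector_search.client import VectorSearchClient
# VectorSearchClient().create_delta_sync_index(**index_spec)
```

&lt;P&gt;Because the dimension is declared up front, a mismatch with the model's actual output surfaces when the vectors are written, rather than partway through a slow index build.&lt;/P&gt;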
&lt;P class="my-2 [&amp;amp;+p]:mt-4 [&amp;amp;_strong:has(+br)]:inline-block [&amp;amp;_strong:has(+br)]:pb-2"&gt;In summary, the main bottleneck is the extra latency introduced by the external Hugging Face endpoint, which is avoided by Databricks’ optimized, tightly integrated offering. Moving to a precomputed/self-managed embedding workflow and tuning your endpoint can dramatically improve performance.​&lt;/P&gt;</description>
      <pubDate>Thu, 06 Nov 2025 11:53:39 GMT</pubDate>
      <guid>https://community.databricks.com/t5/machine-learning/vector-index-creation-for-external-embedding-model-takes-a-lot/m-p/137947#M4411</guid>
      <dc:creator>mark_ott</dc:creator>
      <dc:date>2025-11-06T11:53:39Z</dc:date>
    </item>
  </channel>
</rss>

