<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Prakash Hinduja Switzerland (Swiss) How can I manage spending while optimizing compute resources? in Generative AI</title>
    <link>https://community.databricks.com/t5/generative-ai/prakash-hinduja-switzerland-swiss-how-can-i-manage-spending/m-p/127653#M1088</link>
    <description>&lt;P&gt;Hi, I am Prakash Hinduja, a financial strategist born in Amritsar (India) and now living in Geneva, Switzerland.&lt;/P&gt;&lt;P&gt;I’m looking for advice on how to better manage costs in Databricks while keeping performance efficient. If you’ve found effective ways to optimize compute usage, such as cluster configurations, autoscaling, or job scheduling, I’d really appreciate your suggestions or lessons learned. Thanks in advance for sharing what’s worked for you!&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Prakash Hinduja, Geneva, Switzerland&lt;/P&gt;</description>
    <pubDate>Thu, 07 Aug 2025 10:19:38 GMT</pubDate>
    <dc:creator>prakashhinduja1</dc:creator>
    <dc:date>2025-08-07T10:19:38Z</dc:date>
    <item>
      <title>Prakash Hinduja Switzerland (Swiss) How can I manage spending while optimizing compute resources?</title>
      <link>https://community.databricks.com/t5/generative-ai/prakash-hinduja-switzerland-swiss-how-can-i-manage-spending/m-p/127653#M1088</link>
      <description>&lt;P&gt;Hi, I am Prakash Hinduja, a financial strategist born in Amritsar (India) and now living in Geneva, Switzerland.&lt;/P&gt;&lt;P&gt;I’m looking for advice on how to better manage costs in Databricks while keeping performance efficient. If you’ve found effective ways to optimize compute usage, such as cluster configurations, autoscaling, or job scheduling, I’d really appreciate your suggestions or lessons learned. Thanks in advance for sharing what’s worked for you!&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Prakash Hinduja, Geneva, Switzerland&lt;/P&gt;</description>
      <pubDate>Thu, 07 Aug 2025 10:19:38 GMT</pubDate>
      <guid>https://community.databricks.com/t5/generative-ai/prakash-hinduja-switzerland-swiss-how-can-i-manage-spending/m-p/127653#M1088</guid>
      <dc:creator>prakashhinduja1</dc:creator>
      <dc:date>2025-08-07T10:19:38Z</dc:date>
    </item>
    <item>
      <title>Re: Prakash Hinduja Switzerland (Swiss) How can I manage spending while optimizing compute resources</title>
      <link>https://community.databricks.com/t5/generative-ai/prakash-hinduja-switzerland-swiss-how-can-i-manage-spending/m-p/133662#M1185</link>
      <description>&lt;P&gt;To optimize costs in Databricks while maintaining strong performance, combine careful cluster configuration, autoscaling, disciplined job scheduling, and robust monitoring. Together, these practices keep Databricks budgets lean without compromising productivity or analytical throughput.&lt;/P&gt;
&lt;H2&gt;Cluster Configuration Tips&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;P&gt;Right-size compute clusters for their actual workload: avoid over-provisioning by starting small and letting clusters scale up only when demand increases.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;Select instance types tailored to the workload, for example memory-optimized nodes for ETL/ML tasks and general-purpose compute for lighter jobs.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;Use spot or preemptible instances for fault-tolerant jobs; these typically cost substantially less than on-demand nodes.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;Periodically review cluster node types against the latest cloud VM offerings to keep price/performance current.&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
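&lt;P&gt;As a rough sketch, the points above map onto a Databricks Clusters API payload along these lines; the node type, bid percentage, and other values are illustrative examples, not recommendations:&lt;/P&gt;

```python
# Illustrative Databricks Clusters API payload (example values).
# The driver stays on-demand (first_on_demand=1) while workers use
# spot capacity with on-demand fallback, so a spot reclamation
# cannot take out the driver.
spot_cluster_spec = {
    "cluster_name": "etl-right-sized",
    "spark_version": "14.3.x-scala2.12",
    "node_type_id": "r5.xlarge",        # memory-optimized for ETL/ML work
    "num_workers": 2,                   # start small; grow only on demand
    "aws_attributes": {
        "first_on_demand": 1,
        "availability": "SPOT_WITH_FALLBACK",
        "spot_bid_price_percent": 100,
    },
    "autotermination_minutes": 20,      # shut down idle compute automatically
}
```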
&lt;H2&gt;Autoscaling Tactics&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;P&gt;Enable autoscaling so clusters adjust their worker count to real-time usage, scaling up during peak load and shrinking when demand is minimal.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;Fine-tune autoscaling thresholds: keep the minimum worker count low for development clusters and use short auto-termination windows (typically 15 to 30 minutes) to eliminate the cost of idle resources.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;Use predictive autoscaling where available, letting historical and runtime metrics anticipate surges and pre-provision capacity.&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
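&lt;P&gt;In cluster-spec terms, the autoscaling and auto-termination settings above look roughly like this (worker counts and node type are illustrative):&lt;/P&gt;

```python
# Illustrative autoscaling settings for a cluster spec (example values).
# min_workers floors dev-cluster size, max_workers caps peak spend,
# and a short auto-termination window removes idle-cluster cost.
autoscaling_spec = {
    "cluster_name": "dev-autoscaling",
    "spark_version": "14.3.x-scala2.12",
    "node_type_id": "m5.large",
    "autoscale": {"min_workers": 1, "max_workers": 8},
    "autotermination_minutes": 15,   # within the 15-30 minute range above
}
```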
&lt;H2&gt;Scheduling and Job Management&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;P&gt;Schedule non-urgent or heavy jobs during off-peak hours to benefit from lower resource contention and, depending on your cloud pricing, reduced costs.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;Terminate clusters once jobs complete, and shut them down over nights and weekends when unused, to cut unnecessary spend.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;Use dedicated job clusters for each job run rather than all-purpose clusters; this gives each run optimized, ephemeral compute and bills at job-compute rather than all-purpose rates.&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
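&lt;P&gt;A minimal sketch of these two ideas together, as a Databricks Jobs API payload: the task runs on an ephemeral job cluster and the cron schedule pushes heavy work off-peak. The job name, notebook path, and times are hypothetical examples:&lt;/P&gt;

```python
# Illustrative Jobs API payload (example values). The task gets a
# fresh job cluster that exists only for the duration of the run,
# and the cron expression schedules it off-peak at 02:30 local time.
nightly_job = {
    "name": "nightly-etl",
    "tasks": [
        {
            "task_key": "transform",
            "notebook_task": {"notebook_path": "/Jobs/nightly_etl"},
            "new_cluster": {              # ephemeral job cluster, not all-purpose
                "spark_version": "14.3.x-scala2.12",
                "node_type_id": "r5.xlarge",
                "num_workers": 4,
            },
        }
    ],
    "schedule": {
        "quartz_cron_expression": "0 30 2 * * ?",   # daily at 02:30
        "timezone_id": "Europe/Zurich",
    },
}
```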
&lt;H2&gt;Monitoring and Best Practices&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;P&gt;Monitor cluster, job, and resource consumption closely using Databricks’ built-in system tables or external tools, so costs can be analyzed by project, team, or department.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;Tag clusters and resources for granular cost allocation, enabling precise financial tracking and accountability across business units.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;Set up budget alerts and usage reports so you are notified proactively when spending exceeds predefined thresholds.&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
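&lt;P&gt;For example, tagging plus the billing system table combine like this. The tag keys/values are hypothetical, and the query is a sketch against the system.billing.usage table; check the current schema of that table in your workspace before relying on specific column names:&lt;/P&gt;

```python
# Illustrative cost-allocation pieces (example tag keys and values).
# custom_tags is a standard cluster-spec field; the SQL sketch rolls
# usage up per team tag from the system.billing.usage table.
cost_tags = {"custom_tags": {"team": "data-platform", "project": "genai-poc"}}

usage_by_team_sql = """
SELECT custom_tags['team']  AS team,
       sku_name,
       SUM(usage_quantity)  AS dbus
FROM system.billing.usage
GROUP BY custom_tags['team'], sku_name
ORDER BY dbus DESC
"""
```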
&lt;H2&gt;Data Storage and Query Performance&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;P&gt;Compress and prune data aggressively, use Delta Lake, and optimize partitioning and Z-ordering to reduce data scanned, and therefore compute cost, for queries and ETL jobs.&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
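&lt;P&gt;Concretely, the Delta Lake maintenance side of this is a pair of statements like the following (the table name and Z-order column are hypothetical examples):&lt;/P&gt;

```python
# Illustrative Delta Lake maintenance statements (hypothetical table).
# OPTIMIZE compacts small files and ZORDER BY co-locates rows on a
# frequently filtered column, so queries scan fewer files and burn
# fewer DBUs; VACUUM cleans up files older than the retention window.
optimize_sql = "OPTIMIZE sales.events ZORDER BY (event_date)"
vacuum_sql = "VACUUM sales.events RETAIN 168 HOURS"   # 7-day retention
```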
&lt;P&gt;Organizations applying these strategies have reported substantial cost reductions, in some cases on the order of 40–60%, while preserving or even improving performance and team agility.&lt;/P&gt;
&lt;P&gt;Specialized use cases or unusual workload spikes may warrant additional configuration or custom monitoring, but for most teams these concrete steps deliver rapid results in both savings and efficiency.&lt;/P&gt;</description>
      <pubDate>Fri, 03 Oct 2025 11:05:29 GMT</pubDate>
      <guid>https://community.databricks.com/t5/generative-ai/prakash-hinduja-switzerland-swiss-how-can-i-manage-spending/m-p/133662#M1185</guid>
      <dc:creator>mark_ott</dc:creator>
      <dc:date>2025-10-03T11:05:29Z</dc:date>
    </item>
  </channel>
</rss>

