Get Started Discussions

Handling Over-Usage of Capacity in Databricks Jobs/Processes

smanda88
New Contributor

Hi all,

Is there a tool or method in Databricks to ensure data integrity and stability when a job or process exceeds its allocated capacity? Specifically, I’m looking for ways to:

  1. Prevent failures or data loss due to resource overuse.
  2. Automatically scale or manage workloads efficiently.
  3. Get alerts or take preventive actions before hitting capacity limits.
1 Reply

Alberto_Umana
Databricks Employee

Hello @smanda88 -

For point 1 (preventing failures and data loss), see the reliability best practices: https://docs.databricks.com/en/lakehouse-architecture/reliability/best-practices.html
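As one concrete resilience lever (my illustration, not from the linked guide): configuring automatic retries on a job task lets a run survive transient resource failures. A minimal sketch of the relevant task fragment, with field names per the Databricks Jobs 2.1 API and placeholder values:

```python
# Illustrative Jobs API task fragment: automatic retries guard a run
# against transient resource failures (e.g. spot instance loss).
# Field names follow the Databricks Jobs 2.1 API; values are examples.
task_settings = {
    "task_key": "ingest",                # hypothetical task name
    "max_retries": 2,                    # re-run the task up to twice
    "min_retry_interval_millis": 60_000, # wait 1 min between attempts
    "retry_on_timeout": True,            # also retry if the task times out
}
```

Combined with idempotent writes (e.g. Delta MERGE instead of blind appends), retries keep data intact across restarts.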

For point 2, you can use autoscaling; see: https://docs.databricks.com/en/lakehouse-architecture/cost-optimization/best-practices.html#2-dynami...
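As a minimal sketch of what enabling autoscaling looks like in a cluster spec (field names per the Databricks Clusters API; the runtime version and node type are placeholders):

```python
# Illustrative cluster spec payload for the Databricks Clusters/Jobs API.
# The key piece for autoscaling is the "autoscale" object: Databricks
# adds and removes workers within these bounds based on load.
autoscaling_cluster = {
    "spark_version": "15.4.x-scala2.12",  # example runtime version
    "node_type_id": "i3.xlarge",          # example worker instance type
    "autoscale": {
        "min_workers": 2,  # floor: cluster shrinks to this when idle
        "max_workers": 8,  # ceiling: caps scale-out and therefore cost
    },
}
```

Setting a sensible `max_workers` is what keeps a runaway workload from consuming unbounded capacity.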

For point 3, you can set up job notifications.
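A sketch of the relevant job-settings fragment (field names per the Jobs 2.1 API; the address, duration threshold, and timeout are placeholders you would tune to your workload):

```python
# Illustrative Jobs API job settings: email alerts on failure, a health
# rule that fires when a run exceeds an expected duration (catching a
# stuck run before it burns capacity for hours), and a hard timeout as
# a last-resort safeguard. Values are examples only.
job_settings = {
    "name": "nightly-etl",
    "email_notifications": {
        "on_failure": ["team@example.com"],  # placeholder address
    },
    "health": {
        "rules": [
            {
                "metric": "RUN_DURATION_SECONDS",
                "op": "GREATER_THAN",
                "value": 3600,  # alert if the run passes 1 hour
            }
        ]
    },
    "timeout_seconds": 7200,  # cancel the run outright after 2 hours
}
```

The health rule gives you the early warning; the timeout is the preventive action if nobody intervenes.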

