Handling Over-Usage of Capacity in Databricks Jobs/Processes
02-17-2025 07:02 AM
Hi all,
Is there a tool or method in Databricks to ensure data integrity and stability when a job or process exceeds the allocated capacity? Specifically, I'm looking for ways to:
- Prevent failures or data loss due to resource overuse.
- Automatically scale or manage workloads efficiently.
- Get alerts or take preventive actions before hitting capacity limits.
1 REPLY
02-17-2025 07:26 AM
Hello @smanda88 -
For point 1, please see: https://docs.databricks.com/en/lakehouse-architecture/reliability/best-practices.html
For point 2, you can use autoscaling; please refer to: https://docs.databricks.com/en/lakehouse-architecture/cost-optimization/best-practices.html#2-dynami...
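As a rough illustration, here is a minimal sketch of creating a job whose cluster autoscales instead of running at a fixed size, using the Databricks SDK for Python (assuming the databricks-sdk package is installed and authentication is configured; the job name, Spark version, node type, worker counts, and notebook path are all placeholders):

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import compute, jobs

# Auth is read from the environment or ~/.databrickscfg
w = WorkspaceClient()

# Cluster spec that autoscales between 2 and 8 workers, so the job
# can absorb load spikes without being permanently over-provisioned.
# Spark version and node type below are placeholder values.
autoscaling_cluster = compute.ClusterSpec(
    spark_version="15.4.x-scala2.12",
    node_type_id="i3.xlarge",
    autoscale=compute.AutoScale(min_workers=2, max_workers=8),
)

job = w.jobs.create(
    name="autoscaling-example",  # hypothetical job name
    tasks=[
        jobs.Task(
            task_key="main",
            new_cluster=autoscaling_cluster,
            # Placeholder notebook path
            notebook_task=jobs.NotebookTask(
                notebook_path="/Workspace/Users/me/etl"
            ),
        )
    ],
)
print(f"Created job {job.job_id}")
```

The same autoscale block can also be set in the cluster UI or in a Jobs API JSON payload; the SDK version above is just one way to express it.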
For point 3, you can set up job notifications.
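For example, here is a hedged sketch of adding notifications to an existing job with the Databricks SDK for Python: an email on failure, plus a duration-based health rule that can flag capacity pressure before a hard failure. The job ID, email address, and 30-minute threshold are placeholders:

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

w = WorkspaceClient()

# Hypothetical existing job ID; replace with your own.
JOB_ID = 123456789

w.jobs.update(
    job_id=JOB_ID,
    new_settings=jobs.JobSettings(
        # Email the team when a run fails.
        email_notifications=jobs.JobEmailNotifications(
            on_failure=["team@example.com"],
        ),
        # Warn when a run exceeds an expected duration, which often
        # precedes resource exhaustion or capacity limits.
        health=jobs.JobsHealthRules(
            rules=[
                jobs.JobsHealthRule(
                    metric=jobs.JobsHealthMetric.RUN_DURATION_SECONDS,
                    op=jobs.JobsHealthOperator.GREATER_THAN,
                    value=1800,  # 30 minutes
                )
            ]
        ),
    ),
)
```

The same settings are available in the job's UI under Notifications, so no code is strictly required; the sketch just shows how they map onto the API.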

