I am encountering multiple issues in our Databricks environment and would appreciate guidance or best-practice recommendations for each. Details below:
1. [MaxSparkContextsExceeded] Too many execution contexts are open right now (Limit 150)
Error:
[MaxSparkContextsExceeded] Too many execution contexts are open right now. (Limit set currently to 150)
Questions:
What are the common causes of hitting this limit of 150 open execution contexts?
How can we inspect which jobs or notebooks are holding contexts open?
Are there recommended cleanup patterns or cluster settings? (A sketch of my current approach follows these questions.)
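For reference, here is a rough sketch of the direction I've been exploring for diagnosis and cleanup. Host, token, and the IDs are placeholders; I'm assuming the Jobs `runs/list` endpoint (to find active runs on the cluster) and the Command Execution 1.2 `contexts/destroy` endpoint (to release contexts we created ourselves) are the right tools, since I'm not aware of a public endpoint that enumerates all open contexts:

```python
# Sketch: list active job runs attached to a cluster (candidate context
# holders), and explicitly destroy a known execution context.
# HOST, TOKEN, and the IDs below are placeholders, not real values.
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"
TOKEN = "<personal-access-token>"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def active_runs_on_cluster(cluster_id: str) -> list:
    """Return active job runs whose cluster_instance matches cluster_id."""
    resp = requests.get(
        f"{HOST}/api/2.1/jobs/runs/list",
        headers=HEADERS,
        params={"active_only": "true", "limit": 25},
    )
    resp.raise_for_status()
    runs = resp.json().get("runs", [])
    return [
        r for r in runs
        if r.get("cluster_instance", {}).get("cluster_id") == cluster_id
    ]

def destroy_context(cluster_id: str, context_id: str) -> None:
    """Release one execution context via the Command Execution 1.2 API."""
    resp = requests.post(
        f"{HOST}/api/1.2/contexts/destroy",
        headers=HEADERS,
        json={"clusterId": cluster_id, "contextId": context_id},
    )
    resp.raise_for_status()

for run in active_runs_on_cluster("<cluster-id>"):
    print(run.get("run_id"), run.get("run_name"), run.get("state"))
```

Is this roughly the right approach, or is there a better way to see who is holding contexts?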
2. 20 Concurrent Databricks Notebooks Triggered
We trigger ~20 notebooks at the same time on the same cluster.
Questions:
Why does triggering this many notebooks concurrently cause problems, e.g. is it what exhausts the execution context limit in issue 1?
How can we diagnose contention on the shared cluster?
What are best practices for throttling or queueing these runs? (A sketch follows these questions.)
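For illustration, here is a minimal sketch of one bounded-concurrency pattern we're considering (which may or may not match how the notebooks are triggered today). It assumes the orchestration runs inside a driver notebook, so `dbutils` is available; the child notebook paths and the `max_workers` cap are placeholders to tune:

```python
# Sketch: run ~20 child notebooks with a bounded thread pool so that only
# a few execution contexts are live at any moment. Runs inside a Databricks
# notebook, where dbutils is provided by the runtime.
from concurrent.futures import ThreadPoolExecutor, as_completed

notebook_paths = [f"/Workspace/jobs/child_{i}" for i in range(20)]  # placeholders

def run_one(path: str) -> str:
    # Positional args: path, timeout_seconds (0 = no timeout), arguments dict.
    return dbutils.notebook.run(path, 0, {})

results = {}
with ThreadPoolExecutor(max_workers=5) as pool:  # cap well below the context limit
    futures = {pool.submit(run_one, p): p for p in notebook_paths}
    for fut in as_completed(futures):
        path = futures[fut]
        try:
            results[path] = fut.result()
        except Exception as exc:
            results[path] = f"FAILED: {exc}"
```

Is bounding `max_workers` like this a reasonable way to keep concurrent runs under the context limit, or would a Jobs workflow with task-level concurrency be the better fit?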
3. Databricks API 10k Character Limit
We’re hitting a request size restriction of roughly 10,000 characters when interacting with the Databricks REST API.
Questions:
Why does this limit exist, and which endpoints or request fields enforce it?
How can we diagnose which part of our request is overflowing?
What are best practices for passing large payloads? (A sketch of a possible workaround follows these questions.)
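As a possible workaround I've been sketching the pattern below: stage the large payload in DBFS and pass only its path as the job parameter, so the API request itself stays small. The host, token, job ID, and paths are placeholders, and I'm assuming `dbfs/put` (which itself caps base64-encoded contents at about 1 MB per call) and `jobs/run-now` are appropriate endpoints here:

```python
# Sketch: stage a large JSON payload in DBFS, then trigger a job passing
# only the small path string as a parameter. Placeholders throughout.
import base64
import json
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"
TOKEN = "<personal-access-token>"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

large_payload = json.dumps({"rows": list(range(10_000))})  # too big to inline

# 1. Stage the payload in DBFS (base64 contents, ~1 MB cap per call).
staged_path = "/tmp/run_inputs/payload.json"
requests.post(
    f"{HOST}/api/2.0/dbfs/put",
    headers=HEADERS,
    json={
        "path": staged_path,
        "contents": base64.b64encode(large_payload.encode()).decode(),
        "overwrite": True,
    },
).raise_for_status()

# 2. Trigger the job, passing only the pointer.
requests.post(
    f"{HOST}/api/2.1/jobs/run-now",
    headers=HEADERS,
    json={
        "job_id": 123,  # placeholder
        "notebook_params": {"payload_path": staged_path},
    },
).raise_for_status()

# 3. Inside the notebook, resolve the pointer and load the payload:
#    path = dbutils.widgets.get("payload_path")
#    payload = json.loads(open("/dbfs" + path).read())
```

Does this indirection approach align with Databricks' recommendations, or is there a supported way to raise the limit?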
Request
For each of the three issues above, I'm looking for:
An explanation of why these errors happen
How to diagnose the root causes
Recommended best practices for preventing them
Any guidance or references to Databricks documentation would be very helpful.