You can increase compute resources for your Streamlit Databricks app, but this requires explicitly configuring the compute size in the Databricks app management UI or via deployment configuration; environment variables like DATABRICKS_CLUSTER_ID alone do not change the resource limits for your app.
Adjusting Compute Size
Databricks apps have default resource limits of 2 vCPUs and 6 GB of memory, but you can select higher compute sizes for more demanding workloads. To increase these limits, follow these steps:
- When creating or editing your app in Databricks, go to the Compute section, select your app, and choose Edit.
- In the Configure step, select a larger Compute size from the provided dropdown, such as one offering up to 4 vCPUs and 12 GB of memory.
After saving your changes, your app transitions to the newly selected compute size once the update completes. The active compute size is also shown on your app's Overview tab.
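If you want to confirm what your app actually sees at runtime after the change, a small diagnostic snippet can help. This is only a sketch; it assumes psutil is listed in your app's requirements, and inside a container the reported values may reflect the host rather than the app's own limits, so treat them as a rough sanity check.

```python
import os

import psutil
import streamlit as st

# Show the CPU and memory visible to the app process. Inside a container
# these values may reflect the host rather than the app's own limits, so
# treat them as a rough sanity check only.
st.subheader("Runtime resources (diagnostic)")
st.write(f"CPU count reported by the OS: {os.cpu_count()}")

mem = psutil.virtual_memory()
st.write(f"Total memory visible: {mem.total / 1024 ** 3:.1f} GiB")
st.write(f"Available memory: {mem.available / 1024 ** 3:.1f} GiB")
```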
Note on DATABRICKS_CLUSTER_ID
Setting DATABRICKS_CLUSTER_ID helps your app identify and connect to a specific cluster for running jobs or accessing data, but it does not alter the compute resources allocated to the Databricks app itself. The allocated resources are governed by the compute size you select when creating or editing the app, not by environment variables.
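For context, DATABRICKS_CLUSTER_ID is typically consumed by client libraries such as Databricks Connect to decide where to send Spark work. Here is a minimal sketch, assuming databricks-connect is installed and that DATABRICKS_HOST, authentication, and DATABRICKS_CLUSTER_ID are all set in the environment; the sample table name is only an illustration.

```python
from databricks.connect import DatabricksSession

# Databricks Connect reads DATABRICKS_HOST, auth credentials, and
# DATABRICKS_CLUSTER_ID from the environment: the cluster ID selects WHICH
# cluster executes the Spark work, it does not resize the app container.
spark = DatabricksSession.builder.getOrCreate()

# The heavy scan/aggregate runs on the remote cluster; only the small
# aggregated result comes back into the app's limited memory.
df = (
    spark.table("samples.nyctaxi.trips")  # example table, adjust as needed
    .groupBy("pickup_zip")
    .count()
    .limit(10)
)
print(df.toPandas())
```

In other words, the variable controls where remote work runs, not how large the app's own container is.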
Related Guidance
- If you need more resources than the available compute sizes offer, consider breaking workloads into distributed jobs or moving heavy portions of your workload to Databricks notebooks or jobs, where cluster sizes are more flexible (see the job-trigger sketch after this list).
- For persistent performance issues, review your app's code for memory leaks or inefficient data processing; resource limits are often hit because of suboptimal application design, such as re-loading large datasets on every Streamlit rerun (see the caching sketch after this list).
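As a concrete illustration of the first point, a Streamlit app can trigger a pre-created Databricks job and let a proper cluster do the heavy processing. This is a sketch, not a prescribed pattern: HEAVY_JOB_ID is a hypothetical environment variable for a job you would have created separately, and it assumes the databricks-sdk package is available and that the app's credentials are permitted to run the job.

```python
import os

import streamlit as st
from databricks.sdk import WorkspaceClient

# HEAVY_JOB_ID is a hypothetical env var pointing at a job you created
# separately; the job runs on its own cluster, not inside the app container.
HEAVY_JOB_ID = int(os.environ["HEAVY_JOB_ID"])

# In a Databricks app, WorkspaceClient() typically picks up credentials
# from the environment the platform injects.
w = WorkspaceClient()

if st.button("Run heavy processing as a Databricks job"):
    # run_now returns a waiter; result() blocks until the run finishes.
    run = w.jobs.run_now(job_id=HEAVY_JOB_ID).result()
    st.success(f"Run {run.run_id} finished: {run.state.result_state}")
```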
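For the second point, a common source of memory pressure in Streamlit apps is re-fetching and re-holding data on every rerun. Below is a sketch of pushing aggregation down to a SQL warehouse and caching the small result; it assumes the databricks-sql-connector package and the environment variable names shown here, which you should adapt to however your app exposes its connection details.

```python
import os

import pandas as pd
import streamlit as st
from databricks import sql

# Cache the query result so every widget interaction (which reruns the whole
# script) does not re-fetch and re-hold another copy in memory.
@st.cache_data(ttl=600, max_entries=4)
def load_summary(query: str) -> pd.DataFrame:
    # Assumed env var names for the SQL warehouse connection; adapt them to
    # your app's actual configuration.
    with sql.connect(
        server_hostname=os.environ["DATABRICKS_SERVER_HOSTNAME"],
        http_path=os.environ["DATABRICKS_HTTP_PATH"],
        access_token=os.environ["DATABRICKS_TOKEN"],
    ) as conn:
        with conn.cursor() as cursor:
            cursor.execute(query)
            # Fetch only the aggregated rows, not the raw table.
            return cursor.fetchall_arrow().to_pandas()

# Push the heavy GROUP BY down to the warehouse so the app only holds a few
# rows; the table name is just an illustration.
df = load_summary(
    "SELECT pickup_zip, COUNT(*) AS trips "
    "FROM samples.nyctaxi.trips GROUP BY pickup_zip"
)
st.dataframe(df)
```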
In summary, increase your Databricks app's compute resources by editing the app's configuration and selecting a higher compute size; environment variables alone will not affect these limits.