Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Streamlit Databricks App Compute Scaling

OmarE
New Contributor II

I have a Streamlit Databricks app and I'm looking to increase the compute resources. According to the documentation and the current settings, the app is limited to 2 vCPUs and 6 GB of memory. Is there a way to adjust these limits or add more resources? I have already added the DATABRICKS_CLUSTER_ID to my environment variables, but it doesn't seem to affect the compute resources.

1 REPLY

mark_ott
Databricks Employee

You can increase compute resources for your Streamlit Databricks app, but this requires explicitly configuring the compute size in the Databricks app management UI or via deployment configuration; environment variables like DATABRICKS_CLUSTER_ID alone do not change the resource limits for your app.

Adjusting Compute Size

Databricks apps have default resource limits of 2 vCPUs and 6 GB of memory, but you can select higher compute sizes for more demanding workloads. To increase these limits, follow these steps:

  • In your Databricks workspace, go to the Compute section, select your app, and choose Edit.

  • In the Configure step, select a larger Compute size from the provided dropdown, such as one offering up to 4 vCPUs and 12 GB of memory.

After you save your changes, your app will switch to the newly selected compute size once the update completes. The active compute size can also be viewed on your app's Overview tab.
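If you want to check the active configuration programmatically rather than in the UI, the Python databricks-sdk exposes an Apps API. A minimal sketch, assuming a recent SDK where `w.apps.get` is available; field names on the returned object (e.g. `compute_status`) can vary by SDK release, so verify against your installed version:

```python
# Sketch: inspect a Databricks app's configuration with databricks-sdk.
# Assumptions (not from the original post): the `w.apps` API group exists
# in your SDK version, and APP_NAME is set to your app's name.
import os


def describe_app(app: dict) -> str:
    """Summarize compute-related fields from an app description dict."""
    name = app.get("name", "<unknown>")
    state = app.get("compute_status", {}).get("state", "<unknown>")
    return f"app={name} compute_state={state}"


if __name__ == "__main__":
    from databricks.sdk import WorkspaceClient  # pip install databricks-sdk

    w = WorkspaceClient()  # reads DATABRICKS_HOST / DATABRICKS_TOKEN
    app = w.apps.get(name=os.environ["APP_NAME"]).as_dict()
    print(describe_app(app))
```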

Note on DATABRICKS_CLUSTER_ID

Setting DATABRICKS_CLUSTER_ID helps your app identify and connect to a specific cluster for running jobs or accessing data, but it does not alter the compute resources allocated to the Databricks app itself. The allocated resources are governed by the compute size you select during app setup or editing, not by environment variables.

Related Guidance

  • If you need more resources than the available compute sizes offer, consider breaking the workload into distributed jobs, or moving portions of it to Databricks notebooks or jobs, where cluster sizes are more flexible.

  • For persistent performance issues, review your app's code for memory leaks or inefficient data processing, as resource limits can also be hit due to suboptimal application design.
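The first bullet above can be sketched with the databricks-sdk Jobs API: submit a one-time run of a notebook on an existing cluster and wait for it, so the heavy step executes on cluster hardware rather than inside the app container. The notebook path and cluster ID below are placeholders; verify `SubmitTask`/`from_dict` against your installed SDK version:

```python
# Sketch: offload a heavy step from the app to a job run on an existing
# cluster via databricks-sdk. Notebook path and cluster id are
# placeholder values, not from the original thread.
import os


def build_task_payload(task_key: str, cluster_id: str, notebook_path: str) -> dict:
    """Assemble the one-time-run task description as a plain dict."""
    return {
        "task_key": task_key,
        "existing_cluster_id": cluster_id,
        "notebook_task": {"notebook_path": notebook_path},
    }


if __name__ == "__main__":
    from databricks.sdk import WorkspaceClient
    from databricks.sdk.service import jobs

    payload = build_task_payload(
        task_key="heavy_step",
        cluster_id=os.environ["DATABRICKS_CLUSTER_ID"],
        notebook_path="/Workspace/Users/me@example.com/heavy_notebook",
    )
    w = WorkspaceClient()
    run = w.jobs.submit(
        run_name="app-offload",
        tasks=[jobs.SubmitTask.from_dict(payload)],
    ).result()  # blocks until the run finishes
    print(run.state)
```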

In summary, you increase your Databricks app's compute resources by editing the app's configuration and selecting a higher compute size; environment variables alone will not affect these limits.
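On the earlier point about inefficient data processing: a common pattern for staying under the app's memory cap is to aggregate over fixed-size batches instead of materializing an entire dataset at once. A minimal, library-free sketch (in a real app the rows would come from a table or file in your workspace):

```python
# Sketch: stream data in fixed-size batches so only one batch is held in
# memory at a time, instead of loading everything into the app process.
from typing import Iterable, Iterator


def chunked(rows: Iterable[int], size: int) -> Iterator[list[int]]:
    """Yield fixed-size batches from an iterable of rows."""
    batch: list[int] = []
    for row in rows:
        batch.append(row)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:  # final partial batch
        yield batch


def running_total(rows: Iterable[int], size: int = 1000) -> int:
    """Aggregate incrementally over batches rather than all rows at once."""
    total = 0
    for batch in chunked(rows, size):
        total += sum(batch)
    return total
```

In a Streamlit app, wrapping expensive loads in `st.cache_data` similarly avoids recomputing results on every rerun, which helps keep CPU and memory within the configured compute size.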