This is the first installment in a multi-part blog series on governing Databricks Apps as a platform admin. In this series, we cover everything from architecture and access control to cost management, networking, monitoring, and operational best practices. In this first part, we focus on the execution model and the permission and authorization framework you need to understand before rolling out Apps in your organization. Stay tuned for the upcoming parts, where we will dive into resource configuration, cost governance, networking and security, and more.
- Peyman Nasirifard, Senior Solutions Architect, Databricks
Databricks Apps lets teams build and deploy full-stack web applications (dashboards, internal tools, AI-powered interfaces) directly inside a Databricks workspace. For developers, that's exciting. For platform admins, it raises immediate questions: Who can deploy these? What can they access? How much will it cost? How do I keep it secure?
This post, and the series that follows, answers those questions. Whether you're rolling out Apps for the first time or tightening governance on an existing deployment, we'll walk you through the architecture, access controls, cost management, networking, and operational practices you need.
Before diving into governance, it helps to understand the execution model.
Apps run on Databricks serverless compute, so there are no clusters to configure or manage. You deploy source code (Python or Node.js), Databricks builds the dependencies, and the app gets a unique URL on the databricksapps.com domain. Supported frameworks include Streamlit, Dash, Gradio, Flask, and React.
Source code can live in a workspace folder or in a Git repository (GitHub, GitLab, Bitbucket). Each deployment currently takes a snapshot of the source; auto-sync may be supported in the future.
From a billing perspective, apps have four states:
| State | Accessible? | Billed? |
|---|---|---|
| Running | Yes | Yes |
| Stopped | No | No |
| Deploying | No | No |
| Crashed | No | No |
The key takeaway: you only pay while an app is Running. Stopping idle apps is your primary cost lever.
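Since only Running apps accrue cost, a simple governance job can flag idle apps as candidates to stop. Here is a minimal sketch in pure Python; in practice you would pull app state and usage from the Databricks SDK or CLI (for example, `databricks apps list`), and the `AppInfo` record and idle threshold below are our own illustrative assumptions:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AppInfo:
    name: str
    state: str          # "RUNNING", "STOPPED", "DEPLOYING", or "CRASHED"
    last_used: datetime  # hypothetical last-access timestamp

def idle_running_apps(apps, max_idle=timedelta(days=7), now=None):
    """Return names of RUNNING (i.e., billable) apps idle past the threshold."""
    now = now or datetime.now(timezone.utc)
    return [a.name for a in apps
            if a.state == "RUNNING" and now - a.last_used > max_idle]

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
apps = [
    AppInfo("dash-prod", "RUNNING", now - timedelta(days=1)),
    AppInfo("old-demo", "RUNNING", now - timedelta(days=30)),
    AppInfo("crashed-etl-ui", "CRASHED", now - timedelta(days=30)),
]
print(idle_running_apps(apps, now=now))  # ['old-demo']
```

Only `old-demo` is both billable (Running) and idle; the crashed app costs nothing, so there is nothing to stop.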
Databricks Apps uses a straightforward two-level permission model at the workspace level:
| Permission | What It Allows |
|---|---|
| CAN MANAGE | Edit, delete, configure settings, assign/revoke permissions |
| CAN USE | Run and interact with the app only |
Permissions can be assigned to individual users, groups, or service principals. You can also grant CAN USE to all account users for broad internal access. There is no public or anonymous access. Every user must authenticate through your workspace's SSO.
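The model composes simply: CAN MANAGE implies everything CAN USE allows, plus administration. A toy sketch of the access check, assuming only these two levels (the action names here are our own shorthand for the table above):

```python
# Two-level permission model: CAN MANAGE is a strict superset of CAN USE.
PERMISSIONS = {
    "CAN USE": {"run"},
    "CAN MANAGE": {"run", "edit", "delete", "configure", "grant"},
}

def is_allowed(permission: str, action: str) -> bool:
    """Check whether a permission level permits a given action."""
    return action in PERMISSIONS.get(permission, set())

print(is_allowed("CAN USE", "run"))     # True
print(is_allowed("CAN USE", "delete"))  # False
```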
Authorization is the most important architectural concept for admins to understand. Apps support two authorization modes that can be used independently or, as recommended, combined:
App Authorization (Machine-to-Machine)
Every app automatically gets a dedicated service principal when created. Databricks injects DATABRICKS_CLIENT_ID and DATABRICKS_CLIENT_SECRET into the app's runtime environment. The service principal's permissions determine what the app can do on its own: background jobs, writing logs, accessing shared resources. All users share the same service principal privileges; there is no per-user differentiation in this mode.
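Inside the app's runtime, these credentials are plain environment variables; the Databricks SDK picks them up automatically, or you can read them yourself. A minimal sketch (the fallback placeholder values are hypothetical, for illustration only):

```python
import os

# Databricks injects these into every app's runtime environment.
client_id = os.environ.get("DATABRICKS_CLIENT_ID", "<app-service-principal-id>")
client_secret = os.environ.get("DATABRICKS_CLIENT_SECRET", "<app-service-principal-secret>")

print(f"App runs as service principal: {client_id}")
```

Whatever this service principal is granted, every user of the app effectively shares; size its privileges accordingly.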
User Authorization (User-to-Machine), Public Preview (as of March 25, 2026)
The app acts with the calling user's identity. Unity Catalog policies, including row-level filters, column masks, and table access control lists (ACLs), are enforced automatically. User tokens arrive via the x-forwarded-access-token HTTP header.
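In user authorization mode, each request carries the caller's token in the `x-forwarded-access-token` header. A minimal, framework-agnostic sketch (the helper name is ours; in Flask you would read `request.headers`):

```python
def get_user_token(headers: dict) -> "str | None":
    """Extract the forwarded Databricks user token, if present.

    Lookup is case-insensitive, as HTTP header names are.
    """
    for name, value in headers.items():
        if name.lower() == "x-forwarded-access-token":
            return value
    return None

# Example request headers as a Databricks App might receive them:
headers = {
    "X-Forwarded-Access-Token": "eyJ...user-token...",
    "Host": "myapp.databricksapps.com",
}
print(get_user_token(headers) is not None)  # True
```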
Apps must declare authorization scopes that limit which APIs they can call on the user's behalf. If you don't select any scopes, Databricks assigns a default set that only allows the app to retrieve basic user identity information. These defaults are required for user authorization to function, but they don't permit access to data or compute resources; add further scopes when you create or edit the app.
Databricks blocks access outside approved scopes, preventing privilege escalation. This is a critical security boundary.
Combined Mode
The recommended pattern for production apps: use app authorization for shared operations (such as for logging and configuration) and user authorization for per-user data access. This gives you auditability at the user level while keeping shared operations running smoothly. Here are more detailed guidelines:
Use app authorization for shared operations: background jobs, writing logs, and reading shared configuration. Because every request runs under the same service principal, behavior is predictable and each shared resource needs only a single grant.
Use user authorization for per-user data access: queries against Unity Catalog tables where row-level filters, column masks, and ACLs must apply to the individual caller. This gives you per-user auditability and ensures users never see data their own permissions don't allow.
Two recommendations on app permissions: keep the app's service principal scoped to the minimum it needs for shared operations, and prefer granting CAN USE to groups rather than individual users so access reviews stay manageable.
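The combined-mode routing decision can be sketched as a small helper: per-user data access uses the forwarded user token when present, while shared operations fall back to the app's injected service principal. The helper name and return shape are our own; with the real SDK you would construct a client per identity:

```python
import os

def choose_identity(headers: dict) -> dict:
    """Pick credentials for a request in combined mode (illustrative only).

    User authorization: the forwarded token identifies the caller, so Unity
    Catalog policies apply per user. App authorization: fall back to the
    service principal credentials Databricks injects into the environment.
    """
    user_token = headers.get("x-forwarded-access-token")
    if user_token:
        return {"mode": "user", "token": user_token}
    return {
        "mode": "app",
        "client_id": os.environ.get("DATABRICKS_CLIENT_ID", "<injected>"),
    }

print(choose_identity({"x-forwarded-access-token": "eyJ..."})["mode"])  # user
print(choose_identity({})["mode"])  # app
```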
In this first installment, we explored the execution model behind Databricks Apps and the permission and authorization framework that governs them. The key takeaways: apps run on serverless compute and bill only while in the Running state; access is governed by a two-level model (CAN MANAGE and CAN USE) with no anonymous access; every app gets its own service principal for machine-to-machine authorization; and user authorization enforces Unity Catalog policies per caller, with the combined mode recommended for production.
With this foundation in place, you are ready to move on to the practical side of running Apps at scale.
In the next post (Part 2), we’ll shift from architecture to operations. We’ll cover how apps declare and connect to workspace resources (SQL warehouses, model serving endpoints, secrets, and more), how to structure your app.yaml for secure configuration, and how to track, control, and govern app costs using billing system tables and SQL alerts. Stay tuned.
And in Part 3, we’ll tackle networking and security (ingress controls, egress policies, private link), monitoring and observability, commands for day-to-day administration, platform limits to keep in mind, and a complete admin rollout checklist to tie it all together.