Technical Blog
epandya
Databricks Employee


This is the first installment in a multi-part blog series on governing Databricks Apps as a platform admin. In this series, we cover everything from architecture and access control to cost management, networking, monitoring, and operational best practices.
In this first part, we focus on the execution model and the permission and authorization framework you need to understand before rolling out Apps in your organization. Stay tuned for the upcoming parts, where we will dive into resource configuration, cost governance, networking and security, and more.
- Peyman Nasirifard, Senior Solutions Architect, Databricks

Introduction

Databricks Apps lets teams build and deploy full-stack web applications (dashboards, internal tools, AI-powered interfaces) directly inside a Databricks workspace. For developers, that's exciting. For platform admins, it raises immediate questions: Who can deploy these? What can they access? How much will it cost? How do I keep it secure?

This post, and the series that follows, answers those questions. Whether you're rolling out Apps for the first time or tightening governance on an existing deployment, we'll walk you through the architecture, access controls, cost management, networking, and operational practices you need.

How Databricks Apps Work (The Admin View)

Before diving into governance, it helps to understand the execution model.

Apps run on Databricks serverless compute, so there are no clusters to configure or manage. You deploy source code (Python or Node.js), Databricks installs the dependencies, and the app gets a unique URL on the databricksapps.com domain. Supported frameworks include Streamlit, Dash, Gradio, Flask, and React.

Source code can live in a workspace folder or a Git repository (GitHub, GitLab, Bitbucket, and so on). Each deployment takes a snapshot of the source; auto-sync from Git is not currently supported, though it may be in the future.

From a billing perspective, apps have four states:

 

State       Accessible?   Billed?
---------   -----------   -------
Running     Yes           Yes
Stopped     No            No
Deploying   No            No
Crashed     No            No

The key takeaway: you only pay while an app is Running. Stopping idle apps is your primary cost lever.
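Since Running is the only billed state, a simple idle sweep is a natural cost control. Below is a minimal, hypothetical sketch of the selection logic; `AppInfo`, the `idle_hours` field, and the threshold are illustrative assumptions (how you measure idleness is up to you), and the actual stop call would go through the Databricks Python SDK or REST API.

```python
from dataclasses import dataclass

@dataclass
class AppInfo:
    name: str
    state: str           # "RUNNING", "STOPPED", "DEPLOYING", "CRASHED"
    idle_hours: float    # hours since last request (however you track it)

def apps_to_stop(apps, idle_threshold_hours=8.0):
    """Return the names of running apps idle longer than the threshold.

    Only RUNNING apps are billed, so these are the only candidates
    worth stopping.
    """
    return [
        a.name
        for a in apps
        if a.state == "RUNNING" and a.idle_hours >= idle_threshold_hours
    ]

apps = [
    AppInfo("sales-dashboard", "RUNNING", idle_hours=26.0),
    AppInfo("genie-chat", "RUNNING", idle_hours=0.5),
    AppInfo("old-prototype", "STOPPED", idle_hours=300.0),
]
print(apps_to_stop(apps))  # ['sales-dashboard']
# For each returned name you could then call the SDK's apps stop
# operation (e.g. WorkspaceClient().apps.stop) in a scheduled job.
```

A scheduled job running this sweep nightly keeps forgotten prototypes from accruing charges.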

Access Control: Who Can Do What

Databricks Apps uses a straightforward two-level permission model at the workspace level:

 

Permission   What It Allows
----------   -----------------------------------------------------------
CAN MANAGE   Edit, delete, configure settings, assign/revoke permissions
CAN USE      Run and interact with the app only

Permissions can be assigned to individual users, groups, or service principals. You can also grant CAN USE to all account users for broad internal access. There is no public or anonymous access. Every user must authenticate through your workspace's SSO.
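To automate assignments, you can build the access control list for the permissions API programmatically. The sketch below constructs the payload only; the field names follow the general shape of the Databricks permissions API, and both the endpoint path and the "account users" group name are assumptions to verify against your workspace's API documentation.

```python
def app_acl(manage_groups=(), use_groups=(), use_all_account_users=False):
    """Build an access_control_list payload for an app permissions call.

    Field names follow the general Databricks permissions API shape
    (assumed here); verify against your workspace's API reference.
    """
    acl = []
    for g in manage_groups:
        acl.append({"group_name": g, "permission_level": "CAN_MANAGE"})
    for g in use_groups:
        acl.append({"group_name": g, "permission_level": "CAN_USE"})
    if use_all_account_users:
        # "account users" is assumed to be the built-in group that
        # covers every user in the account.
        acl.append({"group_name": "account users",
                    "permission_level": "CAN_USE"})
    return {"access_control_list": acl}

payload = app_acl(manage_groups=["app-owners"], use_groups=["analysts"])
print(payload)
```

You would send this payload with an authenticated PATCH or PUT to the app's permissions endpoint, for example via the Databricks SDK or `requests`.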

The Dual Authorization Model

This is the most important architectural concept for admins to understand. Apps support two authorization modes that can be used independently or combined (recommended):

App Authorization (Machine-to-Machine)

Every app automatically gets a dedicated service principal when created. Databricks injects DATABRICKS_CLIENT_ID and DATABRICKS_CLIENT_SECRET into the app's runtime environment. The service principal's permissions determine what the app can do on its own: background jobs, writing logs, accessing shared resources. All users share the same service principal privileges; there is no per-user differentiation in this mode.

epandya_0-1774545842342.png
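Inside the app's runtime, the injected credentials are plain environment variables. A minimal sketch of reading them (in practice the Databricks Python SDK picks these up from the environment automatically, so explicit reads are rarely needed):

```python
import os

def app_service_principal_credentials():
    """Read the OAuth credentials Databricks injects into the app runtime.

    DATABRICKS_CLIENT_ID / DATABRICKS_CLIENT_SECRET identify the app's
    dedicated service principal; use them for shared, non-user-specific
    operations such as logging or background jobs.
    """
    client_id = os.environ["DATABRICKS_CLIENT_ID"]
    client_secret = os.environ["DATABRICKS_CLIENT_SECRET"]
    return client_id, client_secret
```

Because every user's request runs under this same service principal, nothing read here should be used to make per-user access decisions.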

 

User Authorization (User-to-Machine), Public Preview (as of March 25, 2026)

The app acts with the calling user's identity. Unity Catalog policies, including row-level filters, column masks, and table access control lists (ACLs), are enforced automatically. User tokens arrive via the x-forwarded-access-token HTTP header.

epandya_1-1774545842344.png
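Extracting the forwarded token in app code is a one-liner once you account for HTTP header names being case-insensitive. A framework-agnostic sketch (the helper name is illustrative; in Flask you would pass `request.headers`, in Dash/Streamlit the equivalent headers object):

```python
def forwarded_user_token(headers):
    """Extract the per-user OAuth token Databricks forwards to the app.

    HTTP header names are case-insensitive, so normalize before lookup.
    Returns None when the request carries no user token (for example,
    when the app runs with app authorization only).
    """
    normalized = {k.lower(): v for k, v in headers.items()}
    return normalized.get("x-forwarded-access-token")

token = forwarded_user_token({"X-Forwarded-Access-Token": "eyJ..."})
print(token)  # eyJ...
```

Pass the returned token to your SDK client so queries run as the calling user, letting Unity Catalog enforce that user's row filters and column masks.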

 

Apps must declare authorization scopes to limit API access, such as:

  • sql: SQL warehouse querying
  • dashboards.genie: Genie space management
  • files.files: File/directory management

If you don't select any scopes, Databricks assigns a default set that allows the app to retrieve basic user identity information:

  • iam.access-control:read
  • iam.current-user:read

These defaults are required to support user authorization functionality, but they don’t permit access to data or compute resources. Add additional scopes when you create or edit the app.

Databricks blocks access outside approved scopes, preventing privilege escalation. This is a critical security boundary.

Combined Mode

The recommended pattern for production apps: use app authorization for shared operations (such as logging and configuration) and user authorization for per-user data access. This gives you auditability at the user level while keeping shared operations running smoothly. Here are more detailed guidelines:

Use app authorization for shared operations such as: 

  • Writing logs or metrics to a shared table or volume.
  • Managing shared configuration or feature flags.
  • Calling external services with app-level credentials.
  • Background jobs or maintenance tasks not tied to a single user.

Benefits:

  • Stability & reliability: background tasks don’t break if a specific user loses access or leaves.
  • Simple permissions: you only grant the service principal the minimum rights on shared resources.
  • Less traffic in audit logs for noisy operations that don’t need per-user attribution.

Use user authorization for per-user data access such as: 

  • Querying Unity Catalog tables/volumes where access varies by user.
  • Using SQL warehouses, clusters, model serving endpoints, or Genie spaces where results depend on the user’s entitlements.
  • Any UI action where “who did what to which data” matters for governance.

Benefits:

  • True per-user governance: UC policies (row filters, column masks, ACLs) apply automatically. 
  • Least privilege: scopes plus UC permissions prevent the app from overreaching, even if misconfigured. 
  • User-level auditability: audit logs and downstream telemetry can attribute actions to the end user, not just the app SP.
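The split described above can be encoded as a small dispatch helper: shared operations run as the app's service principal, everything else runs as the calling user, with a fallback to the app identity when no user token is present. This is a hypothetical sketch; the operation names and `credential_for` helper are illustrative, not part of the Databricks API.

```python
# Operations that should always run under the app's service principal
# (illustrative names, not a Databricks API).
SHARED_OPERATIONS = {"write_logs", "update_feature_flags", "background_refresh"}

def credential_for(operation, request_headers):
    """Return ("app", None) for shared operations, ("user", token) otherwise.

    Falls back to the app identity when no user token is present so that
    background code paths keep working, mirroring combined-mode guidance.
    """
    if operation in SHARED_OPERATIONS:
        return ("app", None)
    headers = {k.lower(): v for k, v in request_headers.items()}
    token = headers.get("x-forwarded-access-token")
    if token is None:
        return ("app", None)
    return ("user", token)

print(credential_for("write_logs", {}))  # ('app', None)
print(credential_for("query_table",
                     {"x-forwarded-access-token": "tok"}))  # ('user', 'tok')
```

Centralizing this choice in one function keeps the app-versus-user decision auditable instead of scattered across request handlers.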

Admin Recommendation

Here are a few recommendations on app permissions:

  • Grant CAN MANAGE only to senior developers or app owners.
  • Grant CAN USE to end users and consumers.
  • Always prefer user authorization for data access so Unity Catalog governance applies.
  • Request the minimum necessary authorization scopes; e.g., don't grant sql if the app only reads files.

What We Covered

In this first installment, we explored the execution model behind Databricks Apps and the permission and authorization framework that governs them. The key takeaways:

  • Apps run on serverless compute and you only pay while an app is Running.
  • The two-level permission model (CAN MANAGE and CAN USE) controls who can deploy and who can consume.
  • The dual authorization model is the foundation of secure, auditable app governance. Use app authorization for shared operations and user authorization for per-user data access. Combining both modes lets you enforce Unity Catalog policies at the user level while keeping background operations stable.

With this foundation in place, you are ready to move on to the practical side of running Apps at scale.

What’s Next

In the next post (Part 2), we’ll shift from architecture to operations. We’ll cover how apps declare and connect to workspace resources (SQL warehouses, model serving endpoints, secrets, and more), how to structure your app.yaml for secure configuration, and how to track, control, and govern app costs using billing system tables and SQL alerts. Stay tuned.

And in Part 3, we’ll tackle networking and security (ingress controls, egress policies, private link), monitoring and observability, commands for day-to-day administration, platform limits to keep in mind, and a complete admin rollout checklist to tie it all together.