
Fine-grained control of volumes

ossinova
Contributor II

Is it possible to provide fine-grained control (folder level/file level) for a given volume?

I have two SCIM-integrated groups that have READ VOLUME access granted at the catalog level, but the two groups need different permissions at a lower level, preferably at the folder or file level within the volume.

Volume:

(Top level)

  • landing/

(Inner level)

    • landing/PDF (only group 1 needs access)
    • landing/CSV (only group 2 needs access)

Is it possible to achieve this without having to mount the inner levels as top-level volumes or create X schemas? Any recommendations are highly appreciated.

 

3 REPLIES

Kaniz
Community Manager

Hi @ossinova, fine-grained access control is essential when you need more nuanced permissions than traditional role-based access control (RBAC) provides.

Let’s explore how you can achieve this within Azure Data Lake Storage Gen2.

  1. Role-Based Access Control (RBAC):

    • RBAC defines specific user roles and assigns permissions to each role. However, it might become complex when dealing with many roles and expanding data sources.
    • RBAC provides coarse-grained access, such as read or write access to all data in a storage account or container.
    • For example, you can assign roles like:
      • Storage Blob Data Owner: Full access to Blob storage containers and data.
      • Storage Blob Data Contributor: Read, write, and delete access to Blob storage containers and blobs.
      • Storage Blob Data Reader: Read and list Blob storage containers and blobs.
  2. Attribute-Based Access Control (ABAC):

    • ABAC builds on RBAC by adding conditions based on attributes.
    • You can refine RBAC role assignments by considering specific attributes (e.g., tags) in the context of actions.
    • For instance, you can grant read or write access to data objects with a specific tag.
  3. Access Control Lists (ACLs):

    • ACLs provide fine-grained access control at the folder and file level.
    • You can assign permissions to specific directories and files.
    • For example, you can grant write access to a specific directory or file.
    • ACLs are particularly useful when you need to manage permissions for different groups within the same storage space (see the sketch just below this list).
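
For illustration, here is a minimal Python sketch of assigning a directory-level ACL with the azure-storage-file-datalake SDK. The storage account, container, and group object ID are placeholders, and the r-x permission mirrors the read-only scenario in the question:

# Minimal sketch, assuming the azure-identity and azure-storage-file-datalake
# packages; the storage account, container, and group object ID are placeholders.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://<storage-account>.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)

# Container that holds the landing/ data.
fs = service.get_file_system_client("landing")
pdf_dir = fs.get_directory_client("PDF")

# "group:<object-id>:r-x" grants read + execute (list/traverse) to that Azure AD
# group; update_access_control_recursive also applies the entry to existing children.
pdf_dir.update_access_control_recursive(acl="group:<group1-object-id>:r-x")

A matching entry on the CSV directory for group 2 completes the split; default ACL entries (prefixed with default:) would additionally cover files created later.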

Best Practices:

  • Assign RBAC Reader role at the Storage Account/Container level to your security principals.
  • Use more restrictive ACLs at the file and folder level to achieve fine-grained control.

By combining these mechanisms, you can achieve the desired access control without mounting inner levels as top-level entities or creating additional schemas. Remember that ACLs allow you to apply finer-grained permissions, ensuring that different groups have appropriate access within the same volume. 🚀🔒

For more detailed implementation steps, refer to the official documentation on Azure Data Lake Storage Gen2 access control.

Sidhant07
New Contributor III

Yes, it is possible to get fine-grained control at the folder or file level within Unity Catalog. Volume privileges are granted on the volume as a whole, so the approach is to create managed or external volumes scoped to the directories you want to secure and grant each group access to its own volume. Managed volumes give you governed storage for working with files without having to configure access to cloud storage, while external volumes add governance to existing cloud object storage directories.

To create a managed volume, you can use the CREATE VOLUME command in SQL or the Catalog Explorer UI. For example:

 

CREATE VOLUME <catalog>.<schema>.<volume-name>;
 
To create an external volume, you can specify the location within an external location using the CREATE EXTERNAL VOLUME command in SQL or the Catalog Explorer UI. For example:

 

CREATE EXTERNAL VOLUME <catalog>.<schema>.<external-volume-name> LOCATION 's3://<external-location-bucket-path>/<directory>';

Once the volumes are created, you can grant permissions to specific groups or users using the GRANT command in SQL. For example:

 

GRANT READ VOLUME, WRITE VOLUME ON VOLUME <volume-name> TO <group-name>;

You can then access and work with the files in the volume using SQL, the %fs magic command, Databricks Utilities (dbutils), or other libraries. Paths to files in volumes follow the format /Volumes/<catalog>/<schema>/<volume>/<path>/<file-name> or dbfs:/Volumes/<catalog>/<schema>/<volume>/<path>/<file-name>.
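
As an illustrative sketch, assuming a Databricks notebook (where spark and dbutils are predefined) and placeholder catalog, schema, and volume names:

# List the files in the volume.
files = dbutils.fs.ls("/Volumes/<catalog>/<schema>/<volume>/")
for f in files:
    print(f.path, f.size)

# Read the CSV files in the volume into a DataFrame.
df = (spark.read
      .format("csv")
      .option("header", "true")
      .load("/Volumes/<catalog>/<schema>/<volume>/"))
df.show(5)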

 

https://docs.databricks.com/data-governance/unity-catalog/best-practices.html
https://docs.databricks.com/connect/unity-catalog/volumes.html

https://docs.databricks.com/discover/files.html
https://databricks.com/blog/announcing-public-preview-volumes-databricks-unity-catalog

rkalluri-apex
New Contributor III

Could you define the external location at the landing level, create two volumes (one for PDF and the other for CSV), and grant access to the respective groups 1 and 2?
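
A minimal sketch of that approach from a notebook via spark.sql (the catalog, schema, bucket path, volume names, and group names are placeholders, and an external location covering the landing path is assumed to already exist):

# One external volume per inner folder, each granted only to its own group.
spark.sql("""
    CREATE EXTERNAL VOLUME <catalog>.<schema>.landing_pdf
    LOCATION 's3://<bucket>/landing/PDF'
""")
spark.sql("""
    CREATE EXTERNAL VOLUME <catalog>.<schema>.landing_csv
    LOCATION 's3://<bucket>/landing/CSV'
""")

spark.sql("GRANT READ VOLUME ON VOLUME <catalog>.<schema>.landing_pdf TO `group1`")
spark.sql("GRANT READ VOLUME ON VOLUME <catalog>.<schema>.landing_csv TO `group2`")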
