How to do bucketing in Databricks?
11-08-2022 10:06 PM
We are migrating a job from on-prem to Databricks. We are trying to optimize the job, but we couldn't use bucketing: by default Databricks stores all tables as Delta tables, and it throws an error saying that bucketing is not supported for Delta. Is there any way to do this?
- Labels: Bucketing, Delta table, Optimisation, Optimization
11-08-2022 11:33 PM
Hi @Arun Balaji, you can go through https://www.databricks.com/session/bucketing-in-spark-sql-2-3 and https://www.databricks.com/session_na20/bucketing-2-0-improve-spark-sql-performance-by-removing-shuf... and let us know if this helps.
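For a quick reference, classic Spark SQL bucketing (on a non-Delta format such as Parquet) looks roughly like the sketch below; the table and column names are only placeholders.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Toy data standing in for the real job's input.
df = spark.range(1_000_000).withColumnRenamed("id", "customer_id")

# Classic Spark SQL bucketing: bucketBy only works together with saveAsTable.
(df.write
   .format("parquet")
   .bucketBy(16, "customer_id")     # hash rows into 16 buckets on the join key
   .sortBy("customer_id")           # keep each bucket sorted for sort-merge joins
   .mode("overwrite")
   .saveAsTable("sales_bucketed"))  # illustrative table name
```

Joins and aggregations on the bucketed key between tables bucketed the same way can then avoid a shuffle, which is what the talks linked above cover.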
11-09-2022 12:40 AM
Hi @Debayan Mukherjee, we are following similar syntax for creating the bucketed table, but we are getting the following error:
Operation not allowed: `Bucketing` is not supported for Delta tables
Databricks by default treats the created tables as Delta.
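A minimal repro with illustrative names (not the actual job) looks like this; because the CREATE TABLE has no USING clause, Databricks treats the table as Delta and rejects the bucketing clause.

```python
# Illustrative repro: without USING, Databricks defaults the table to Delta,
# and the CLUSTERED BY ... INTO ... BUCKETS clause triggers the error above.
spark.sql("""
    CREATE TABLE sales_bucketed (
        customer_id BIGINT,
        amount      DOUBLE
    )
    CLUSTERED BY (customer_id) INTO 16 BUCKETS
""")
```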
11-09-2022 04:28 AM
Hi @Arun Balaji,
bucketing is not supported for Delta tables, as you have noticed.
For optimization and best practices with Delta tables, check this:
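In the meantime, a rough sketch of the usual Delta-side substitute for bucketing is to compact the table and co-locate rows on the key you would have bucketed on (table and column names are placeholders):

```python
# Z-ordering co-locates rows with similar key values in the same files, which
# helps data skipping on joins/filters over that key. Names are illustrative.
spark.sql("OPTIMIZE sales ZORDER BY (customer_id)")
```

It is not a drop-in replacement for bucketing (joins still shuffle), but it usually recovers much of the file-pruning benefit.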
11-09-2022 04:33 AM
Is it possible to create a table without making it a Delta table?
11-09-2022 04:36 AM
Good question 😉 I was going to mention this.
You can still use external tables. Those tables are stored outside the main (root) metastore bucket/container:
https://docs.databricks.com/data-governance/unity-catalog/create-tables.html
Then you can use Parquet tables (see the sketch below).
That said, the way forward seems to be managed tables with Unity Catalog, as they are gaining performance improvements over time.
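For example, a minimal sketch of an external, bucketed Parquet table; this assumes the legacy hive_metastore and a placeholder storage path (Unity Catalog has its own restrictions on non-Delta and bucketed tables), so adjust names and locations for your setup.

```python
# Sketch only: external Parquet table with bucketing, registered in the legacy
# hive_metastore. The path, schema, and column names are placeholders.
spark.sql("""
    CREATE TABLE hive_metastore.default.sales_bucketed (
        customer_id BIGINT,
        amount      DOUBLE
    )
    USING PARQUET
    CLUSTERED BY (customer_id) INTO 16 BUCKETS
    LOCATION 'abfss://<container>@<account>.dfs.core.windows.net/tables/sales_bucketed'
""")
```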
11-09-2022 04:48 AM
You can also check this:
https://community.databricks.com/s/question/0D53f00001m1u4qCAA/bucketing-on-delta-tables

