Community Platform Discussions
Connect with fellow community members to discuss general topics related to the Databricks platform, industry trends, and best practices. Share experiences, ask questions, and foster collaboration within the community.

Ingestion time clustering

mderela
New Contributor II

Hello, in reference to https://www.databricks.com/blog/2022/11/18/introducing-ingestion-time-clustering-dbr-112.html

I have a silly question about how to use it. Let's assume that I have a few TB of non-partitioned data. If I want to query the data that has been ingested starting from yesterday, what should I do?

SELECT * FROM mytable WHERE WHAT_SHOULD_BE_HERE >= current_timestamp() - INTERVAL 1 DAY

In other words, what do I need to filter on to make sure that only a small part of the files will be scanned instead of the whole dataset? It is clear to me how to achieve that using partitions, but how does it work with ingestion time clustering?
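For context, a sketch of how such a query is usually written: with ingestion time clustering there is no special pseudo-column to filter on; you filter on an ordinary timestamp column in your own data (the column name `ingest_ts` below is hypothetical). Because files are written clustered by ingestion time, Delta's per-file min/max statistics on that column let the engine skip most files.

```sql
-- Hypothetical example: ingest_ts is an ordinary timestamp column that the
-- ingestion job populates at write time (e.g. with current_timestamp()).
-- Since ingestion time clustering keeps files ordered by write time, the
-- per-file min/max statistics on ingest_ts are tight, and this predicate
-- prunes most files instead of scanning the whole table.
SELECT *
FROM mytable
WHERE ingest_ts >= current_timestamp() - INTERVAL 1 DAY;
```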

 

 

3 REPLIES

mderela
New Contributor II

thank you @Retired_mod 

Could you please shed a little more light on the configuration? For instance, I am performing ingestion using DLT. Should I add extra parameters (like pipelines.autoOptimize.zOrderCols), or should it be done some other way?
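A sketch of what this could look like in a DLT SQL pipeline (not verified against the DLT docs; the table and column names are hypothetical, and the property name is taken from the question above). Note that ingestion time clustering itself does not require any property: it is the default write behavior for unpartitioned Delta tables on DBR 11.2+, so the property here only adds Z-ordering on top.

```sql
-- Hypothetical DLT table with an explicit ingestion timestamp column and
-- an auto-optimize Z-order property set via TBLPROPERTIES.
CREATE OR REFRESH STREAMING LIVE TABLE my_silver
TBLPROPERTIES (
  'pipelines.autoOptimize.zOrderCols' = 'ingest_ts'
)
AS SELECT
  *,
  current_timestamp() AS ingest_ts  -- recorded at ingestion time
FROM STREAM(LIVE.my_bronze);
```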

Hi @Retired_mod, referring to this: "Remember that this will only work if you have set up ingestion time clustering for your table".

Can you please elaborate on how we can set up "ingestion time clustering" for existing non-partitioned tables?

@Retired_mod Thanks for sharing this, it is really helpful, but my question remains the same: how can we know that Ingestion Time Clustering is enabled? As per the docs, it is enabled by default on DBR 11.2 and above.

- Are Ingestion Time Clustering and Liquid Clustering similar?

- What about existing non-partitioned tables? Can I enable liquid clustering on those if I upgrade my interactive clusters to use Databricks Runtime 13.2 or above?
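For reference, a sketch of how liquid clustering can be enabled on an existing unpartitioned Delta table (assuming a recent enough DBR; the table and column names are hypothetical):

```sql
-- CLUSTER BY is the liquid clustering syntax; here we cluster on a
-- hypothetical timestamp column.
ALTER TABLE my_table CLUSTER BY (event_ts);

-- Existing files are not rewritten by the ALTER alone; OPTIMIZE
-- incrementally clusters the data according to the new keys.
OPTIMIZE my_table;
```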

My scenario: I have some non-partitioned Delta tables with around 200 to 300 GB of data in each. The ETL requirement is to get the max timestamp, so I run select max(timestamp) from table every 5 minutes on each of those tables in separate jobs, and then use the resulting max_timestamp further in their ETL pipelines.

The max_timestamp query is taking more than 2.5 minutes on those tables. Upon checking the Spark UI and the DAG, I found that this query is reading all the files behind the table and not pruning any, which is why it takes that long just to fetch max(timestamp).

What should I do to get that max(timestamp) in less time (under 10 seconds) without partitioning the table, given that Databricks recommends partitioning only tables larger than 1 TB?
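One possible approach, as a sketch under stated assumptions: if the table is clustered or Z-ordered by the timestamp column and statistics are collected for it, the per-file min/max statistics become tight, and the engine can prune almost all files (or, on recent DBR versions, answer min/max from Delta file statistics without a full scan). The table and column names below are placeholders for your own.

```sql
-- Cluster the table by the timestamp column so its per-file min/max
-- statistics become narrow and useful for pruning.
OPTIMIZE my_table ZORDER BY (`timestamp`);

-- The aggregate can then be answered from far fewer files.
SELECT max(`timestamp`) AS max_timestamp FROM my_table;
```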

Thanks 
