Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Liquid Clustering file pruning breaks when filtering on a high NULL numeric column in dataSkipping

aonurdemir
Contributor

Environment

  • Cloud: AWS
  • Compute: Serverless
  • Table: a_big_table
  • Table type: Streaming Table (SDP pipeline)
  • Table size: 641 GB, 6,210 files
  • Liquid Clustering columns: [event_time, integer_userId]
  • delta.dataSkippingStatsColumns:
    event_time, integer_userId, integer_pageView, key_appName, key_appType, key_ipCountry, key_userCountry, key_userCity, key_operatingSystem, key_operatingSystemVersion, key_userLanguage
  • OPTIMIZE FULL was run (with clustering) before reproducing

Summary

Adding any filter on integer_pageView (BIGINT, listed in dataSkippingStatsColumns, ~41% NULL) causes file pruning to degrade from 303 to 2,587 files (an 8.5x increase). This happens even with IS NOT NULL.
The same filter pattern on an identical BIGINT column not in dataSkippingStatsColumns returns 303 files correctly.


Reproduction

from pyspark.sql.functions import col, lit

base = (
    spark.read.table("a_big_table")
    .filter(
        (col("event_time") >= lit("2026-04-01")) &
        (col("event_time") < lit("2026-04-04")) &
        (col("integer_userId").isNotNull()) &
        (col("key_appType").isin('ios', 'android', 'smart', 'web'))
    )
)

# All return 303 files (correct)
len(base.inputFiles())                                                    # 303
len(base.filter(col('key_operatingSystem') == 'android').inputFiles())    # 303
len(base.filter(col('key_ipCountry') == 'US').inputFiles())               # 303
len(base.filter(col('key_ipCountry').isNotNull()).inputFiles())           # 303
len(base.filter(col('integer_earnPoints') > 0).inputFiles())              # 303  ← BIGINT, NOT in stats
len(base.filter(col('integer_earnPoints').isNotNull()).inputFiles())      # 303  ← BIGINT, NOT in stats

# Both return 2,587 files (bug, or behavior I can't explain)
len(base.filter(col('integer_pageView') > 0).inputFiles())                # 2587 ← BIGINT, IN stats
len(base.filter(col('integer_pageView').isNotNull()).inputFiles())        # 2587 ← BIGINT, IN stats

Key observations

Filter                          | Files read | In dataSkippingStatsColumns? | Column type
Base (no extra filter)          | 303        | n/a                          | n/a
integer_pageView > 0            | 2,587      | YES                          | BIGINT
integer_pageView IS NOT NULL    | 2,587      | YES                          | BIGINT
integer_earnPoints > 0          | 303        | NO                           | BIGINT
integer_earnPoints IS NOT NULL  | 303        | NO                           | BIGINT
key_operatingSystem = 'android' | 303        | YES                          | STRING
key_ipCountry IS NOT NULL       | 303        | YES                          | STRING
key_ipCountry = 'US'            | 303        | YES                          | STRING

Column characteristics

  • integer_pageView: BIGINT, 41% NULL, min=1 when non-null (never 0), at schema position 66
  • integer_earnPoints: BIGINT, similar NULL pattern, NOT in dataSkippingStatsColumns; works correctly
  • key_operatingSystem: STRING, 1.3% NULL, in dataSkippingStatsColumns; works correctly
  • All other stats columns are STRING type with low NULL rates; all work correctly

Expected behavior

Adding integer_pageView > 0 (or IS NOT NULL) should return ≤ 303 files since the filter is AND-combined with the event_time range. Liquid Clustering domain pruning on event_time should remain effective regardless of any additional predicate.

Actual behavior

Any predicate referencing integer_pageView causes file pruning to degrade to 2,587 files (~all OPTIMIZE'd files), suggesting that Liquid Clustering domain pruning on event_time is bypassed when Delta evaluates stats for this column.

Hypothesis

integer_pageView is the only non-clustering numeric column with a high NULL rate (41%) in dataSkippingStatsColumns. Many files contain only NULL values for it, which leaves NULL min/max in their file stats. When Delta's data skipping evaluates any predicate against these NULL stats, it appears to corrupt the overall pruning pipeline, disabling Liquid Clustering domain pruning for the entire query.
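
One way to test this hypothesis is to look at the per-file stats directly. Below is a minimal sketch, assuming the table's storage location is accessible (it may not be for a pipeline-managed streaming table; DESCRIBE DETAIL shows the location) and that per-file stats are recorded as JSON in the add.stats field of the Delta log; the path is a placeholder, and only the JSON commits (not checkpoints) are read.

from pyspark.sql.functions import col, get_json_object

# Placeholder path; the real location comes from DESCRIBE DETAIL a_big_table
log = spark.read.json("s3://<bucket>/<path>/a_big_table/_delta_log/*.json")

file_stats = (
    log.where(col("add").isNotNull())
       .select(
           col("add.path").alias("file"),
           get_json_object(col("add.stats"), "$.numRecords").alias("num_records"),
           get_json_object(col("add.stats"), "$.nullCount.integer_pageView").alias("pageView_nulls"),
           get_json_object(col("add.stats"), "$.minValues.integer_pageView").alias("pageView_min"),
           get_json_object(col("add.stats"), "$.maxValues.integer_pageView").alias("pageView_max"),
       )
)

# Files whose stats say every row has integer_pageView = NULL (NULL min/max expected)
file_stats.where(col("pageView_nulls") == col("num_records")).show(truncate=False)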

Workaround

Remove integer_pageView from delta.dataSkippingStatsColumns. The filter then becomes a post-scan row filter and domain pruning operates normally (303 files).

2 REPLIES

amirabedhiafi
New Contributor III

Hello!

Thank you for sharing the workaround.

I ran into a similar issue and resolved it by adding a generated helper column, e.g. has_pageView BOOLEAN GENERATED ALWAYS AS (integer_pageView IS NOT NULL) or pageView_positive BOOLEAN GENERATED ALWAYS AS (integer_pageView > 0), including that helper in the stats columns, and filtering on it instead of the raw column.

If this answer resolves your question, could you please mark it as "Accept as Solution"? It will help other users quickly find the correct fix.

Senior BI/Data Engineer | Microsoft MVP Data Platform | Microsoft MVP Power BI | Power BI Super User | C# Corner MVP

Louis_Frolio
Databricks Employee

Hello @aonurdemir, I looked into your query and compiled some helpful tips:

I don't have direct access to your workspace internals, so I can't prove this definitively. But what you're seeing is consistent with how Delta's stats-based data skipping can get confused by a high-NULL numeric column.

A few mechanics worth keeping in mind:

  • Data skipping is per-column, per-file. For each stats column, Delta keeps file-level min/max plus a few additional stats (like null count). When the planner sees a predicate on a column listed in delta.dataSkippingStatsColumns, it uses those stats to decide which files might contain matching rows.
  • When a stats column is very sparse (lots of NULLs), its stats are nearly non-discriminative. If many files have integer_pageView = NULL for every row, their file-level stats look like "all NULLs." For predicates like integer_pageView > 0 or IS NOT NULL, the engine can't safely prove those files have no matches, so it keeps them (a toy version of that decision is sketched after this list).
  • The subtle part is the interaction between columns. In your base query (without integer_pageView), Delta combines event_time and the other selective stats columns and prunes down to 303 files. The moment you reference integer_pageView (a stats column with poor selectivity and many NULL-only files), the planner appears to fall back to a less effective skipping strategy and keeps 2,587 files. That's why the same pattern on integer_earnPoints (not in stats) still returns 303 files. That predicate is applied row-level, after the good file pruning has already happened.
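
To make that concrete, here is a purely illustrative sketch (not Delta's actual implementation) of the conservative per-file decision described above, for a predicate like integer_pageView > 0; the FileStats fields and function name are hypothetical:

from dataclasses import dataclass
from typing import Optional

@dataclass
class FileStats:                    # hypothetical container for one file's column stats
    min_val: Optional[int]          # None when the file has no non-NULL values
    max_val: Optional[int]          # (or when stats are missing)

def can_skip_for_gt_zero(stats: FileStats) -> bool:
    # Skip a file only when the stats *prove* no row can satisfy `col > 0`.
    # (A real engine could also consult null counts; this sketch only uses min/max.)
    if stats.max_val is not None:
        return stats.max_val <= 0   # every non-NULL value is <= 0, so nothing can match
    return False                    # NULL/missing min/max: nothing can be proven, keep the file

print(can_skip_for_gt_zero(FileStats(min_val=1, max_val=9000)))     # False: file may match, keep it
print(can_skip_for_gt_zero(FileStats(min_val=None, max_val=None)))  # False: all-NULL stats, still kept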

I agree with your core conclusion:

high-NULL numeric column + being in delta.dataSkippingStatsColumns can degrade overall file pruning, even when combined with a highly selective Liquid-clustered column like event_time.

Whether we call that a bug or a limitation of the current skipping algorithm is mostly semantics. From a user's point of view, it's surprising behavior, and it deserves attention.

Why the workaround works

Your workaround:

ALTER TABLE a_big_table
  SET TBLPROPERTIES (
    'delta.dataSkippingStatsColumns' = 'event_time,integer_userId,key_appName,...(minus integer_pageView)'
  );

forces integer_pageView back into being a row-level filter only. So:

  1. Delta uses good stats on event_time and the other selective columns to prune down to roughly 303 files.
  2. integer_pageView gets evaluated inside those files without disturbing the skipping plan.

That's exactly what you want.
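
As a quick sanity check after the property change (reusing the base DataFrame from the reproduction), the previously problematic predicate should drop back to roughly the original file count:

# Expect ~303 files again once integer_pageView is no longer a stats column
len(base.filter(col("integer_pageView") > 0).inputFiles())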

Alternative pattern (credit to @amirabedhiafi)

The generated-column approach @amirabedhiafi suggested is a clean compromise (one caveat: Delta generated columns generally have to be defined when the table is created, so in practice this may mean recreating the table or defining the helper in your pipeline rather than via ALTER TABLE):

ALTER TABLE a_big_table
ADD COLUMN has_pageView BOOLEAN
  GENERATED ALWAYS AS (integer_pageView IS NOT NULL);

-- Or, if you only care about > 0:
ALTER TABLE a_big_table
ADD COLUMN pageView_positive BOOLEAN
  GENERATED ALWAYS AS (integer_pageView > 0);

Then:

  • Add has_pageView or pageView_positive to delta.dataSkippingStatsColumns.
  • Filter on the helper column instead of the raw BIGINT:
len(base.filter(col("pageView_positive") == True).inputFiles())

The helper column is boolean and doesn't suffer from the same "all values NULL" pattern. Its file-level stats are much more selective and play nicely with Liquid clustering and the other stats columns.

What to do next

You have a very clean reproduction and clear metrics (303 versus 2,587 files), so you're well positioned to get this in front of engineering. Do these in order:

  1. Open a Databricks Support ticket (or go through your account team) and attach:
    • The exact query shape you posted.
    • Output of DESCRIBE DETAIL a_big_table;
    • Output of SHOW TBLPROPERTIES a_big_table; (especially delta.dataSkippingStatsColumns).
    • Your Liquid Clustering configuration (clustering columns and the most recent OPTIMIZE FULL run); note that ZORDER does not apply to a Liquid-clustered table.
    • A comparison of EXPLAIN plans with and without the integer_pageView predicate (a small sketch for capturing both follows this list). Look for changes in the Delta scan's file index and the resulting file counts.
  2. In the meantime, keep using one of:
    • Your current workaround (drop integer_pageView from stats), or
    • The generated-column pattern above.
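
For the EXPLAIN comparison in step 1, something like the following is usually enough to attach to the ticket (again reusing the base DataFrame from the reproduction above):

# Capture both plans plus the observed file counts for the support ticket
with_pred = base.filter(col("integer_pageView") > 0)

print("=== plan WITHOUT the integer_pageView predicate ===")
base.explain(mode="formatted")
print("files read:", len(base.inputFiles()))

print("=== plan WITH the integer_pageView predicate ===")
with_pred.explain(mode="formatted")
print("files read:", len(with_pred.inputFiles()))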

Takeaway

I don't know exactly which internal heuristic is causing the planner to downgrade pruning the moment integer_pageView appears. Your hypothesis (that high-NULL numeric stats interfere with multi-column skipping) lines up with how a stats-driven index would behave: once a low-quality stats column gets pulled into the decision, it can drag down the overall pruning effectiveness.

You've done a great job isolating this. It deserves to be escalated so engineering can either improve the behavior or document the limitation.

Regards, Louis.