Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.
DEEP CLONE fails with [UNRESOLVED_ROUTINE] Cannot resolve routine isNotNull on DBR 16.4

dplatform_user
New Contributor II
Hi Databricks Community,
 
I'm encountering an issue when attempting to DEEP CLONE a Delta table on DBR 16.4 that works fine on DBR 13.3.
Error Message:
 
[UNRESOLVED_ROUTINE] Cannot resolve routine `isNotNull` on search path [`system`.`builtin`, `system`.`session`, `mycatalog`.`default`].
 
 
When it occurs:
• Running CREATE TABLE ... DEEP CLONE ... on a Delta table with writer version 6
• Table has NOT NULL constraints on all columns and an IDENTITY column
• Table features include: checkConstraints, generatedColumns, identityColumns
What works:
• Same DEEP CLONE works on DBR 13.3
• Other tables without these features clone successfully
Questions:
- Is isNotNull a valid SQL routine in DBR 16.4? Spark seems to resolve it during clone metadata validation.
- Why does this work on DBR 13.3 but fail on DBR 16.4? Is there a change in how Delta validates constraint metadata?
- What's the recommended workaround? Should I create a UC function isNotNull, or drop/recreate constraints using standard SQL?
Context:
• DBR 16.4.x-scala2.12, Delta Lake 3.3.1, Unity Catalog
• Error occurs during DEEP CLONE metadata validation/copy phase
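For reference, the failing statement has this shape (catalog and table names here are placeholders, not my real names):

CREATE TABLE mycatalog.default.target_table
DEEP CLONE mycatalog.default.source_table;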
Any insights would be appreciated!

 

1 ACCEPTED SOLUTION

Louis_Frolio
Databricks Employee

Greetings @dplatform_user , I did some digging and found a few helpful hints/tips for you to consider. 

What's happening

You're hitting UNRESOLVED_ROUTINE: Cannot resolve routine isNotNull on DBR 16.4 during a DEEP CLONE. Same clone works on 13.3. Simpler tables are fine.

This is a known 16.x bug — not a missing function, not a UC permissions issue. On 16.x, Spark sessions can have their in-memory function registry cleared and then get reused. When Delta's internal clone path tries to invoke built-ins like isNotNull (for stats collection, constraint validation, etc.), the planner can't find them. Fixes are in 17.2+ with backports planned for 16.4 via a feature flag.

Rule out the simple stuff first

Run these on the same 16.4 cluster in a fresh notebook:

SELECT isnotnull(1);
SHOW FUNCTIONS LIKE 'isNotNull';

If isnotnull(1) works and the only result from SHOW FUNCTIONS is a system.builtin entry, you're in the known bug bucket.

Workarounds

  1. Run the clone from a fresh job cluster, not a long-lived all-purpose cluster that's been running other jobs or retries.

  2. Test on 17.2+ if available. If it works there and not on 16.4, that confirms the bug and strengthens your case for an ES ticket.

  3. If you need the copy now and don't need incremental refresh, use CTAS and manually add back constraints:

CREATE TABLE target AS SELECT * FROM source;
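If the copy needs the same guarantees as the source, a sketch of re-adding constraints afterwards (column and constraint names are placeholders; adjust to your schema):

ALTER TABLE target ALTER COLUMN id SET NOT NULL;
ALTER TABLE target ADD CONSTRAINT id_positive CHECK (id > 0);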

 

Hope this helps, Louis


2 REPLIES


SteveOstrowski
Databricks Employee

Hi @dplatform_user,

This error occurs because of how NOT NULL constraints are internally represented in Delta table metadata. When a Delta table has NOT NULL columns, the Delta protocol stores these as CHECK constraints using expressions like isNotNull(column_name) in the transaction log (under the delta.constraints.* table properties).

On DBR 13.3, the Spark SQL analyzer recognized isNotNull as a valid internal expression during the metadata copy phase of DEEP CLONE. In DBR 16.4 (which uses a newer Spark version), the SQL function resolution path has changed, and isNotNull is no longer recognized as a resolvable routine in the standard search path. This is exactly what the error message tells you: it cannot find isNotNull in system.builtin or system.session.

This specifically affects tables that have NOT NULL constraints combined with features like IDENTITY columns, generated columns, or explicit CHECK constraints, because those features elevate the writer protocol version (writer version 6 in your case) and cause the constraint metadata to be validated more strictly during clone operations.

WORKAROUND OPTIONS

1. Drop and re-add the CHECK constraints before cloning

If your source table has explicit CHECK constraints (beyond the implicit NOT NULL ones), you can temporarily drop them, perform the clone, and then re-add them on the target table:

-- List current constraints
SHOW TBLPROPERTIES source_table;

-- Look for properties starting with delta.constraints.*
-- Drop any explicit CHECK constraints
ALTER TABLE source_table DROP CONSTRAINT constraint_name;

-- Now clone
CREATE TABLE target_table DEEP CLONE source_table;

-- Re-add constraints on the target
ALTER TABLE target_table ADD CONSTRAINT constraint_name CHECK (expression);

2. Create the target table with schema, then INSERT

Instead of DEEP CLONE, you can recreate the table structure and copy the data:

-- Get the schema from the source
DESCRIBE TABLE source_table;

-- Create the target table with the same schema (including NOT NULL)
CREATE TABLE target_table (
  col1 BIGINT NOT NULL,
  col2 STRING NOT NULL,
  -- ... match your source schema
);

-- Copy the data
INSERT INTO target_table SELECT * FROM source_table;

-- Re-add any CHECK constraints
ALTER TABLE target_table ADD CONSTRAINT chk_name CHECK (expression);

Note: this approach does not carry over IDENTITY column auto-increment state. If you need IDENTITY columns, you will need to set the seed value appropriately after creation.
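If the identity column was declared GENERATED BY DEFAULT AS IDENTITY, one option for realigning its state is Databricks' SYNC IDENTITY clause, which recomputes the identity high-water mark from the data already in the table (column name below is a placeholder):

ALTER TABLE target_table ALTER COLUMN col1 SYNC IDENTITY;

Note this clause applies only to GENERATED BY DEFAULT identity columns; for GENERATED ALWAYS columns you would need to recreate the column definition instead.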

3. Use CTAS without NOT NULL, then alter

CREATE TABLE target_table AS SELECT * FROM source_table;

-- Then add NOT NULL constraints
ALTER TABLE target_table ALTER COLUMN col1 SET NOT NULL;
ALTER TABLE target_table ALTER COLUMN col2 SET NOT NULL;
-- etc.

4. Run the clone on a DBR 13.3 LTS cluster

As a short-term option, you can run the DEEP CLONE command on a cluster running DBR 13.3 LTS where this works correctly. This lets you complete the clone while waiting for a fix in a newer runtime.

CHECKING YOUR TABLE'S CONSTRAINTS

To see exactly which constraints your table has stored, run:

SHOW TBLPROPERTIES source_table;

Look for entries like:

delta.constraints.isnotnull_col1 = isNotNull(col1)
delta.constraints.some_check     = (col1 > 0)

These are the expressions that the clone operation tries to re-validate.

REPORTING THE ISSUE

Since DEEP CLONE should handle internal constraint expressions across runtime versions, this is worth reporting to Databricks Support so the engineering team can track it. When filing the support ticket, include:
- The exact DBR version (16.4.x-scala2.12)
- The source table's DESCRIBE DETAIL and SHOW TBLPROPERTIES output
- The full error stack trace

This will help the team pinpoint whether the fix needs to go into the Delta clone logic or the Spark SQL function registry.

* This reply was drafted with an agent system I built, which researches and drafts responses from the documentation I have available and from previous memory. I personally review each draft for obvious issues and to monitor system reliability, and I update it when I detect drift, but there is still a small chance something is inaccurate, especially if you are experimenting with brand-new features.

If this answer resolves your question, could you mark it as "Accept as Solution"? That helps other users quickly find the correct fix.