03-03-2026 04:11 AM
Greetings @dplatform_user, I did some digging and found a few helpful tips for you to consider.
What's happening
You're hitting UNRESOLVED_ROUTINE: Cannot resolve routine isNotNull on DBR 16.4 during a DEEP CLONE. Same clone works on 13.3. Simpler tables are fine.
This is a known 16.x bug — not a missing function, not a UC permissions issue. On 16.x, Spark sessions can have their in-memory function registry cleared and then get reused. When Delta's internal clone path tries to invoke built-ins like isNotNull (for stats collection, constraint validation, etc.), the planner can't find them. Fixes are in 17.2+ with backports planned for 16.4 via a feature flag.
Rule out the simple stuff first
Run these on the same 16.4 cluster in a fresh notebook:
SELECT isnotnull(1);
SHOW FUNCTIONS LIKE 'isNotNull';
If isnotnull(1) works and the only result from SHOW FUNCTIONS is a system.builtin entry, you're in the known bug bucket.
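As one more sanity check, DESCRIBE FUNCTION shows where the routine resolves from:
DESCRIBE FUNCTION EXTENDED isnotnull;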
Workarounds
- Run the clone from a fresh job cluster, not a long-lived all-purpose cluster that's been running other jobs or retries.
- Test on 17.2+ if available. If it works there and not on 16.4, that confirms the bug and strengthens your case for an ES ticket.
- If you need the copy now and don't need incremental refresh, use CTAS and manually add the constraints back (see the sketch below):
CREATE TABLE target AS SELECT * FROM source;
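A minimal sketch of the re-add step, assuming hypothetical column and constraint names (id, amount, and amount_positive are placeholders for your schema):
-- Placeholders: id, amount, and amount_positive stand in for your real columns/constraints
ALTER TABLE target ALTER COLUMN id SET NOT NULL;
ALTER TABLE target ADD CONSTRAINT amount_positive CHECK (amount > 0);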
Hope this helps, Louis
03-08-2026 10:53 PM
Hi @dplatform_user,
This error occurs because of how NOT NULL constraints are internally represented in Delta table metadata. When a Delta table has NOT NULL columns, the Delta protocol stores these as CHECK constraints using expressions like isNotNull(column_name) in the transaction log (under the delta.constraints.* table properties).
On DBR 13.3, the Spark SQL analyzer recognized isNotNull as a valid internal expression during the metadata copy phase of DEEP CLONE. In DBR 16.4 (which uses a newer Spark version), the SQL function resolution path has changed, and isNotNull is no longer recognized as a resolvable routine in the standard search path. That is exactly what the error message tells you: it cannot find isNotNull in system.builtin or system.session.
This specifically affects tables that have NOT NULL constraints combined with features like IDENTITY columns, generated columns, or explicit CHECK constraints, because those features elevate the writer protocol version (writer version 6 in your case) and cause the constraint metadata to be validated more strictly during clone operations.
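To make that concrete, here is a minimal sketch (hypothetical table and column names, not a confirmed repro) of a table combining these features:
-- Hypothetical sketch: identity + NOT NULL + CHECK together elevate the writer protocol version
CREATE TABLE repro_table (
  id BIGINT GENERATED BY DEFAULT AS IDENTITY,
  amount DECIMAL(10,2) NOT NULL
);
ALTER TABLE repro_table ADD CONSTRAINT amount_positive CHECK (amount > 0);
-- minWriterVersion in the output reflects the elevated protocol requirement
DESCRIBE DETAIL repro_table;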
WORKAROUND OPTIONS
1. Drop and re-add the CHECK constraints before cloning
If your source table has explicit CHECK constraints (beyond the implicit NOT NULL ones), you can temporarily drop them, perform the clone, and then re-add them on the target table:
-- List current constraints; look for properties starting with delta.constraints.*
SHOW TBLPROPERTIES source_table;
-- Drop any explicit CHECK constraints
ALTER TABLE source_table DROP CONSTRAINT constraint_name;
-- Now clone
CREATE TABLE target_table DEEP CLONE source_table;
-- Re-add constraints on the target
ALTER TABLE target_table ADD CONSTRAINT constraint_name CHECK (expression);
2. Create the target table with schema, then INSERT
Instead of DEEP CLONE, you can recreate the table structure and copy the data:
-- Get the schema from the source
DESCRIBE TABLE source_table;
-- Create the target table with the same schema (including NOT NULL)
CREATE TABLE target_table (
  col1 BIGINT NOT NULL,
  col2 STRING NOT NULL
  -- ... match your source schema
);
-- Copy the data
INSERT INTO target_table SELECT * FROM source_table;
-- Re-add any CHECK constraints
ALTER TABLE target_table ADD CONSTRAINT chk_name CHECK (expression);
Note: this approach does not carry over IDENTITY column auto-increment state. If you need IDENTITY columns, you will need to set the seed value appropriately after creation.
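One hedged way to handle that on Databricks, assuming a GENERATED BY DEFAULT identity column named id (a hypothetical name), is to declare the column explicitly and re-sync its counter after the copy:
-- Hypothetical column "id"; GENERATED BY DEFAULT lets INSERT ... SELECT supply the existing values
CREATE TABLE target_table (
  id BIGINT GENERATED BY DEFAULT AS IDENTITY,
  col2 STRING NOT NULL
);
INSERT INTO target_table SELECT * FROM source_table;
-- Re-align the identity high-water mark with the copied data
ALTER TABLE target_table ALTER COLUMN id SYNC IDENTITY;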
3. Use CTAS without NOT NULL, then alter
CREATE TABLE target_table AS SELECT * FROM source_table;
-- Then add NOT NULL constraints
ALTER TABLE target_table ALTER COLUMN col1 SET NOT NULL;
ALTER TABLE target_table ALTER COLUMN col2 SET NOT NULL;
-- etc.
4. Run the clone on a DBR 13.3 LTS cluster
As a short-term option, you can run the DEEP CLONE command on a cluster running DBR 13.3 LTS where this works correctly. This lets you complete the clone while waiting for a fix in a newer runtime.
CHECKING YOUR TABLE'S CONSTRAINTS
To see exactly which constraints your table has stored, run:
SHOW TBLPROPERTIES source_table;
Look for entries like:
delta.constraints.isnotnull_col1 = isNotNull(col1)
delta.constraints.some_check = (col1 > 0)
These are the expressions that the clone operation tries to re-validate.
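If you already know a constraint's name, Spark SQL also lets you fetch that single property directly:
SHOW TBLPROPERTIES source_table ('delta.constraints.some_check');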
REPORTING THE ISSUE
Since DEEP CLONE should handle internal constraint expressions across runtime versions, this is worth reporting to Databricks Support so the engineering team can track it. When filing the support ticket, include:
- The exact DBR version (16.4.x-scala2.12)
- The source table's DESCRIBE DETAIL and SHOW TBLPROPERTIES output
- The full error stack trace
This will help the team pinpoint whether the fix needs to go into the Delta clone logic or the Spark SQL function registry.
* This reply was drafted with an agent system I built, which researches and drafts responses from the wide set of documentation I have available and from previous memory. I personally review each draft for obvious issues and to monitor the system's reliability, and I update it when I detect any drift, but there is still a small chance that something is inaccurate, especially if you are experimenting with brand-new features.
If this answer resolves your question, could you mark it as "Accept as Solution"? That helps other users quickly find the correct fix.