Hi @ShankarM,
This is a known limitation with Unity Catalog column masking policies: write operations such as MERGE INTO and INSERT are not supported on tables that have a column mask policy applied. When your ingestion job tries to load data into a table with a masked column, Databricks blocks the write at the engine level, which is the error you're seeing.
There are two common root causes to check:
1. Compute access mode mismatch
Row filters and column masks require your compute to run in Shared access mode (not Single User / Assigned mode). If you're using a Single User cluster, switch it to Shared access mode (Access Mode -> Shared) or run the job on a SQL Warehouse instead.
2. Unsupported write operation
If your incremental load uses MERGE INTO, this is the likely culprit. MERGE is not supported on tables with column mask policies. Consider these alternatives:
- Option A: Use streaming append. If new rows are only appended (no upserts), use Delta streaming:

```python
(df.writeStream
   .format("delta")
   .option("checkpointLocation", "/path/to/checkpoint")
   .outputMode("append")
   .toTable("catalog.schema.your_history_table"))
```
- Option B: Stage, then merge without the mask. Load data into a staging table (without a mask policy), run your MERGE there, then copy the final results into the masked table using INSERT executed by a service principal that has the UNMASK privilege, avoiding MERGE on the masked table directly.
- Option C: Use INSERT OVERWRITE with partitions. For partition-based incremental loads, INSERT OVERWRITE on specific partitions is supported even on masked tables.
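Option B can be sketched in SQL along these lines; the table and column names (`stage.orders_staging`, `updates`, `prod.orders_masked`, `order_id`) are placeholders for illustration, not your actual schema:

```sql
-- 1. Upsert into an unmasked staging table; MERGE is allowed here
--    because no column mask policy is attached to it
MERGE INTO stage.orders_staging AS t
USING updates AS s
  ON t.order_id = s.order_id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;

-- 2. Copy the reconciled rows into the masked table with a plain INSERT,
--    run as the service principal described in Option B
INSERT INTO prod.orders_masked
SELECT * FROM stage.orders_staging;
```

The key point is that the upsert logic runs entirely against the unmasked staging table; only a straightforward INSERT ever touches the masked table.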
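A minimal sketch of Option C, assuming the target table is partitioned by a `load_date` column (the names and the partition value are illustrative):

```sql
-- Replace only the partition touched by this incremental load,
-- leaving all other partitions untouched
INSERT OVERWRITE TABLE catalog.schema.your_history_table
PARTITION (load_date = '2024-01-15')
SELECT order_id, customer_id, amount
FROM catalog.schema.staging_increment
WHERE load_date = '2024-01-15';
```

This works best when each incremental batch maps cleanly onto one or a few partitions, so the overwrite is idempotent per partition.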
The column mask policy is enforced at read time (users without privilege see XXXX), but certain write paths are restricted to prevent policy bypass. The recommended long-term pattern is to keep masking at the read layer and use unmasked staging tables for ingestion pipelines.
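For context on why the mask is a read-layer concern: in Unity Catalog a column mask is just a SQL UDF bound to a column, evaluated at query time. A minimal sketch (function, group, and column names are illustrative):

```sql
-- Masking function: members of the privileged group see the raw value,
-- everyone else sees the masked placeholder
CREATE OR REPLACE FUNCTION catalog.schema.mask_ssn(ssn STRING)
RETURN CASE
  WHEN is_account_group_member('pii_readers') THEN ssn
  ELSE 'XXXX'
END;

-- Attach the mask to the column on the target table
ALTER TABLE catalog.schema.your_history_table
  ALTER COLUMN ssn SET MASK catalog.schema.mask_ssn;
```

Because the function runs on read, keeping ingestion pipelines pointed at unmasked staging tables costs you nothing in protection: consumers still only ever see the masked output.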