I'm very disappointed with this framework. The documentation is inadequate, and the limitations are numerous. I want my materialized views to refresh incrementally, but DLT insists on performing a full recompute. Why is it doing this?

Here is the planning log from a test execution:

```json
{
  "planning_information": {
    "technique_information": [
      {
        "incrementalization_issues": [
          {
            "issue_type": "INCREMENTAL_PLAN_REJECTED_BY_COST_MODEL",
            "prevent_incrementalization": true,
            "cost_model_rejection_subtype": "CHANGESET_SIZE_THRESHOLD_EXCEEDED"
          }
        ]
      },
      {
        "maintenance_type": "MAINTENANCE_TYPE_COMPLETE_RECOMPUTE",
        "is_chosen": true,
        "is_applicable": true,
        "cost": 2163.0
      },
      {
        "maintenance_type": "MAINTENANCE_TYPE_ROW_BASED",
        "is_chosen": false,
        "is_applicable": true,
        "cost": 616.0
      }
    ],
    "source_table_information": [
      {
        "table_name": "`mul_dev_tests`.`dlt_managed`.`teste`",
        "table_id": "810f74f2-fc09-45b6-93f5-2a544ac93002",
        "full_size": 2950.0,
        "change_size": 710.0,
        "is_size_after_pruning": true,
        "is_row_id_enabled": true,
        "is_cdf_enabled": true,
        "is_deletion_vector_enabled": true,
        "is_change_from_legacy_cdf": false
      }
    ],
    "target_table_information": {
      "table_name": "`mul_dev_tests`.`default`.`teste_novo_99`",
      "table_id": "8bc37e86-6cf7-4e92-a69a-85f5da7e1099",
      "full_size": 1320.0,
      "is_row_id_enabled": true,
      "is_cdf_enabled": true,
      "is_deletion_vector_enabled": true
    }
  }
}
```

The log says the incremental plan was rejected by the cost model (subtype `CHANGESET_SIZE_THRESHOLD_EXCEEDED`), yet by the log's own estimates the row-based plan (cost 616) is roughly 3.5 times cheaper than the full recompute (cost 2163) that was chosen. I'm using a small dataset for this example, but with large tables the issue becomes significant. Furthermore, neither this error nor the rejection subtype is documented anywhere. Yes, I'm using a serverless setup, which is indeed fast, but it is also a complete black box.
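To make the comparison concrete, here is a small Python sketch of how I'm reading the log. The dict literal is the same planning event, trimmed to the relevant fields; the ratio and changeset percentage come straight from those numbers:

```python
import json  # the real event arrives as a JSON string; here the dict is inlined

# Trimmed copy of the planning log from the pipeline event above.
log = {
    "planning_information": {
        "technique_information": [
            {"incrementalization_issues": [
                {"issue_type": "INCREMENTAL_PLAN_REJECTED_BY_COST_MODEL",
                 "prevent_incrementalization": True,
                 "cost_model_rejection_subtype": "CHANGESET_SIZE_THRESHOLD_EXCEEDED"}]},
            {"maintenance_type": "MAINTENANCE_TYPE_COMPLETE_RECOMPUTE",
             "is_chosen": True, "is_applicable": True, "cost": 2163.0},
            {"maintenance_type": "MAINTENANCE_TYPE_ROW_BASED",
             "is_chosen": False, "is_applicable": True, "cost": 616.0},
        ],
        "source_table_information": [
            {"full_size": 2950.0, "change_size": 710.0}],
    }
}

info = log["planning_information"]
plans = [t for t in info["technique_information"] if "maintenance_type" in t]
chosen = next(t for t in plans if t["is_chosen"])
rejected = next(t for t in plans if not t["is_chosen"])

print(f"chosen:   {chosen['maintenance_type']} (cost {chosen['cost']})")
print(f"rejected: {rejected['maintenance_type']} (cost {rejected['cost']})")
print(f"cost ratio (recompute / row-based): {chosen['cost'] / rejected['cost']:.2f}")

# The rejection subtype points at changeset size, not the cost comparison:
src = info["source_table_information"][0]
print(f"changed rows vs. full size: {src['change_size'] / src['full_size']:.0%}")
```

So the planner itself estimates the rejected row-based plan at about 3.5 times less cost, and the changeset is about 24% of the table, which is presumably what tripped the size threshold. Where that threshold is set, and whether it can be tuned, is exactly what I can't find in the docs.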