Hello Brahma,

I wanted to check whether the error I am encountering is really due to using serverless compute for my DLT pipeline. I am working with the Apply Changes API in a pipeline and keep running into two issues: 1. the serverless compute error below, and 2. a quota exhaustion error (when using dedicated Job compute).

The reason I am using serverless rather than dedicated Job compute is that I was constantly getting quota-exhausted error messages for my region (I'm on the 14-day Premium trial), so I switched to serverless. But now I get the message below when I start my pipeline. My AI assistant says this is because serverless compute does not support the Apply Changes API. I would appreciate your input on this.

Error Msg -

pyspark.errors.exceptions.base.PySparkAttributeError: Traceback (most recent call last):
File "/Delta Live Tables/star_pipeline", cell 7, line 16, in scd_customers
.apply_changes(
^^^^^^^^^^^^^

pyspark.errors.exceptions.base.PySparkAttributeError: [ATTRIBUTE_NOT_SUPPORTED] Attribute `apply_changes` is not supported.

My Gold layer code -


@dlt.table(
    name="gold_customers",
    comment="SCD Type 2 implementation for customers using apply_changes",
    table_properties={"quality": "gold"}
)
def scd_customers():
    return (
        dlt.read("customers_silver")  
        .apply_changes(
            target="gold_customers",
            keys=["customer_id"],
            sequence_by="updated_at",  
            stored_as_scd_type=2
        )
    )
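
For comparison, my understanding from the DLT docs is that apply_changes is meant to be called as a module-level dlt function against a separately created streaming target, rather than chained onto a DataFrame. The following is only a sketch using my own table names (customers_silver, gold_customers); I have not been able to verify it end to end yet because of the quota issue:

import dlt

# Create the streaming target table that apply_changes will maintain
dlt.create_streaming_table(
    name="gold_customers",
    comment="SCD Type 2 target for customers",
    table_properties={"quality": "gold"}
)

# apply_changes is a module-level dlt call, not a DataFrame method;
# it reads the CDC source by name and keeps SCD Type 2 history in the target
dlt.apply_changes(
    target="gold_customers",
    source="customers_silver",
    keys=["customer_id"],
    sequence_by="updated_at",
    stored_as_scd_type=2
)

If that is the right pattern, then the PySparkAttributeError may be coming from chaining .apply_changes onto the DataFrame rather than from serverless itself, but please correct me if I'm misunderstanding.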