04-15-2025 02:35 PM
Hello Brahma,
I wanted to check whether the error I am encountering is really caused by using serverless compute for my DLT pipeline. I am trying to use the Apply Changes API in a pipeline and invariably run into one of two issues: 1. the serverless compute error below, or 2. a quota-exhaustion error (when using dedicated job compute).
The reason I am using serverless rather than dedicated job compute is that I was constantly getting error messages about exhausted quota for my region (I am on the 14-day premium trial), so I switched to serverless. However, when I start my pipeline I get the message below. My AI assistant says it happens because serverless compute does not support the Apply Changes API. Is that correct? I would appreciate your input on this.
Error message:
pyspark.errors.exceptions.base.PySparkAttributeError: Traceback (most recent call last):
File "/Delta Live Tables/star_pipeline", cell 7, line 16, in scd_customers
.apply_changes(
^^^^^^^^^^^^^
pyspark.errors.exceptions.base.PySparkAttributeError: [ATTRIBUTE_NOT_SUPPORTED] Attribute `apply_changes` is not supported.
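For reference, here is a minimal sketch of the module-level dlt.apply_changes() pattern from the DLT documentation that I am trying to follow (the table, key, and column names below are placeholders, not my actual schema), in case I am simply invoking it the wrong way:

```python
import dlt

# Target streaming table for the gold layer (placeholder name)
dlt.create_streaming_table("scd_customers")

# Standard Apply Changes call: merge CDC rows from a source table
# into the target. Source table and column names are assumptions.
dlt.apply_changes(
    target="scd_customers",
    source="customers_cdc",
    keys=["customer_id"],
    sequence_by="updated_at",
    stored_as_scd_type=2,
)
```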
My Gold layer code: