I have created a notebook for my Delta Live Tables (DLT) pipeline, and it runs without errors there. However, if I try to run the same notebook directly on my cluster, it is not allowed and shows the error below. Does this mean DLT code can only run inside a pipeline and cannot be run interactively on a cluster? I ask because running it through a DLT pipeline costs me extra money. The notebook starts with these imports:
import dlt
from pyspark.sql.functions import (
    col, trim, split, concat, current_timestamp,
    row_number, expr, hash as pyspark_hash,
)
from pyspark.sql.window import Window
When I run it on the cluster, the error is:

The Delta Live Tables (DLT) module is not supported on this cluster. You should either create a new pipeline or use an existing pipeline to run DLT code.
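One workaround I'm considering, shown below as a rough, untested sketch (the _DltStub class is my own invention, not a Databricks API), is to guard the import so the notebook can at least be executed interactively, with the dlt decorators replaced by no-ops:

try:
    import dlt  # succeeds only when the notebook runs inside a DLT pipeline
except Exception:
    # On a regular cluster the import is rejected, so fall back to a
    # no-op stand-in that keeps the @dlt.table decorators from failing.
    class _DltStub:
        def table(self, *args, **kwargs):
            def passthrough(func):
                return func  # leave the decorated function unchanged
            return passthrough
        # expectation decorators have the same decorator shape, so reuse table
        expect = expect_or_drop = expect_or_fail = table
    dlt = _DltStub()

With a stub like that, the table functions could at least be called by hand for ad-hoc testing, but the tables themselves would still only be materialized by an actual pipeline run. So my question stands: is there a supported way to run DLT code on a plain cluster, or is the pipeline the only option?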