Hello all.
We are a new team implementing DLT and have set up about 20 tables in a single pipeline, loading from S3 with Unity Catalog (UC) as the target. I'm noticing that if any of the tables fails to load, the entire pipeline fails, even though there are no dependencies between the tables. In our case, a new table was added to the DLT notebook but its source S3 directory is empty, which caused the pipeline to fail with the error "org.apache.spark.sql.catalyst.ExtendedAnalysisException: Unable to process statement for Table 'table_name'."
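For context, each table definition follows roughly the pattern sketched below; the table name, file format, and S3 path are placeholders rather than our actual values:

```python
import dlt

# Minimal sketch of one table definition: each table reads its own S3
# prefix with Auto Loader and has no dependency on any other table.
@dlt.table(name="table_name")
def table_name():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")  # placeholder format
        .load("s3://our-bucket/table_name/")  # this prefix is empty for the new table
    )
```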
Is there a way to change this behavior in the pipeline configuration so that one table failing doesn't impact the rest of the pipeline?