Hi there,
Wondering if anyone can help me. I have a job set up to stream from one change data feed (CDF) enabled Delta table to another Delta table, and it has been executing successfully. I then added column masks to the source table I am streaming from, and now I get the following error:
[UNSUPPORTED_FEATURE.TABLE_OPERATION] The feature is not supported: Table [source_table_name] does not support either micro-batch or continuous scan. Please check the current catalog and namespace to make sure the qualified table name is expected, and also check the catalog implementation which is configured by "spark.sql.catalog". SQLSTATE: 0A000
# Read the change feed from the CDF-enabled source table
change_stream = spark.readStream.format("delta").option("readChangeFeed", "true")

# Write the changes to the target table, processing available data once per run
(change_stream.table(source_table)
    .writeStream
    .option("checkpointLocation", checkpoints_location)
    .outputMode("append")
    .option("mergeSchema", False)
    .trigger(availableNow=True)
    .toTable(target_table))
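For context, the column mask was applied roughly like this (the function name, catalog/schema, group name, and column are placeholders for what I actually used, not my exact DDL):

# Rough sketch of how the mask was added; names below are placeholders
spark.sql("""
    CREATE OR REPLACE FUNCTION my_catalog.my_schema.mask_email(email STRING)
    RETURN CASE
        WHEN is_account_group_member('admins') THEN email
        ELSE '****'
    END
""")

spark.sql(f"ALTER TABLE {source_table} ALTER COLUMN email SET MASK my_catalog.my_schema.mask_email")

The stream ran fine before this change, so it looks like adding the mask is what stops the table from supporting micro-batch or continuous scans. Has anyone hit this before, or is there a supported way to stream the change feed from a table with column masks?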