2 weeks ago
Hi there,
Wondering if anyone can help me. I had a job set up to stream from one change-data-feed-enabled Delta table to another Delta table, and it had been executing successfully. I then added column masks to the source table from which I am streaming, and now I get the following error:
[UNSUPPORTED_FEATURE.TABLE_OPERATION] The feature is not supported: Table [source_table_name] does not support either micro-batch or continuous scan. Please check the current catalog and namespace to make sure the qualified table name is expected, and also check the catalog implementation which is configured by "spark.sql.catalog". SQLSTATE: 0A000
change_stream = (
    spark.readStream.format("delta")
    .option("readChangeFeed", "true")
    .table(source_table)
)

(
    change_stream.writeStream
    .option("checkpointLocation", checkpoints_location)
    .outputMode("append")
    .option("mergeSchema", False)
    .trigger(availableNow=True)
    .toTable(target_table)
)
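For reference, the error seems consistent with streaming (including change-feed) reads not being supported against a table once column masks are attached. One possible workaround, assuming you are able to change the source table's governance setup and would rather restrict access than mask values, is to drop the mask so the streaming scan works again. This is a hedged sketch; `email` is an assumed column name, substitute whichever column actually carries the mask:

```sql
-- Sketch only: remove the column mask that blocks the streaming scan.
-- `source_table` and the column `email` are placeholders from this thread.
ALTER TABLE source_table ALTER COLUMN email DROP MASK;
```

After dropping the mask you would re-run the stream from the existing checkpoint; access control on the unmasked table then has to be handled separately (e.g. via grants).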
Accepted Solutions
a week ago
Hello mate,

Hope you're doing great. In that case you can configure a service principal, assign it the appropriate roles, and use it as the run-as owner. Then re-run the stream; that way the PII will not be visible to other teams or users unless they are members. In simple words: instead of masking the columns, you restrict other teams' access directly, which avoids the stream failures caused by masked objects. If this doesn't help, please ignore.

Thanks for asking.
Saran
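The access-restriction approach described above can be sketched in Databricks SQL. This is only an illustrative sketch, assuming Unity Catalog grants; `etl_spn` is a hypothetical service principal and `analysts` a hypothetical group, not names from this thread:

```sql
-- Sketch only: restrict direct reads instead of masking columns.
-- `etl_spn` (service principal) and `analysts` (group) are hypothetical names.
REVOKE SELECT ON TABLE source_table FROM `analysts`;
GRANT SELECT ON TABLE source_table TO `etl_spn`;
```

With the job's run-as owner set to the service principal, the stream reads the unmasked table while other principals simply have no SELECT access to it.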

