Hey @rt-slowth
Adding to @Retired_mod's points, and referring to the bronze and silver code you posted in the questions-about-the-design-of-bronze-silver-and-gold-for-live post: it looks like you are performing an SCD operation in your bronze layer, which explains why the pipeline errors out on the subsequent run. To overcome this, you can try the approach Kaniz explained in her post.
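To make that concrete, one common pattern is to keep bronze append-only and move the SCD handling into silver with DLT's apply_changes. This is only a sketch under assumptions: the table names, key column (customer_id), sequence column (updated_at), and landing path are hypothetical and need to match your schema, and the code only runs inside a Delta Live Tables pipeline, not in a plain notebook:

```python
import dlt
from pyspark.sql import functions as F

# Bronze: append-only raw ingest, no SCD logic here.
# The source path and format are assumptions for illustration.
@dlt.table(comment="Raw landing-zone data, append-only")
def bronze_customers():
    return (
        spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/databricks/dbfs/mnt/landing_zone")
    )

# Silver: the SCD merge happens here instead of in bronze.
dlt.create_streaming_table("silver_customers")

dlt.apply_changes(
    target="silver_customers",
    source="bronze_customers",
    keys=["customer_id"],            # hypothetical business key
    sequence_by=F.col("updated_at"), # hypothetical ordering column
    stored_as_scd_type=2,            # or 1, depending on your requirement
)
```

Because bronze is a plain append stream, reruns no longer conflict, and DLT manages the SCD merge in silver for you.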
Also, if you switch to streaming tables, your code should use the readStream and writeStream APIs.
For writing the streaming table:
# The checkpoint location is required so the query can recover after a restart.
query = (
    transformed_stream.writeStream
    .format("delta")
    .option("checkpointLocation", "/databricks/dbfs/checkpoints")
    .trigger(processingTime="5 minutes")
    .start("/databricks/dbfs/live_tables/my_live_table")
)
For reading from a streaming source:
# Note: processingTime is a write-side trigger option, not a read option,
# so it does not belong here.
source_stream = (
    spark.readStream
    .format("delta")
    .load("/databricks/dbfs/mnt/landing_zone")
)
Leave a like if this helps! Kudos,
Palash