Hi @Lakshay
Without structured streaming, I am able to create the temp view and use it in the same notebook.
But when I try the same thing inside structured streaming, it throws a "view not found" error.
Please refer to the sample code snippet below:
write_to_final_table = (
    spark.readStream
    .format('delta')
    .option('ignoreChanges', True)
    .table(f"{delta_table_name}")
    .writeStream
    .queryName(f"{query_name}")
    .format("org.elasticsearch.spark.sql")
    .trigger(processingTime='1 minutes')
    .outputMode("append")
    .foreachBatch(process_micro_batch)
    .option("checkpointLocation", checkpointdirectory_path)
    .option("mergeSchema", "true")
    .option("failOnDataLoss", "false")
    .start()
)
def process_micro_batch(micro_batch_df, batch_id):
    micro_batch_df.createOrReplaceTempView("temp_view")
    df = spark.sql("select * from temp_view")
    return df
Here, while reading data from temp_view inside the batch function, I get a "temp_view not found" error.
Thanks