Hi there, I read data from Azure Event Hub and, after transforming the data, I write the dataframe back to Event Hub (I use this connector for that):

# read data
df = (spark.readStream
    .format("eventhubs")
    .options(**ehConf)
    ...
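For context, here is a minimal sketch of the write-back side, assuming the same azure-eventhubs-spark connector is attached to the cluster; ehWriteConf, the connection string, and the checkpoint path are placeholders, not my actual configuration:

# write back to Event Hub (sketch; connection string and paths are placeholders)
from pyspark.sql.functions import to_json, struct

# the connector expects the payload in a string column named "body"
out_df = df.select(to_json(struct("*")).alias("body"))

ehWriteConf = {
    # the connector requires the connection string to be encrypted via EventHubsUtils
    "eventhubs.connectionString":
        spark.sparkContext._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt(
            "<target-event-hub-connection-string>"  # placeholder
        )
}

query = (out_df.writeStream
    .format("eventhubs")
    .options(**ehWriteConf)
    .option("checkpointLocation", "/tmp/eventhubs-checkpoint")  # placeholder path
    .start())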
Debayan, thanks for your recommendation. I read that article, but it does not answer my question. I'm just learning how to work with Databricks; perhaps these costs are simply normal for Structured Streaming processing?