Use Case:
- Kafka-style real-time streaming of network telemetry logs (see the sketch below)
- In a real deployment, approx. 40 TB of data can be streamed in real time per day
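For context, the production source would be a Kafka readStream; a minimal sketch of what that would look like (the broker address kafka-broker:9092 and the topic network-telemetry are hypothetical placeholders, not real endpoints):

# Sketch of the production Kafka source; broker and topic names are placeholders
kafkaDF = (spark.readStream
           .format("kafka")
           .option("kafka.bootstrap.servers", "kafka-broker:9092")
           .option("subscribe", "network-telemetry")
           .option("startingOffsets", "latest")
           .load())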
Architecture:

Issue encountered: when I try to simulate Kafka-like streaming in Databricks itself (this is the Free Edition, so I am using serverless compute), I get the following error:
[INFINITE_STREAMING_TRIGGER_NOT_SUPPORTED] Trigger type ProcessingTime is not supported for this cluster type. Use a different trigger type e.g. AvailableNow, Once. SQLSTATE: 0A000
Code:
from pyspark.sql.functions import col, expr
# Generate 1000 rows per second
streamingDF = (spark.readStream
               .format("rate")
               .option("rowsPerSecond", 1000)
               .load())
# Add simulated IoT telemetry fields
telemetryDF = (streamingDF
               .withColumn("tower_id", col("value") % 50_000)
               .withColumn("metric", expr("rand() * 100"))
               .withColumn("region", expr("CASE WHEN tower_id % 5 = 0 THEN 'North' ELSE 'South' END")))
# Write to Delta (simulated Bronze table)
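# Note: no trigger is set below, so the default ProcessingTime micro-batch
# trigger applies; that default is what serverless compute rejects above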
query = (telemetryDF.writeStream
         .format("delta")
         .outputMode("append")
         .option("checkpointLocation", "/tmp/telemetry_checkpoints")
         .option("path", "/tmp/telemetry_bronze")
         .start())
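For reference, switching to the AvailableNow trigger that the error suggests does run, but it processes whatever data is available and then stops, so it behaves like a batch job rather than a continuous stream. A minimal sketch of that variant:

query = (telemetryDF.writeStream
         .format("delta")
         .outputMode("append")
         .option("checkpointLocation", "/tmp/telemetry_checkpoints")
         .option("path", "/tmp/telemetry_bronze")
         .trigger(availableNow=True)  # process available data once, then stop
         .start())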
Any suggestions for a workaround that keeps the stream running continuously?