Our input data resides in a Kafka topic, and we use the Kafka schema registry with Avro schemas. While I can retrieve the schema from the registry, I am having trouble creating a Spark DataFrame that correctly deserializes the data for streaming reads. Could you provide guidance on efficiently using Avro schemas from the schema registry to create Spark DataFrames for streaming reads? The aim is to make this functionality as modular as possible: I want to be able to parameterize the function/notebook, pass in the topic and the corresponding schema registry subject, and have it create a stream that reads the topic and deserializes it correctly into a Delta Live Table.
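
For context, here is a rough sketch of the kind of parameterized setup I have in mind. The broker address, registry URL, topic, subject name, and table name are placeholders, and I am assuming the messages use the Confluent wire format (a 5-byte header of magic byte plus schema id in front of the Avro payload):

```python
# Sketch only -- requires the confluent-kafka package (%pip install confluent-kafka)
import dlt
from confluent_kafka.schema_registry import SchemaRegistryClient
from pyspark.sql.functions import expr
from pyspark.sql.avro.functions import from_avro

# Placeholder parameters; in practice these would come from pipeline
# configuration (spark.conf.get) or notebook widgets.
KAFKA_BOOTSTRAP_SERVERS = "broker:9092"        # placeholder broker address
SCHEMA_REGISTRY_URL = "https://registry:8081"  # placeholder registry URL
TOPIC = "my_topic"                             # placeholder topic name
SUBJECT = f"{TOPIC}-value"                     # assumes TopicNameStrategy subject naming

# Fetch the latest value schema for the subject from the schema registry.
sr_client = SchemaRegistryClient({"url": SCHEMA_REGISTRY_URL})
value_schema_str = sr_client.get_latest_version(SUBJECT).schema.schema_str


def create_kafka_avro_table(table_name: str, topic: str, avro_schema: str):
    """Register a DLT streaming table that reads `topic` and decodes Avro values."""

    @dlt.table(name=table_name, comment=f"Decoded Avro records from Kafka topic {topic}")
    def _table():
        raw = (
            spark.readStream.format("kafka")
            .option("kafka.bootstrap.servers", KAFKA_BOOTSTRAP_SERVERS)
            .option("subscribe", topic)
            .option("startingOffsets", "earliest")
            .load()
        )
        # Confluent-serialized messages carry a 5-byte header
        # (magic byte + 4-byte schema id) before the Avro payload,
        # so strip it before handing the bytes to from_avro.
        return raw.select(
            from_avro(
                expr("substring(value, 6, length(value) - 5)"),
                avro_schema,
            ).alias("record")
        ).select("record.*")

    return _table


create_kafka_avro_table("decoded_events", TOPIC, value_schema_str)
```

In particular, I am unsure whether fetching the latest schema string once at pipeline start and passing it to `from_avro` is the right approach, since the schema id embedded in each message is ignored and schema evolution on the topic would not be picked up.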