Using SQL for Structured Streaming
04-17-2024 12:23 PM
Hi!
I'm new to Databricks. I'm trying to create a data pipeline with Structured Streaming. A minimal example pipeline would look like this: read from an upstream Kafka source, do some data transformation, then write to a downstream Kafka sink. I want to do as much of this in SQL as possible, but I'm encountering some issues.
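For context, here's roughly the shape of the pipeline in the DataFrame API (a sketch only; the broker address, topic names, and checkpoint path are placeholders, and `spark` is the session a Databricks notebook provides):

```python
from pyspark.sql.functions import col, upper

# Read from the upstream Kafka topic (placeholder broker/topic).
src = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events_in")
    .load()
)

# Example transformation: uppercase the message payload.
# The Kafka sink expects `key`/`value` as string or binary columns.
transformed = src.select(
    col("key"),
    upper(col("value").cast("string")).alias("value"),
)

# Write to the downstream Kafka topic.
query = (
    transformed.writeStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("topic", "events_out")
    .option("checkpointLocation", "/tmp/checkpoints/kafka_pipeline")
    .start()
)
```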
1. My understanding is that creating sources and sinks via raw SQL is not supported in Spark, is that true?
2. I found a new `read_kafka` table-valued function in Databricks SQL, but I can't seem to use it in Community Edition. It gives me ```could not resolve `read_kafka` to a table-valued function.```. Is creating sources and sinks using raw SQL only available in the enterprise version of Databricks SQL (i.e., it's not supported in Spark SQL or Community Edition)? There's a sketch of what I'm attempting after this list.
3. Is the WATERMARK clause in SQL only supported in Databricks SQL, not in Spark SQL?
4. In general, is there a difference in support between Databricks SQL in Community Edition and Databricks SQL in the enterprise edition?
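For questions 2 and 3, this is roughly what I'm attempting. As far as I can tell, both `read_kafka` and the WATERMARK clause are Databricks-specific rather than open-source Spark SQL, so I'm assuming this needs a recent Databricks Runtime; the broker and topic are again placeholders, and I may have the WATERMARK syntax slightly off:

```python
# Windowed count over a Kafka stream, via SQL (sketch; assumes a
# Databricks Runtime where read_kafka and WATERMARK are available).
windowed = spark.sql("""
    SELECT window(timestamp, '1 minute') AS window, count(*) AS events
    FROM STREAM read_kafka(
        bootstrapServers => 'broker:9092',
        subscribe => 'events_in'
    ) WATERMARK timestamp DELAY OF INTERVAL 30 SECONDS
    GROUP BY window(timestamp, '1 minute')
""")

# Placeholder sink just to exercise the query from a notebook.
(windowed.writeStream
    .format("memory")
    .queryName("windowed_counts")
    .outputMode("append")
    .start())
```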
Thank you in advance!
04-18-2024 11:29 AM
OK, I figured out why I was getting an error when using `read_kafka`: my default cluster was set up with the wrong Databricks Runtime.
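In case anyone else hits this: `read_kafka` needs a fairly recent runtime (I believe Databricks Runtime 13.1 or later), and you can check what a cluster is running from a notebook with something like:

```python
# Spark version backing the runtime.
print(spark.version)

# The Databricks Runtime version string, exposed as a cluster usage tag
# (assumption: this config key is set on your cluster).
print(spark.conf.get("spark.databricks.clusterUsageTags.sparkVersion"))
```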

