Using SQL for Structured Streaming

chloeh
New Contributor II

Hi!

I'm new to Databricks. I'm trying to create a data pipeline with Structured Streaming. A minimal example pipeline would look like this: read from an upstream Kafka source, do some data transformation, then write to a downstream Kafka sink. I want to do as much of this in SQL as possible, but I'm encountering some issues.
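For concreteness, here's a rough sketch of the kind of pipeline I mean, written with the PySpark DataFrame API (the broker address, topic names, and checkpoint path are placeholders):

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, upper

spark = SparkSession.builder.appName("kafka-pipeline").getOrCreate()

# Read from the upstream Kafka source (placeholder broker/topic).
src = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events_in")
    .load()
)

# Kafka delivers key/value as binary; cast to string, then transform.
transformed = (
    src.selectExpr("CAST(key AS STRING) AS key",
                   "CAST(value AS STRING) AS value")
    .withColumn("value", upper(col("value")))
)

# Write to the downstream Kafka sink; Kafka sinks require a checkpoint location.
query = (
    transformed.writeStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("topic", "events_out")
    .option("checkpointLocation", "/tmp/checkpoints/events")
    .start()
)
```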

1. My understanding is that creating sources and sinks via raw SQL is not supported in Spark, is that true?

2. I found a new `read_kafka` table-valued function in Databricks SQL, but I can't seem to use it in the community edition. It gives me the error "could not resolve `read_kafka` to a table-valued function". Is creating sources and sinks with raw SQL only available in the enterprise version of Databricks SQL, i.e., not supported in Spark SQL or the community edition? (A sketch of the usage I'm attempting is below, after this list.)

3. Is the WATERMARK clause only supported in Databricks SQL, and not in open-source Spark SQL?

4. In general, is there a difference in feature support between Databricks SQL in the community edition and Databricks SQL in the enterprise edition?
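For reference, here is a sketch of the `read_kafka` usage I'm attempting (placeholder broker/topic again). As far as I can tell, both `read_kafka` and the `WATERMARK` clause are Databricks SQL features rather than open-source Spark SQL, so I'd expect this to run only on a Databricks cluster, and the exact syntax may vary by runtime version:

```python
# Hypothetical topic/broker; read_kafka and WATERMARK are Databricks SQL
# features (not open-source Spark SQL), so this is a Databricks-only sketch.
windowed = spark.sql("""
    SELECT window(timestamp, '10 minutes') AS win, count(*) AS n
    FROM STREAM read_kafka(
        bootstrapServers => 'broker:9092',
        subscribe => 'events_in'
    )
    WATERMARK timestamp DELAY OF INTERVAL 30 SECONDS
    GROUP BY window(timestamp, '10 minutes')
""")

# windowed is a streaming DataFrame; it still needs writeStream (or
# display() in a Databricks notebook) to actually start the query.
```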

Thank you in advance!

1 REPLY

chloeh
New Contributor II

OK, I figured out why I was getting an error when using `read_kafka`: my default cluster was set up with the wrong Databricks Runtime version.
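For anyone else hitting this, a quick way to check what a notebook's cluster is actually running (the Databricks-specific config key below is an assumption based on what I see on my clusters):

```python
# Spark version bundled with the runtime.
print(spark.version)

# Databricks-specific config key (an assumption; present on the clusters
# I've used) that reports the runtime tag, e.g. '13.3.x-scala2.12'.
print(spark.conf.get("spark.databricks.clusterUsageTags.sparkVersion"))
```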
