Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Zerobus Kafka-compatible API

developer3535
New Contributor II

Hi Team,

I went through a recording where it was mentioned that a Kafka‑compatible API is planned for a Beta release in Q1. Do we have any rough timeline on when this feature might be available?

We already have Kafka producer topics, and we would like to connect them directly to Zerobus instead of building a Kafka consumer and then streaming the data to Zerobus.


2 ACCEPTED SOLUTIONS

stbjelcevic
Databricks Employee

Hi @developer3535,

Q1 2026 is as granular as we can share at this point.

Based on what you said, the Beta should enable your existing Kafka producers to write directly to Delta with minimal changes.


SteveOstrowski
Databricks Employee

Hi @developer3535,

I see @stbjelcevic already confirmed the Q1 2026 timeline for the Kafka-compatible API Beta. I wanted to add some context on what you can do in the meantime and where to look for updates.

CURRENT ZEROBUS INGEST INTERFACES

While waiting for the Kafka-compatible API, Zerobus Ingest currently supports two interfaces:

1. gRPC (via native SDKs): Best for high-throughput, persistent-connection workloads. SDKs are available in Python, Rust, Java, Go, and TypeScript. The Python SDK uses PyO3 bindings to the Rust core, delivering up to 40x higher throughput than pure Python.

2. REST API (currently in Beta): Stateless HTTP POST to the /zerobus/v1/tables/<table-name>/insert endpoint. This is a good fit for massive fleets of low-frequency devices or languages without a native SDK.

Both support JSON and Protocol Buffers (recommended for production).
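As a rough sketch of the REST path: the endpoint shape below comes from the description above, but the workspace host, the bearer-token auth header, and the {"rows": [...]} payload schema are my assumptions, so verify them against the Zerobus REST reference before relying on this:

```python
import json
import urllib.request


def build_insert_url(workspace_host: str, table_name: str) -> str:
    """Build the Zerobus REST insert URL for a fully qualified table name."""
    return f"https://{workspace_host}/zerobus/v1/tables/{table_name}/insert"


def insert_rows(workspace_host: str, table_name: str, token: str, rows: list):
    """POST a JSON batch of rows. Auth header and payload shape are assumptions."""
    req = urllib.request.Request(
        build_insert_url(workspace_host, table_name),
        data=json.dumps({"rows": rows}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    # Caller handles the response / errors; a stateless POST per batch is the
    # reason this interface suits large fleets of low-frequency devices.
    return urllib.request.urlopen(req)
```

Because each call is a stateless POST, there is no connection to manage, which is exactly the trade-off versus the persistent gRPC interface.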

BRIDGING YOUR KAFKA PRODUCERS TODAY

If you need to start flowing Kafka data into Delta tables before the Kafka-compatible API lands, one interim approach is to use Structured Streaming with the Kafka source to read from your existing Kafka topics and write to Delta:

# Read the existing Kafka topics as a stream and write continuously to Delta;
# the checkpoint location gives the stream exactly-once, restartable progress.
spark.readStream \
    .format("kafka") \
    .option("kafka.bootstrap.servers", "<your-brokers>") \
    .option("subscribe", "<your-topics>") \
    .load() \
    .selectExpr("CAST(value AS STRING) AS raw_value") \
    .writeStream \
    .format("delta") \
    .option("checkpointLocation", "/path/to/checkpoint") \
    .toTable("catalog.schema.target_table")

This gives you continuous ingestion from Kafka into Delta using Spark, which you can later simplify once the Kafka-compatible API becomes available and your producers can write directly to Zerobus.

WHAT THE KAFKA-COMPATIBLE API SHOULD ENABLE

Based on what has been shared publicly, the Beta should allow your existing Kafka producers to write directly to Delta tables with minimal code changes: you would point your Kafka producer configuration at a Zerobus endpoint instead of a Kafka broker, and data would flow directly into Delta without an intermediate consumer.
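To make the "repoint the producer config" idea concrete, here is a hypothetical sketch. The endpoint hostname, port, and auth settings below are all placeholders I made up for illustration; none of them are published yet, so treat this purely as the shape of the change, not as real configuration:

```python
# Hypothetical illustration: the only intended change is the connection config.
# Every endpoint name and auth setting here is a placeholder, not a real value.
kafka_config = {
    "bootstrap.servers": "broker1.internal:9092",  # current Kafka cluster
}

zerobus_config = {
    # Placeholder endpoint; the real hostname/port would come from the Beta docs.
    "bootstrap.servers": "<workspace-zerobus-endpoint>:9092",
    "security.protocol": "SASL_SSL",   # assumed: token-based auth over TLS
    "sasl.mechanism": "PLAIN",
    "sasl.username": "token",
    "sasl.password": "<databricks-token>",
}

def repoint(producer_config: dict, target: dict) -> dict:
    """Return a copy of the producer config with connection settings swapped.

    Everything else about the producer (topics, serializers, batching)
    would stay as it is today.
    """
    merged = dict(producer_config)
    merged.update(target)
    return merged
```

The point of the sketch is that your topic names, serializers, and batching settings would be untouched; only the connection block changes.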

WHERE TO FIND DOCUMENTATION

The Zerobus Ingest connector documentation is available here:
https://learn.microsoft.com/en-us/azure/databricks/ingestion/zerobus-ingest

And the limitations/region availability page:
https://learn.microsoft.com/en-us/azure/databricks/ingestion/zerobus-limits

The AWS-equivalent docs are also available at:
https://docs.databricks.com/aws/en/ingestion/zerobus-ingest

Keep an eye on these docs pages and the Databricks release notes for announcements about the Kafka-compatible API as it becomes available.

ENROLLMENT NOTE

Zerobus is currently in gated Public Preview. If your workspace is not yet enrolled, you will need to reach out to your Databricks account team to request enrollment before you can use any Zerobus features, including the Kafka-compatible API once it launches.

* This reply was drafted with an agent system I built, which researches and drafts responses from the documentation I have available and from previous memory. I personally review each draft for obvious issues, monitor the system's reliability, and correct it when I detect drift, but there is still a small chance something is inaccurate, especially if you are experimenting with brand-new features.

If this answer resolves your question, could you mark it as "Accept as Solution"? That helps other users quickly find the correct fix.

