Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

DLT Pipeline Failed to create new KafkaAdminClient SQLSTATE: XXKST:

Hanfo2back
New Contributor

I encountered the error No LoginModule found for org.apache.kafka.common.security.scram.ScramLoginModule while consuming data from Kafka in a Databricks pipeline. The pipeline had been running smoothly before, but the error appeared on September 11. On quick investigation, I noticed that the DLT (Delta Live Tables) compute cluster is missing the SCRAM login module. Installing the Maven library org.apache.spark:spark-sql-kafka-0-10_2.12:3.5.2 on an all-purpose cluster fixed the issue there, but the DLT compute cluster does not allow Maven library installation. How can I resolve this?

1 ACCEPTED SOLUTION


Advika
Databricks Employee

Hello @Hanfo2back!

Could you please try changing the SASL login string to use kafkashaded.org.apache.kafka.common.security.scram.ScramLoginModule instead of org.apache.kafka.common.security.scram.ScramLoginModule?
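For anyone hitting the same error, here is a minimal sketch of what the corrected consumer options could look like. The broker address, credentials, and topic name are placeholders (not from the original post); the key change is the kafkashaded. prefix on the login module class:

```python
# Sketch only: broker, credentials, and topic are hypothetical placeholders.
# The fix is to reference the shaded ScramLoginModule class bundled with the
# Databricks runtime instead of the plain org.apache.kafka class.
JAAS_CONFIG = (
    "kafkashaded.org.apache.kafka.common.security.scram.ScramLoginModule "
    'required username="<user>" password="<password>";'
)

kafka_options = {
    "kafka.bootstrap.servers": "<broker>:9092",   # placeholder
    "kafka.security.protocol": "SASL_SSL",
    "kafka.sasl.mechanism": "SCRAM-SHA-512",
    "kafka.sasl.jaas.config": JAAS_CONFIG,
    "subscribe": "<topic>",                       # placeholder
}

# Inside the DLT source function, these options would be passed to the
# Kafka reader, e.g.:
# df = spark.readStream.format("kafka").options(**kafka_options).load()
```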


5 REPLIES

ManojkMohan
Valued Contributor III

Root Cause:

The required Kafka SCRAM login module JAR is missing from the managed DLT cluster environment. This typically happens after Databricks platform updates or cluster environment changes, which can remove previously available libraries from the runtime.

Solution / Workarounds:

Move the Kafka-consuming or -producing step out of DLT into a downstream Databricks notebook or Workflow task on an all-purpose cluster (where required libraries can be installed).

Example: First use DLT for ETL/transformation, write results to a Delta table, then use a separate notebook to read the table and interact with Kafka securely. https://community.databricks.com/t5/data-engineering/delta-live-tables-stream-output-to-kafka/td-p/7...
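The split-pipeline pattern described above might look roughly like the sketch below, under assumptions: the DLT pipeline has already written a Delta table (called "silver_events" here for illustration), and all Kafka settings are placeholders. The Kafka step runs as a separate notebook or Workflow task on an all-purpose cluster where the required libraries can be installed:

```python
# Hypothetical split-pipeline sketch: DLT handles ETL only and writes a Delta
# table; this separate task reads that table as a stream and forwards it to
# Kafka. All names and connection details below are illustrative placeholders.
KAFKA_OPTIONS = {
    "kafka.bootstrap.servers": "<broker>:9092",   # placeholder
    "kafka.security.protocol": "SASL_SSL",
    "kafka.sasl.mechanism": "SCRAM-SHA-512",
    # Shaded class name, per the accepted solution in this thread.
    "kafka.sasl.jaas.config": (
        "kafkashaded.org.apache.kafka.common.security.scram.ScramLoginModule "
        'required username="<user>" password="<password>";'
    ),
    "topic": "<output-topic>",                    # placeholder
}

def publish_to_kafka(spark):
    """Read the DLT-produced Delta table as a stream and forward it to Kafka."""
    return (
        spark.readStream
        .table("silver_events")                   # table written by the DLT pipeline
        .selectExpr("CAST(id AS STRING) AS key",  # assumes an 'id' column exists
                    "to_json(struct(*)) AS value")
        .writeStream
        .format("kafka")
        .options(**KAFKA_OPTIONS)
        .option("checkpointLocation", "/tmp/checkpoints/kafka_out")  # placeholder path
        .start()
    )
```

Using Delta tables as the handoff point keeps the DLT pipeline self-contained while isolating the Kafka dependency to a cluster where libraries can be managed freely.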

Thanks @ManojkMohan! Will this issue be fixed in the future? Currently, all our downstream streaming tables use DLT. Separating the Kafka consumer into a Workflow with a non-DLT compute cluster raises concerns about the complexity of organizing the data flow, especially since it is already operating in a production environment.

ManojkMohan
Valued Contributor III

For production teams already relying heavily on streaming tables within DLT, introducing a split pipeline with downstream all-purpose clusters can indeed increase orchestration complexity and operational overhead (see https://www.databricks.com/blog/2025-dlt-update-intelligent-fully-governed-data-pipelines). @Sujitha @Advika, requesting your view on ongoing Databricks platform releases for news of native DLT Kafka improvements.

If strict separation causes issues, you may want to assess whether an interim solution (e.g., using Delta tables as integration boundaries with Workflow automation) balances resilience and manageability for your production setup.


Many thanks @Advika