Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

GANAPATI_HEGDE
by New Contributor III
  • 60 Views
  • 2 replies
  • 0 kudos

Unable to configure custom compute for DLT pipeline

I am trying to configure a custom cluster for the pipeline as shown above, but DLT keeps using the default small cluster as usual. How can I resolve this?

Latest Reply
GANAPATI_HEGDE
New Contributor III
  • 0 kudos

I updated my CLI and redeployed the job, but I still don't see the cluster updates in the pipeline.

1 More Reply
sparmar
by New Contributor
  • 3555 Views
  • 1 reply
  • 0 kudos

Getting SSLError (SSLEOFError) while triggering an Azure DevOps pipeline from Databricks

While triggering an Azure DevOps pipeline from Databricks, I am getting the error below: An error occurred: HTTPSConnectionPool(host='dev.azure.com', port=443): Max retries exceeded with url: /XXX-devops/XXXDevOps/_apis/pipelines/20250224.1/runs?api-version...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

The error you’re seeing (SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1147)')) while triggering the Azure DevOps pipeline from Databricks indicates an issue with the SSL/TLS handshake, not the firewall or certificate itself. This is ...
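The reply above points at a TLS handshake / transient-connection problem rather than a firewall or certificate issue. A common mitigation is to retry the REST call with backoff; the sketch below is an assumption on my part (the endpoint, PAT, and retry counts are placeholders, not the poster's code):

```python
# Sketch: a requests session that retries on transient connection drops and
# 5xx responses when calling the Azure DevOps REST API. The URL and PAT in the
# commented call are placeholders.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def build_session(total_retries: int = 5) -> requests.Session:
    """Return a Session that retries failed HTTPS requests with backoff."""
    retry = Retry(
        total=total_retries,
        backoff_factor=1.0,  # waits 1s, 2s, 4s, ... between attempts
        status_forcelist=[429, 500, 502, 503, 504],
        allowed_methods=["GET", "POST"],
    )
    session = requests.Session()
    session.mount("https://", HTTPAdapter(max_retries=retry))
    return session

session = build_session()
# session.post("https://dev.azure.com/<org>/<project>/_apis/pipelines/<id>/runs"
#              "?api-version=7.1", json={}, auth=("", "<PAT>"))  # not executed here
```

If the failure persists after retries, the snippet at least separates a flaky connection from a genuine TLS misconfiguration.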

Amit_Dass_Chmp
by New Contributor III
  • 3039 Views
  • 1 reply
  • 0 kudos

Query on Databricks Arc: will ARC not work on 13.x or greater runtimes?

I have a question about Databricks Arc. Is this statement true? "Databricks Runtime requirements for implementing Arc: ARC requires Databricks ML Runtime 12.2 LTS. ARC will not work on 13.x or greater runtimes."

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

The statement is true: Databricks Arc requires the Databricks ML Runtime 12.2 LTS and will not work on 13.x or greater runtimes. This requirement is confirmed by multiple Databricks Community discussions and documentation, which specifically state th...
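Given that constraint, a fail-fast version check can save a confusing runtime error later. The guard below is purely illustrative (not part of Arc itself); the version strings are example Databricks Runtime identifiers:

```python
# Illustrative guard: accept only a 12.2.x ML runtime, per the requirement above.
def arc_runtime_supported(dbr_version: str) -> bool:
    """Return True only for a 12.2 runtime string such as '12.2.x-cpu-ml-scala2.12'."""
    major_minor = tuple(dbr_version.split("-")[0].split(".")[:2])
    return major_minor == ("12", "2")

print(arc_runtime_supported("12.2.x-cpu-ml-scala2.12"))  # True
print(arc_runtime_supported("13.3.x-cpu-ml-scala2.12"))  # False
```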

j_h_robinson
by New Contributor II
  • 3110 Views
  • 1 reply
  • 0 kudos

GitHub CI/CD Best Practices

Using GitHub, what are some best-practice CI/CD approaches to use specifically with the silver and gold medallion layers? We want to create the bronze, silver, and gold layers in Databricks notebooks. Also, is using notebooks in production a "best pra...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

For Databricks projects using the medallion architecture (bronze, silver, gold layers), effective CI/CD strategies on GitHub include strict version control, environment isolation, automated testing and deployments, and careful notebook management—all...

SObiero
by New Contributor
  • 3336 Views
  • 1 reply
  • 0 kudos

Passing Microsoft MFA Auth from Databricks to MSSQL Managed Instance in a Databricks FastAPI App

I have a Databricks App built using FastAPI. Users access this App after authenticating with Microsoft MFA on Databricks Azure Cloud. The App connects to an MSSQL Managed Instance (MI) that also supports Microsoft MFA. I want the authenticated user's ...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

It is not possible in Databricks to seamlessly pass each authenticated user's Azure/MS identity from a web app running on Databricks to MSSQL MI for per-user MFA authentication, in the way your development code does. This limitation stems from how id...

kanikeom
by New Contributor II
  • 3625 Views
  • 2 replies
  • 2 kudos

Asset Bundle API update issues

I was working on a proof of concept (POC) using the asset bundle. My job configuration in the .yml file worked yesterday, but it threw an error today during a demo to the team. The error was likely due to an update to the Databricks API. After some t...

Latest Reply
mark_ott
Databricks Employee
  • 2 kudos

Unexpected breaking changes to APIs—especially from cloud platforms like Databricks—can disrupt projects and demos. Proactively anticipating and rapidly adapting to such updates requires a combination of monitoring, process improvements, and technica...

1 More Reply
jeremy98
by Honored Contributor
  • 3333 Views
  • 2 replies
  • 0 kudos

if else condition task doubt

Hi community, can the if/else condition task not be used as a real if condition? It seems that if the condition evaluates to False, the entire job stops. Is this the intended behaviour?

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

In Databricks workflows, the "if-else" condition and depends_on logic do not behave exactly like standard programming if-else statements. If a task depends on another task's outcome and that outcome does not match (for example, the condition is false...
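The behaviour described in the reply can be modelled with a toy sketch. This is not Databricks code, just an illustration under the assumption (per the reply) that tasks on the untaken branch are skipped rather than failed, so the run itself does not error out:

```python
# Toy model of an If/else condition task: only one branch's tasks execute;
# the other branch's tasks are marked skipped, and the run still succeeds.
def run_branch(condition: bool, true_tasks, false_tasks):
    executed = true_tasks if condition else false_tasks
    skipped = false_tasks if condition else true_tasks
    return {
        "executed": list(executed),
        "skipped": list(skipped),
        "run_state": "Succeeded",  # skipped branches do not fail the run
    }

result = run_branch(False, ["load_delta"], ["send_alert"])
print(result)
```

Task names here (`load_delta`, `send_alert`) are made up for the example.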

1 More Reply
Carl_B
by New Contributor II
  • 3770 Views
  • 1 reply
  • 0 kudos

ImportError: cannot import name 'override' from 'typing_extensions'

Hello, I'm facing an ImportError when trying to run my OpenAI-based summarization script. The error message is: ImportError: cannot import name 'override' from 'typing_extensions' (/databricks/python/lib/python3.10/site-packages/typing_extensions.py)...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

This error is caused by a version mismatch between the OpenAI Python package and the typing_extensions library in your Databricks environment. The 'override' symbol is relatively new and only exists in typing_extensions version 4.5.0 and above; some ...
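A quick way to reason about the fix is to compare versions before importing OpenAI. The helper below is a sketch (the version strings are examples, not read from the poster's cluster); the commented notebook commands are the usual Databricks upgrade-and-restart idiom:

```python
# Sketch: 'override' was added to typing_extensions in 4.5.0, so anything
# older will raise the ImportError above.
def supports_override(version_str: str) -> bool:
    """Return True if the given typing_extensions version exports 'override'."""
    parts = tuple(int(p) for p in version_str.split(".")[:2])
    return parts >= (4, 5)

# In a Databricks notebook, the usual fix is to upgrade and restart Python:
# %pip install --upgrade typing_extensions openai
# dbutils.library.restartPython()

print(supports_override("4.4.0"))   # False: 'override' missing
print(supports_override("4.12.2"))  # True
```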

SQLBob
by New Contributor II
  • 3519 Views
  • 2 replies
  • 0 kudos

Unity Catalog Python UDF to Send Messages to MS Teams

Good Morning All - This didn't seem like such a daunting task until I tried it. Of course, it's my very first function in Unity Catalog. Attached are images of both the UDF and example usage I created to send messages via the Python requests library ...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

You're encountering a common limitation when trying to use an external HTTP request (like the Python requests library) inside a Unity Catalog UDF in Databricks. While your code is correct for a regular notebook environment, Unity Catalog UDFs (and, s...
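The usual workaround for this limitation is to make the HTTP call from notebook or job code instead of from inside the UDF. The sketch below assumes a Teams incoming webhook; the URL is a placeholder, and the function names are mine, not from the original post:

```python
# Sketch: post to a Teams incoming webhook from notebook/job code, where
# outbound network access is available (unlike inside a Unity Catalog UDF).
import json
import urllib.request

def teams_payload(text: str) -> bytes:
    """Build the minimal JSON body an incoming webhook expects."""
    return json.dumps({"text": text}).encode("utf-8")

def send_teams_message(webhook_url: str, text: str) -> int:
    """Send the message and return the HTTP status code."""
    req = urllib.request.Request(
        webhook_url,
        data=teams_payload(text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # network call, not executed here
        return resp.status

# send_teams_message("https://<tenant>.webhook.office.com/...", "Job finished")
```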

1 More Reply
jash281098
by New Contributor II
  • 3036 Views
  • 2 replies
  • 0 kudos

Issues when adding keystore spark config for pyspark to mongo atlas X.509 connectivity

Steps followed. Step 1: add an init script that copies the keystore file to the tmp location. Step 2: add Spark config in the cluster's advanced options - spark.driver.extraJavaOptions -Djavax.net.ssl.keyStore=/tmp/keystore.jks -Djavax.net.ssl.keyStorePa...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

To achieve MongoDB Atlas X.509 connectivity from Databricks using PySpark, the standard keystore configuration may fail due to certificate, configuration, or driver method issues. The recommended approach involves several key steps, including properl...
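For step 2 of the poster's setup, it helps to build the JVM option string once and apply it to both driver and executors. This is a small helper sketch of mine; the path and password are placeholders, not the poster's values:

```python
# Sketch: compose the JVM options that point the TLS keystore at the file the
# init script copied to /tmp. Apply the same string to both
# spark.driver.extraJavaOptions and spark.executor.extraJavaOptions.
def keystore_java_options(path: str, password: str) -> str:
    return (
        f"-Djavax.net.ssl.keyStore={path} "
        f"-Djavax.net.ssl.keyStorePassword={password}"
    )

opts = keystore_java_options("/tmp/keystore.jks", "changeit")
print(opts)
```

Keeping the two configs identical avoids the common failure mode where the driver can authenticate to Atlas but executor tasks cannot.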

1 More Reply
der
by Contributor II
  • 200 Views
  • 6 replies
  • 2 kudos

EXCEL_DATA_SOURCE_NOT_ENABLED Excel data source is not enabled in this cluster

I want to read an Excel xlsx file on DBR 17.3. On the cluster the library dev.mauch:spark-excel_2.13:4.0.0_0.31.2 is installed. The V1 implementation works fine: df = spark.read.format("dev.mauch.spark.excel").schema(schema).load(excel_file); display(df). V2...

Latest Reply
mmayorga
Databricks Employee
  • 2 kudos

Hi @der, first of all, thank you for your patience and for providing more information about your case. Regarding the use of .format("excel"): I replicated your cluster config in Azure. Without installing any library, I was able to run and load the xlsx fil...

5 More Replies
GJ2
by New Contributor II
  • 10520 Views
  • 12 replies
  • 2 kudos

Install the ODBC Driver 17 for SQL Server

Hi, I am not a data engineer. I want to connect to SSAS. It looks like it can be connected to through pyodbc; however, it seems I need to install "ODBC Driver 17 for SQL Server" using the following command. How do I install the driver on the cluster an...

Latest Reply
Coffee77
Contributor
  • 2 kudos

If you only need to interact with your cloud SQL database, I recommend using simple code like the example displayed below for running select queries. Writing would be very similar. Take a look here: https://learn.microsoft.com/en-us/sql/connect/spark/connecto...
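If pyodbc with the driver from this thread is the route taken, the connection string is usually the fiddly part. The helper below is a sketch of mine (server, database, and credentials are placeholders), shown alongside the commented pyodbc call it would feed:

```python
# Sketch: build an ODBC connection string for "ODBC Driver 17 for SQL Server",
# as installed on the cluster via an init script.
def sqlserver_conn_str(server: str, database: str, uid: str, pwd: str) -> str:
    return ";".join([
        "DRIVER={ODBC Driver 17 for SQL Server}",
        f"SERVER={server}",
        f"DATABASE={database}",
        f"UID={uid}",
        f"PWD={pwd}",
        "Encrypt=yes",
        "TrustServerCertificate=no",
    ])

conn_str = sqlserver_conn_str("myserver.database.windows.net", "mydb", "user", "secret")
# import pyodbc
# with pyodbc.connect(conn_str) as conn:      # requires the driver on the cluster
#     rows = conn.cursor().execute("SELECT 1").fetchall()
```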

11 More Replies
73334
by New Contributor II
  • 3804 Views
  • 3 replies
  • 1 kudos

Dedicated Access Mode Interactive Cluster with a Service Principal

Hi, I am wondering if it is possible to set up an interactive cluster in dedicated access mode with that user being a machine user. I've tried the cluster creation API, /api/2.1/clusters/create, and set the user name to the service principal na...

Latest Reply
Coffee77
Contributor
  • 1 kudos

It turns out that it is now possible to include deployment of interactive and SQL Warehouse clusters with Databricks Asset Bundles, so you can include a YAML file similar to this one to deploy that type of interactive cluster: Definition of Interactive ...
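The YAML the reply refers to is cut off above; as a rough illustration only (my assumption of the shape, based on the `resources.clusters` mapping in recent Databricks Asset Bundle versions, not the poster's actual file), such a fragment might look like:

```yaml
# Hypothetical bundle fragment: an all-purpose (interactive) cluster resource.
# Resource name, Spark version, and node type are placeholders.
resources:
  clusters:
    poc_interactive:
      cluster_name: poc-interactive
      spark_version: 15.4.x-scala2.12
      node_type_id: Standard_DS3_v2
      num_workers: 2
      data_security_mode: SINGLE_USER
```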

2 More Replies
TomDeas
by New Contributor II
  • 2020 Views
  • 2 replies
  • 2 kudos

Resolved! Resource Throttling; Large Merge Operation - Recent Engine Change?

Morning all, hope you can help as I've been stumped for weeks. Question: have there been recent changes to the Databricks query engine, or Photon (etc.), which may impact large sort operations? I have a Jobs pipeline that runs a series of notebooks which...

Labels: Data Engineering, MERGE, Performance Optimisation, Photon, Query Plan, serverless
Latest Reply
mark_ott
Databricks Employee
  • 2 kudos

There have indeed been recent changes to the Databricks query engine and Photon, especially during the June 2025 platform releases, which may influence how large sort operations and resource allocation are handled in SQL pipelines similar to yours. S...

1 More Replies
