Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

jar
by Contributor
  • 470 Views
  • 1 reply
  • 0 kudos

Excluding job update from DAB .yml deployment

Hi. We have a range of scheduled jobs and _one_ continuous job, all defined in .yml and deployed with DAB. The continuous job is paused by default, and we use a scheduled notebook job to pause and unpause it so that it only runs during business ho...

Latest Reply
Yogesh_Verma_
Contributor II
  • 0 kudos

You’re running into this because DAB treats the YAML definition as the source of truth — so every time you redeploy, it will reset the job state (including the paused/running status) back to what’s defined in the file. Unfortunately, there isn’t curr...
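A scheduled notebook that flips the continuous job's pause state, as the original post describes, might build its Jobs API request like this. This is a minimal sketch: the request-body shape follows the `jobs/update` endpoint (`new_settings.continuous.pause_status`), and the job ID, host, and auth handling are placeholders you'd supply yourself.

```python
import json

def build_pause_payload(job_id: int, paused: bool) -> dict:
    """Build a jobs/update request body that flips a continuous job's
    pause_status without touching the rest of its settings."""
    return {
        "job_id": job_id,  # placeholder: your continuous job's ID
        "new_settings": {
            "continuous": {"pause_status": "PAUSED" if paused else "UNPAUSED"}
        },
    }

# The notebook would POST this to <host>/api/2.1/jobs/update with a bearer token.
payload = build_pause_payload(123, paused=False)
print(json.dumps(payload))
```

Because `jobs/update` patches only the fields you send, this avoids redefining the job, though (as the reply notes) a subsequent bundle redeploy will still reset the state to whatever the YAML declares.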

karthik_p
by Databricks Partner
  • 16498 Views
  • 5 replies
  • 1 kudos

Does Delta Live Tables support identity columns?

We are able to test identity columns using SQL/Python, but when we try the same using DLT, we don't see values under the identity column. It is always empty for the column we created: "id BIGINT GENERATED ALWAYS AS IDENTITY"

Latest Reply
Gowrish
New Contributor II
  • 1 kudos

Hi, I see from the following Databricks documentation - https://docs.databricks.com/aws/en/dlt/limitations - it states the following, which kind of gives the impression that you can define an identity column on a streaming table: Identity columns might be recom...

4 More Replies
mtreigelman
by New Contributor III
  • 704 Views
  • 1 reply
  • 3 kudos

First Lakeflow (DLT) Pipeline Best Practice Question

Hi, I am writing my first streaming pipeline and trying to ensure it is set up to work as a "Lakeflow" pipeline. It is connecting an external Oracle database with some external Azure Blob storage data (all managed in the same Unity Catalog). The pipe...

Latest Reply
BS_THE_ANALYST
Databricks Partner
  • 3 kudos

@mtreigelman thanks for providing the update. If you wouldn't mind, could you explain why you think the first way didn't work and why the second way did? Then you can mark your response as a solution to the question. I found this article to be useful ...

ck7007
by Contributor II
  • 720 Views
  • 1 reply
  • 2 kudos

Cost

Reduced Monthly Databricks Bill from $47K to $12.7K.
The Problem: We were scanning 2.3TB for queries needing only 8GB of data.
Three Quick Wins:
1. Multi-dimensional Partitioning (30% savings)
# Before
df.write.partitionBy("date").parquet(path)
# After - parti...

Latest Reply
BS_THE_ANALYST
Databricks Partner
  • 2 kudos

@ck7007 thanks so much for sharing! That's such a saving, by the way. Congrats. Out of curiosity, did you consider using Liquid Clustering, which was meant to replace partitioning and Z-Order: https://docs.databricks.com/aws/en/delta/clustering I found...

AbhishekNakka15
by Databricks Partner
  • 698 Views
  • 1 reply
  • 1 kudos

Resolved! Unable to log in to partner account

When I try to log in to the partner account with my office email, it says, "The service is currently unavailable. Please try again later." It says "You are not authorized to access https://partner-academy.databricks.com. Please select a platform you ca...

Latest Reply
Advika
Community Manager
  • 1 kudos

Hello @AbhishekNakka15! Please raise a ticket with the Databricks Support Team, and include your email address so they can review your account and provide further assistance.

viralpatel
by New Contributor II
  • 1217 Views
  • 2 replies
  • 1 kudos

Lakebridge Synapse Conversion to DBX and Custom transpiler

I have 2 questions about the Lakebridge solution. Synapse with dedicated pool conversion: We were conducting a PoC for Synapse to DBX migration using Lakebridge. What we have observed is that the conversions are not correct. I was anticipating all tables wi...

Latest Reply
yourssanjeev
Databricks Partner
  • 1 kudos

We are also checking on this use case, but got it confirmed from Databricks that it does not work for this use case yet; not sure whether it is on their roadmap.

1 More Replies
vishalv4476
by New Contributor III
  • 609 Views
  • 1 reply
  • 0 kudos

Databricks job runs failures Py4JJavaError: An error occurred while calling o404.sql. : java.util.No

Hi, we had a successfully running pipeline, but it started failing on 20th August; no changes were published. Can you please guide me to resolve this issue? I've tried increasing 'delta.deletedFileRetentionDuration' = 'interval 365 days', but it didn't hel...

Latest Reply
SP_6721
Honored Contributor II
  • 0 kudos

Hi @vishalv4476 ,The error is likely due to a corrupted Delta transaction log or files deleted manually/outside of Delta. Check the table history and verify that no user or automated process removed data files. If issues are found, restore the table ...

anazen13
by New Contributor III
  • 1784 Views
  • 9 replies
  • 2 kudos

Databricks API to create a serverless job

I am trying to follow your documentation on how to create a serverless job via API: https://docs.databricks.com/api/workspace/jobs/create#environments-spec-environment_version So I see that sending the JSON request resulted in my seeing a serverless clus...

Latest Reply
siennafaleiro
New Contributor II
  • 2 kudos

It looks like you're hitting one of the current limitations of Databricks serverless jobs. Even though the API supports passing an environments object, only certain fields are honored right now. In particular: the environment_version parameter will de...
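For reference, a `jobs/create` request body for a serverless task might be shaped like this. This is a sketch, not a definitive spec: the field names follow the linked jobs/create docs (a serverless task carries an `environment_key` instead of a cluster, resolved against the top-level `environments` list), while the notebook path, version string, and dependency are placeholders, and per the reply above only some of these fields may be honored today.

```python
import json

def build_serverless_job(name: str, notebook_path: str) -> dict:
    """Assemble a jobs/create body for a notebook task that runs serverless."""
    return {
        "name": name,
        "tasks": [
            {
                "task_key": "main",
                "notebook_task": {"notebook_path": notebook_path},
                # No new_cluster / existing_cluster_id: the task runs serverless
                # and uses the environment referenced by environment_key.
                "environment_key": "default",
            }
        ],
        "environments": [
            {
                "environment_key": "default",
                "spec": {
                    "environment_version": "2",          # placeholder version
                    "dependencies": ["pandas==2.2.2"],   # placeholder package
                },
            }
        ],
    }

print(json.dumps(build_serverless_job("demo", "/Workspace/Users/me/nb")))
```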

8 More Replies
zero234
by New Contributor III
  • 7301 Views
  • 3 replies
  • 1 kudos

I have created a materialized view using a Delta Live Tables pipeline and it's not appending data

I have created a materialized view using a Delta Live Tables pipeline. For some reason it is overwriting data every day; I want it to append data to the table instead of doing a full refresh. Suppose I had 8 million records in the table, and if I run the...

Latest Reply
UMAREDDY06
New Contributor II
  • 1 kudos

[expect_table_not_view.no_alternative] 'insert' expects a table but dim_airport_unharmonised is a view. Can you please help with how to resolve this? Thanks, Uma Devi

2 More Replies
ManojkMohan
by Honored Contributor II
  • 749 Views
  • 1 reply
  • 2 kudos

Best practices: Silver Layer to Salesforce

Need the community's view to evaluate my solution against best practices. The problem I am solving is reading match data from a CSV, which was uploaded into a volume; then I clean and transfo...

Data Engineering
Bestpractice
Latest Reply
-werners-
Esteemed Contributor III
  • 2 kudos

- Skip the pandas conversion.
- Persist the transformed data in a Databricks table and then write to Salesforce.
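Whatever client ends up doing the write, the Salesforce side usually forces batching; below is a minimal, runnable sketch of chunking silver-table rows before posting. Assumptions to note: the batch size of 200 matches the sObject Collections per-request record limit (the Bulk API allows far larger batches), and the row dicts are made up for illustration.

```python
def chunk(records, size=200):
    """Yield successive fixed-size batches from a list of records."""
    for i in range(0, len(records), size):
        yield records[i:i + size]

# Hypothetical rows pulled from the silver table.
rows = [{"Name": f"match-{i}"} for i in range(450)]

batches = list(chunk(rows))
print([len(b) for b in batches])  # [200, 200, 50]
```

Each batch would then go into one API call, which keeps request counts (and Salesforce API-limit consumption) predictable.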

seefoods
by Valued Contributor
  • 1875 Views
  • 10 replies
  • 4 kudos

Resolved! Sync Delta table to NoSQL

Hello guys, what is the best way to build a sync process that syncs data between two database engines, like a Delta table and a NoSQL table (Mongo)? Thanks. Cordially,

Latest Reply
nayan_wylde
Esteemed Contributor II
  • 4 kudos

The other option I can think of is change streams. Here is a blog post on it: https://contact-rajeshvinayagam.medium.com/mongodb-changestream-spark-delta-table-an-alliance-a70962133b95
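The core of a change-stream sync is mapping each event to an action on the target table; here is a runnable sketch of that mapping in plain Python. The event shape mirrors MongoDB change-stream documents (`operationType`, `documentKey`, `fullDocument`), and the actual apply step (e.g. a Delta MERGE) is deliberately left out since it depends on your Spark setup.

```python
def to_action(event: dict) -> dict:
    """Translate one MongoDB change-stream event into an upsert/delete action
    a downstream job could apply to a Delta table.
    Note: update events carry fullDocument only when the stream is opened
    with fullDocument='updateLookup'."""
    op = event["operationType"]
    key = event["documentKey"]["_id"]
    if op in ("insert", "update", "replace"):
        return {"op": "upsert", "key": key, "doc": event.get("fullDocument")}
    if op == "delete":
        return {"op": "delete", "key": key}
    return {"op": "ignore", "key": key}  # e.g. drop, rename, invalidate

events = [
    {"operationType": "insert", "documentKey": {"_id": 1},
     "fullDocument": {"_id": 1, "score": 10}},
    {"operationType": "delete", "documentKey": {"_id": 1}},
]
print([to_action(e) for e in events])
```

Keeping this translation pure makes it easy to unit-test before wiring it to a live stream and a MERGE statement.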

9 More Replies
collierd
by New Contributor III
  • 2090 Views
  • 7 replies
  • 5 kudos

Resolved! timestamp date filter does not work

Hello. I have a column called LastUpdated defined as timestamp. If I select from the table, it displays as (e.g.) 2025-08-27T10:50:31.610+00:00. How do I filter on this without having to be specific with the year, month, day, ...? This does not work: select *...

Latest Reply
Pilsner
Databricks Partner
  • 5 kudos

Hello @collierd, the way I would tackle this involves date-time format specifiers. Because your value is likely stored as a timestamp, which you can see via Catalog Explorer, you cannot compare it to a string value such as "2025-08-27T10:50:31.610+0...
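The same idea in plain Python: parse the full timestamp and compare only the part you care about, rather than string-matching the exact value. This assumes ISO-8601 input like the example above; in SQL the analogous move would be something like `CAST(LastUpdated AS DATE) = '2025-08-27'`.

```python
from datetime import date, datetime

# Parse the full timestamp (offset included), then compare at day granularity,
# so the time-of-day component no longer matters.
ts = datetime.fromisoformat("2025-08-27T10:50:31.610+00:00")

print(ts.date() == date(2025, 8, 27))  # True regardless of the time of day
```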

6 More Replies
ManojkMohan
by Honored Contributor II
  • 787 Views
  • 3 replies
  • 1 kudos

Resolved! Silver layer to Salesforce - Need Help Debugging - IllegalArgumentException: Secret does not exist

I have ingested raw data and converted it into a Bronze table. Subsequently, I saved the DataFrame as a Delta table in the 'silver' schema. As part of sending data from the silver table to Salesforce: Install & authenticate the Databricks CLI - Done. Create the secret s...

Latest Reply
ManojkMohan
Honored Contributor II
  • 1 kudos

@szymon_dybczak Resolved it now. I had to use commands specific to Databricks CLI v0.265.0.

2 More Replies
Anubhav2011
by New Contributor II
  • 699 Views
  • 1 reply
  • 0 kudos

Static Table Creation in DLT

We're encountering a specific issue in our DLT pipeline and would appreciate some advice. Here's an example to illustrate the challenge we're facing. Tables Overview: Material Master: Contains comprehensive material data updated daily with new records. ...

Latest Reply
ilir_nuredini
Honored Contributor
  • 0 kudos

Hello @Anubhav2011, from your question, it sounds like you want the output to appear in the Catalog UI as an actual table, not a materialized view (MV). In DLT, datasets derived from other DLT datasets are shown as MVs (or streaming tables). They're st...

Travis84
by New Contributor II
  • 1266 Views
  • 4 replies
  • 3 kudos

Can I get more details on the performance differences between pyodbc and SQL Connector for Python?

This article (Connect Python and pyodbc to Databricks | Databricks on AWS) states the following: "However pyodbc may have better performance when fetching query results above 10 MB." This is a bit vague. The word "may" implies "maybe not". Also, "bett...

Latest Reply
WiliamRosa
Databricks Partner
  • 3 kudos

Hi @Travis84, I came across an article that might help you, which makes the following comparison: a blog on high-bandwidth connections using Databricks' Cloud Fetch optimization (leveraging parallel data transfer via pre-signed URLs) reported up to...
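Since "may have better performance" is hard to act on, the most reliable answer is to time both connectors against your own workload. Below is a runnable sketch of a small benchmark harness: it times only the fetch, repeats a few runs, and keeps the best. The dummy fetcher is a stand-in; in practice you'd swap in a pyodbc cursor and a databricks-sql-connector cursor running the same query.

```python
import time

def time_fetch(fetch, repeats: int = 3) -> float:
    """Return the best wall-clock time in seconds over `repeats` runs,
    which damps one-off noise like connection warm-up."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fetch()  # e.g. cursor.execute(sql); cursor.fetchall()
        best = min(best, time.perf_counter() - start)
    return best

def dummy_fetch():
    # Placeholder workload standing in for a real result fetch.
    sum(range(100_000))

print(f"best of 3: {time_fetch(dummy_fetch):.4f}s")
```

Running this once per connector, over result sets both above and below the 10 MB threshold the article mentions, would turn the docs' hedge into numbers for your environment.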

3 More Replies