Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Mohit_m
by Databricks Employee
  • 43780 Views
  • 4 replies
  • 4 kudos

Resolved! How to get the Job ID and Run ID and save into a database

We have a Databricks Job running with a main class and a JAR file. Our JAR code base is in Scala. When the job starts running, we need to log the Job ID and Run ID into a database for future use. How can we achieve this?

Latest Reply
Kirankumarbs
Contributor III
  • 4 kudos

I came across a similar requirement and solved it with named parameters. I wrote a community blog post about it.

3 More Replies
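A minimal sketch of the named-parameter approach the reply describes, shown in Python for brevity even though the thread's job is a Scala JAR. The flag names and the dynamic value references are assumptions (older jobs use {{job_id}}/{{run_id}}, newer syntax is {{job.id}}/{{job.run_id}}), and the database write itself is omitted:

```python
import argparse

def parse_job_context(argv):
    """Parse the job and run identifiers passed in as named parameters."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--job_id", required=True)
    parser.add_argument("--run_id", required=True)
    args = parser.parse_args(argv)
    return args.job_id, args.run_id

# In the job definition the parameters would be dynamic value references,
# e.g. ["--job_id", "{{job.id}}", "--run_id", "{{job.run_id}}"]; the values
# below simulate what the runtime substitutes before the entry point runs.
job_id, run_id = parse_job_context(["--job_id", "1017", "--run_id", "8432"])

# From here, persist (job_id, run_id) with your usual JDBC/ORM code.
print(job_id, run_id)
```

The same pattern translates directly to Scala with any argument parser: the key idea is that the job passes the IDs in as parameters, so the application code never has to look them up.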
IM_01
by Contributor III
  • 1261 Views
  • 11 replies
  • 6 kudos

Resolved! OrderBy is not sorting the results

Hi, I am currently using Lakeflow SDP. First I create two views, then join them and create a materialized view, using ORDER BY in the materialized view create function, but the results are not sorted. Does ORDER BY not work on materializ...

Latest Reply
IM_01
Contributor III
  • 6 kudos

Thanks Ashwin

10 More Replies
IM_01
by Contributor III
  • 434 Views
  • 3 replies
  • 0 kudos

Structured streaming error- NON_TIME_WINDOW_NOT_SUPPORTED_IN_STREAMING

Hi, I was using the window functions row_number(), min, and sum in the code, and the Lakeflow SDP pipeline was failing with the error NON_TIME_WINDOW_NOT_SUPPORTED_IN_STREAMING - Window function is not supported on streaming dataframes. What is the recommended a...

Latest Reply
IM_01
Contributor III
  • 0 kudos

@Louis_Frolio, suppose I use foreachBatch: I might end up with duplicates, as the state is not maintained. Can you please share more information on max_by?

2 More Replies
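On the max_by question: Spark SQL's max_by(x, y) aggregate returns the value of x for the row with the maximum y, which can often stand in for a row_number()-over-window "latest record" pattern that streaming rejects. A pure-Python illustration of the semantics (not Spark code; the field names are made up):

```python
def max_by(rows, x, y):
    """Return row[x] for the row with the largest row[y],
    mirroring Spark SQL's max_by(x, y) aggregate."""
    best = max(rows, key=lambda r: r[y])
    return best[x]

events = [
    {"user": "a", "amount": 10, "ts": 1},
    {"user": "a", "amount": 30, "ts": 3},
    {"user": "a", "amount": 20, "ts": 2},
]
# Latest amount for the key, without a non-time window function:
print(max_by(events, "amount", "ts"))  # -> 30
```

In Spark this becomes a plain GROUP BY with max_by(amount, ts), which is a supported streaming aggregation (with a watermark), unlike row_number() over a non-time window.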
TheBeacon
by New Contributor II
  • 2513 Views
  • 5 replies
  • 2 kudos

Exploring Postman Alternatives for API Testing in VSCode?

Has anyone here explored Postman alternatives within VSCode? I’ve seen mentions of Thunder Client and Apidog. Would love to know if they offer a smoother integration or better functionality.

Latest Reply
-werners-
Esteemed Contributor III
  • 2 kudos

I may be old-fashioned, but curl is the only valid answer IMO.

4 More Replies
yit337
by Contributor
  • 276 Views
  • 2 replies
  • 1 kudos

Is it required to run Lakeflow Connect on Serverless?

As the subject states, my question is: is it required to run the Ingestion Pipeline in Lakeflow Connect on serverless compute? I tried to define my own cluster in the DAB, but it raises an error: `Error: cannot create pipeline: You cannot provide c...

Latest Reply
saurabh18cs
Honored Contributor III
  • 1 kudos

Yes, Lakeflow Connect ingestion pipelines always run on serverless compute. Databricks overrides your compute config and switches back to serverless, because the ingestion connector requires it.

1 More Replies
bhargavabasava
by New Contributor III
  • 948 Views
  • 2 replies
  • 1 kudos

Support for JDBC writes from serverless compute

Hi team, are there any plans to support JDBC writes using serverless compute?

Latest Reply
CarlosPH
Databricks Partner
  • 1 kudos

Hello! And what is the standard way to write to an external database through Databricks? General purpose compute? Thanks very much.

1 More Replies
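For the follow-up question: on classic (general purpose) compute, the usual route is the Spark JDBC data source. A hedged sketch with placeholder connection details (the host, table, and credentials below are hypothetical; the actual write call is commented out because it needs a cluster):

```python
# Placeholder connection details for an external Postgres database.
jdbc_url = "jdbc:postgresql://db.example.com:5432/analytics"
connection_props = {
    "user": "etl_user",
    "password": "<fetch via dbutils.secrets.get in practice>",
    "driver": "org.postgresql.Driver",
}

# On a classic cluster this would be:
# df.write.jdbc(url=jdbc_url, table="public.job_runs",
#               mode="append", properties=connection_props)
```

Serverless is a different story, which is exactly what the original post is asking about: the classic spark.write.jdbc path is what you would fall back to on general purpose compute today.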
JIWON
by New Contributor III
  • 449 Views
  • 2 replies
  • 3 kudos

Resolved! Questions on Auto Loader auto Listing Logic

Hi everyone, I’m investigating some performance patterns in our Auto Loader (S3) pipelines and would like to clarify the internal listing logic. Context: we run a batch job every hour using Auto Loader. Recently, after March 10th, we noticed our execut...

Latest Reply
aleksandra_ch
Databricks Employee
  • 3 kudos

Hi @JIWON, 1. There is no such option; 2. Assuming the job is triggered every hour, the spikes every 8 hours can be explained by this: to ensure eventual completeness of data in auto mode, Auto Loader automatically triggers a full directory lis...

1 More Replies
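The periodic full listing the reply describes can be tuned with the cloudFiles.backfillInterval option when file notifications are in use; a hedged sketch (the format and interval values are examples, and whether the option applies depends on the discovery mode your stream actually runs in):

```python
# Auto Loader options relevant to periodic full-listing spikes.
autoloader_opts = {
    "cloudFiles.format": "json",
    # With file notifications, controls how often Auto Loader runs a
    # full directory listing to guarantee eventual completeness:
    "cloudFiles.backfillInterval": "1 day",
}
# On a cluster:
# spark.readStream.format("cloudFiles").options(**autoloader_opts).load(path)
```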
jacovangelder
by Databricks MVP
  • 4921 Views
  • 4 replies
  • 10 kudos

How do you define PyPi libraries on job level in Asset Bundles?

Hello, the documentation does not state that it is possible to define libraries at the job level instead of at the task level. It feels really counter-intuitive to put libraries at task level in Databricks workflows provisioned by Asset Bundles. Is th...

Latest Reply
jacovangelder
Databricks MVP
  • 10 kudos

Thanks @Witold! Thought so. I decided to go with an init script where I install my dependencies, rather than installing libraries. For future reference, this is what it looks like:
job_clusters:
  - job_cluster_key: job_cluster
    new_cluster: ...

3 More Replies
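Expanded from the reply's snippet, a hedged sketch of what the bundle YAML might look like in full (the Spark version, node type, and init script path are placeholders, not values from the thread):

```yaml
job_clusters:
  - job_cluster_key: job_cluster
    new_cluster:
      spark_version: 15.4.x-scala2.12
      node_type_id: Standard_DS3_v2
      num_workers: 2
      init_scripts:
        - workspace:
            destination: /Workspace/init/install_deps.sh
```

Because job_clusters is defined once per job and referenced by job_cluster_key from each task, the init script effectively gives you job-level dependencies without repeating libraries on every task.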
zenwanderer
by New Contributor II
  • 516 Views
  • 4 replies
  • 0 kudos

Kill/Cancel a Notebook Cell Running Too Long on an All-purpose Cluster

Hi everyone, I’m facing an issue when running a notebook on a Databricks all-purpose cluster. Some of my cells/pipelines run for a very long time, and I want to automatically cancel/kill them when they exceed a certain time limit. I tried setting spar...

Latest Reply
MoJaMa
Databricks Employee
  • 0 kudos

@zenwanderer Have you looked into Query Watchdog? For Classic All-Purpose clusters this might be your best bet. https://docs.databricks.com/aws/en/compute/troubleshooting/query-watchdog

3 More Replies
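A sketch of the Query Watchdog settings the linked page covers; the threshold value below is an example, not a recommendation:

```python
# Spark confs for Query Watchdog on a classic all-purpose cluster.
watchdog_confs = {
    "spark.databricks.queryWatchdog.enabled": "true",
    # Cancel queries whose output row count exceeds this multiple of the
    # number of input rows read (example threshold):
    "spark.databricks.queryWatchdog.outputRatioThreshold": "1000",
}
# On a cluster:
# for key, value in watchdog_confs.items():
#     spark.conf.set(key, value)
```

Note that Watchdog targets runaway queries by output/input ratio rather than wall-clock time, so it complements but does not replace a hard per-cell timeout.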
guidotognini
by New Contributor II
  • 432 Views
  • 2 replies
  • 2 kudos

Resolved! Rename Column Name of Streaming Table in Lakeflow Spark Declarative Pipeline

Hi, I would like to know if it is possible to rename a column of a streaming table defined in a Lakeflow Spark Declarative Pipeline without having to run a full refresh. Could you give me any ideas on how I can achieve this?

Latest Reply
balajij8
Contributor III
  • 2 kudos

You can:
  • Update the pipeline code to rename the old column and trigger an incremental update (old_column and new_column both exist after it).
  • Old data will have NULL for new_column after the incremental update; update the table to fill new_column for such cases from old_co...

1 More Replies
AnilKumarM
by New Contributor
  • 539 Views
  • 3 replies
  • 1 kudos

Best-practice structure for config.yaml, utils, and databricks.yaml in ML project (Databricks)

Hi everyone, I’m working on an ML project in Databricks and want to design a clean, scalable, and production-ready project structure. I’d really appreciate guidance from those with real-world experience. My requirement: I need to organize my project ...

Latest Reply
Ashwin_DSA
Databricks Employee
  • 1 kudos

Hi @AnilKumarM, Agree with @-werners- here. There isn’t a single 'one true' repo layout we mandate, but there are a few public references that show the patterns Databricks recommends. For bundles/databricks.yml + multi‑env, you may want to check the ...

2 More Replies
maikel
by Contributor II
  • 1089 Views
  • 5 replies
  • 1 kudos

Resolved! SQL schemas migration

Hello Community! I would like to ask for your recommendations on SQL schema migration best practices. In our project we currently have different SQL schema definitions and data seeding saved in SQL files. Since we are going to higher environm...

Latest Reply
maikel
Contributor II
  • 1 kudos

@anuj_lathi and @Louis_Frolio, thank you very much! This is a really great approach and example!

4 More Replies
js5
by New Contributor II
  • 444 Views
  • 1 reply
  • 0 kudos

Resolved! UNSUPPORTED_TIME_TYPE despite 18.1 runtime?

Hello, I have tried using the TimeType data type, which is supported since Spark 4.1: https://spark.apache.org/docs/latest/sql-ref-datatypes.html. I am unfortunately still getting the UNSUPPORTED_TIME_TYPE error when trying to run display() on a pandas dataframe ...

Latest Reply
Ashwin_DSA
Databricks Employee
  • 0 kudos

Hi @js5, This is expected today on Databricks. You can check this out for reference. Spark 4.1 introduces a standard TIME type (TimeType) in the SQL type system, and Databricks runtimes based on Spark 4.x already expose it at the engine level (for ex...

malterializedvw
by New Contributor III
  • 980 Views
  • 8 replies
  • 3 kudos

Parametrizing queries in DAB deployments

Hi folks, I would like to ask about best practices for parametrizing queries in Databricks Asset Bundle deployments. This topic is relevant for differentiating between deployments in different environments, as well as [dev]-deployments vs...

Latest Reply
-werners-
Esteemed Contributor III
  • 3 kudos

Hm, IDENTIFIER({{var}} || string) should work for CREATE statements with DAB. I also spent way too much time on AI giving me wrong answers (Jinja templating format in the first place). Mind that there are no spaces in {{var}}. BUT there are some lim...

7 More Replies
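A sketch of the pattern the reply describes, as it might appear in a SQL task deployed by a bundle. The parameter name and table are hypothetical; it is built here as a Python string only so the no-spaces-inside-{{var}} detail is visible:

```python
# {{catalog}} is a query parameter substituted at run time; note there are
# no spaces inside the braces, per the reply above. IDENTIFIER() lets the
# engine treat the concatenated string as a table name.
create_stmt = (
    "CREATE TABLE IF NOT EXISTS "
    "IDENTIFIER({{catalog}} || '.bronze.events') "
    "(id BIGINT, ts TIMESTAMP)"
)
```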
Neelimak
by Databricks Partner
  • 717 Views
  • 5 replies
  • 3 kudos

Resolved! ingestion pipeline configuration

When trying to create an ingestion pipeline, the auto-generated cluster hits quota limit errors. The type of VM it is trying to use is not available in our region, and there seems to be no way to add a fallback to different VM types. Can you please help h...

Latest Reply
Ashwin_DSA
Databricks Employee
  • 3 kudos

Hi @Neelimak, Thanks for the feedback. I've now passed the feedback to our product team.   

4 More Replies