Community Articles
Dive into a collaborative space where members like YOU can exchange knowledge, tips, and best practices. Join the conversation today and unlock a wealth of collective wisdom to enhance your experience and drive success.

Forum Posts

shubham_meshram
by Databricks Partner
  • 2653 Views
  • 1 reply
  • 1 kudos

When Did the Data Go Wrong? Using Delta Lake Time Travel for Investigation in Databricks

I. Introduction: Data pipelines are the lifeblood of modern data-driven organizations. However, even the most robust pipelines can experience unexpected issues: data corruption, erroneous updates, or sudden data drops. When these problems occur, quickl...

Latest Reply
deepakachary9
New Contributor II
  • 1 kudos

Great thought to use Delta time travel to determine when data drift starts! But this only works as long as retention policies allow it. With VACUUM and stricter runtime enforcement in newer Databricks versions, older snapshots may not be there when you need ...

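The investigation described in this post and reply can be sketched in Databricks SQL; the catalog, schema, table names, version numbers, and timestamp below are hypothetical:

```sql
-- Inspect the commit history to find the suspicious write
DESCRIBE HISTORY main.sales.orders;

-- Compare row counts just before and after a suspect commit
SELECT COUNT(*) FROM main.sales.orders VERSION AS OF 41;
SELECT COUNT(*) FROM main.sales.orders VERSION AS OF 42;

-- Or query the table as it looked at a point in time
SELECT * FROM main.sales.orders TIMESTAMP AS OF '2025-03-30T12:00:00';
```

As the reply notes, this only works within the retention window: once VACUUM has removed the underlying files, those older versions can no longer be queried.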
Louis_Frolio
by Databricks Employee
  • 644 Views
  • 7 replies
  • 6 kudos

What's Your Biggest AI Pet Peeve?

Hey Team, in my last post I asked how much AI has actually changed your day-to-day, and the responses were fantastic. But let's talk about the other side for a minute. I'll go first — I've started second-guessing almost everything I see on social med...

Latest Reply
balajij8
Contributor III
  • 6 kudos

AI tools generate code & pipelines that work functionally but ignore efficiency, scalability & cloud implications. I have bumped into the following: code generated by AI does a SQL cross join, as it's simple for most natural language queries; that works but kills...

6 More Replies
Ashwin_DSA
by Databricks Employee
  • 1897 Views
  • 3 replies
  • 9 kudos

Databricks Multi-Table Transactions - Part 1

If you've ever worked on an insurance data warehouse, or really any warehouse where data arrives from different systems at different times, you know the pain of keeping things in sync. I spent years building data warehouses for a property and casual...

Latest Reply
Ashwin_DSA
Databricks Employee
  • 9 kudos

Link to Part 2   

2 More Replies
Ashwin_DSA
by Databricks Employee
  • 404 Views
  • 0 replies
  • 2 kudos

Databricks Multi-Table Transactions - Part 2

In Part 1, we covered why multi-table transactions matter. Now let's build one. We'll create the tables from the claim wrap-up scenario, load sample P&C insurance data, and walk through what happens when the wrap-up succeeds, when it fails, and when...

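A hypothetical sketch of the claim wrap-up flow these posts describe, written as a multi-statement transaction. The BEGIN TRANSACTION / COMMIT syntax is assumed from the preview, and the table and column names are illustrative rather than taken from the articles:

```sql
BEGIN TRANSACTION;
  -- All three writes succeed together or not at all
  UPDATE claims   SET status = 'CLOSED' WHERE claim_id = 1001;
  UPDATE reserves SET amount = 0        WHERE claim_id = 1001;
  INSERT INTO wrap_up_log VALUES (1001, current_timestamp(), 'wrap-up complete');
COMMIT;
-- If any statement fails, a ROLLBACK leaves all three tables unchanged
```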
Emil_Kaminski
by Contributor II
  • 17003 Views
  • 3 replies
  • 8 kudos

Materials to pass Databricks Data Engineering Associate Exam

Hi Guys, I passed it some time ago, but just recently I summarized all the materials which helped me do it. Pay special attention to the GitHub repository, which contains many great exercises prepared by the Databricks team: https://youtu.be...

Latest Reply
Max_John
New Contributor III
  • 8 kudos

Cleared Databricks Data Engineering Associate recently. Practising real questions helped me a lot, and (Certs Topic) was a reliable resource.

2 More Replies
Ashwin_DSA
by Databricks Employee
  • 639 Views
  • 1 reply
  • 3 kudos

Solving Multi-Dimension Analytics in Databricks Dashboards with Views and Metric Views

If you've ever built a dashboard where you needed to track the same data across two different date dimensions, you know the frustration. You get the first chart working. You add the second. Then you realise cross-filtering just stopped working. I re...

Latest Reply
Nidhig
Databricks Partner
  • 3 kudos

Thanks for sharing a great example with a detailed explanation.

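One common workaround for the two-date-dimension problem the post describes is a view that emits one row per date role, so every chart filters on a single shared date column; the table and column names here are hypothetical:

```sql
CREATE OR REPLACE VIEW orders_by_date_role AS
SELECT order_id, amount, order_date AS event_date, 'ordered' AS date_role
FROM   orders
UNION ALL
SELECT order_id, amount, ship_date  AS event_date, 'shipped' AS date_role
FROM   orders;

-- Both charts now slice on event_date (plus date_role), so cross-filtering
-- operates on one date column instead of two.
```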
Kirankumarbs
by Contributor III
  • 696 Views
  • 0 replies
  • 1 kudos

How to actually get job_id and run_id in a Databricks Python wheel task (Avoid Hallucinations)

We needed job_id and run_id in a custom metrics Delta table so we could join to `system.lakeflow.job_run_timeline`. Tried four approaches before finding the one that works on serverless compute. What doesn't work: spark.conf.get("spark.databricks.job.id...

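The post's snippet cuts off before the working approach, so here is one commonly used pattern consistent with its theme: pass the dynamic value references {{job.id}} and {{job.run_id}} as wheel-task parameters and parse them in the entry point. The parameter names and helper below are assumptions for illustration, not the author's exact solution:

```python
import argparse

def parse_job_context(argv):
    """Parse job_id / run_id passed in as wheel-task parameters.

    In the job definition, the task's parameters would be set to, e.g.,
    ["--job-id", "{{job.id}}", "--run-id", "{{job.run_id}}"], and Databricks
    substitutes the real values before the entry point runs.
    """
    parser = argparse.ArgumentParser()
    parser.add_argument("--job-id", default=None)
    parser.add_argument("--run-id", default=None)
    # parse_known_args ignores any unrelated task parameters
    args, _ = parser.parse_known_args(argv)
    return args.job_id, args.run_id

# As the entry point would receive them, after substitution:
job_id, run_id = parse_job_context(["--job-id", "123", "--run-id", "456"])
```

The parsed values can then be written alongside the custom metrics so the table joins cleanly to `system.lakeflow.job_run_timeline`.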
emma_s
by Databricks Employee
  • 485 Views
  • 0 replies
  • 5 kudos

Create an MCP for Azure DevOps To Use With Genie Code

Overview: Prompted by a customer question, I wanted to see what was possible in terms of MCP integration into Genie Code. To try this out, I decided to look at Azure DevOps, as it's a common workflow to want to see your tickets alongside the ...

Community Articles
azure devops
Genie Code
MCP
Yogesh_Verma_
by Contributor II
  • 1003 Views
  • 3 replies
  • 1 kudos

PostgreSQL to Databricks made simpler with Lakeflow Connect (Public Preview)

Databricks has introduced a PostgreSQL connector in Lakeflow Connect (Public Preview), enabling ingestion of PostgreSQL data into the Lakehouse using logical replication. Ins...

Latest Reply
balajij8
Contributor III
  • 1 kudos

You can enable or disable previews using the account console for account-level previews and the workspace for workspace-level previews. More details here.

2 More Replies
balajij8
by Contributor III
  • 455 Views
  • 0 replies
  • 1 kudos

Turning Lakehouse into Brainhouse via Knowledge Graphs

Organizations solved the challenge of collecting, cleaning & governing structured data at scale via Delta Lake and Unity Catalog in the Lakehouse. You have world-class lineage, permissions, RBAC, ABAC, and schemas as the nervous system. The nervous system...

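To make the knowledge-graph idea concrete, here is a minimal, illustrative sketch in plain Python: table lineage (the "nervous system" above) represented as a directed graph and queried for downstream impact. All table names are hypothetical:

```python
from collections import defaultdict

# (upstream_table, downstream_table) lineage pairs, as Unity Catalog
# lineage might expose them; names are made up for illustration
lineage = [
    ("raw.orders", "silver.orders_clean"),
    ("silver.orders_clean", "gold.daily_revenue"),
    ("silver.orders_clean", "gold.customer_ltv"),
]

# Adjacency list: table -> tables it feeds directly
graph = defaultdict(list)
for upstream, downstream in lineage:
    graph[upstream].append(downstream)

def downstream_of(table, graph):
    """All tables transitively fed by `table` (iterative depth-first walk)."""
    seen, stack = set(), [table]
    while stack:
        for nxt in graph[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# Impact question: if raw.orders goes bad, what is affected downstream?
impacted = downstream_of("raw.orders", graph)
```

A real knowledge graph would add typed edges and entities beyond lineage, but the query pattern (walk relationships, not joins) is the same.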
Gaurav11
by Databricks Partner
  • 654 Views
  • 2 replies
  • 1 kudos

Building a Large-Scale Supply Chain Simulation Platform on Databricks

A Data & AI–Driven Decision Engine for Modern Retail Networks. Introduction: In modern retail, supply chains are no longer static networks — they are living, adaptive systems that must continuously respond to customer demand, fulfillment speed expectatio...

Latest Reply
StaniGora
New Contributor II
  • 1 kudos

Great article! Would love to know more, as I have a very similar case with a concrete customer. Thanks, S.

1 More Reply
Ale_Armillotta
by Valued Contributor II
  • 490 Views
  • 0 replies
  • 1 kudos

It's time to treat AI as a peer, not a tool. What if your AI already knew Databricks?

We need to stop treating AI as a tool. It's time to treat it as a peer. I've been building a library of reusable skills for Claude — structured instructions that let AI agents handle complex, repetitive development workflows on Databricks and Azure AI...

mou
by Databricks Partner
  • 507 Views
  • 0 replies
  • 1 kudos

I Tried Building an Agentic AI System for Construction. Here’s What Actually Worked.

Most construction teams don’t really have a data problem, at least not in the way we usually think about it. They already have dashboards everywhere. Finance has reports, project managers have schedule views, field teams have inspection logs. Everyon...
