Self-Paced Learning Festival: 09 January - 30 January 2026

Grow Your Skills and Earn Rewards! Mark your calendar: January 09 – January 30, 2026. Join us for a three-week event dedicated to learning, upskilling, and advancing your career in data engineering, analytics, machine learning, and generative AI. ...

  • 51329 Views
  • 140 replies
  • 81 kudos
12-09-2025
🎤 Call for Presentations: Data + AI Summit 2026 is Open!

June 15–18, 2026. Are you building the future with data and AI? Then this is your moment. The Call for Proposals for Data + AI Summit 2026 is officially open, and we want to hear from builders, practitioners, and innovators across the data and AI com...

  • 4679 Views
  • 4 replies
  • 6 kudos
4 weeks ago
Level Up with Databricks Specialist Sessions

How to Register & Prepare: If you're interested in advancing your skills with Databricks through a Specialist Session, here's a clear guide on how to register and what free courses you can take to prepare effectively. How to Begin Your Learning Path S...

  • 3725 Views
  • 2 replies
  • 9 kudos
10-02-2025
Solution Accelerator Series | Scale cybersecurity analytics with Splunk and Databricks

Strengthening cybersecurity isn’t a one-time project — it’s an ongoing process of adapting to new and complex threats. The focus is on modernizing your data infrastructure and analytics capabilities to make security operations smarter, faster, and mo...

  • 375 Views
  • 1 reply
  • 2 kudos
Tuesday
🎬 Databricks Community 2025 Highlights | A Year, Built Together

2025 wasn’t just another year on the Databricks Community. It was a year of showing up, helping each other, and building trust. Watch the 2025 Community Wrap-Up: this video is a small reflection of that journey. It highlights the key contributors of 2...

  • 541 Views
  • 12 replies
  • 17 kudos
Wednesday
🌟 Community Pulse: Your Weekly Roundup! December 22, 2025 – January 04, 2026

The calendar changed. The conversations didn’t. The year turned somewhere between a solved thread and a new question. No pause. No reset – just learning, continuing right where it left off. Here’s how it unfolded: Voices That Moved the Conversatio...

  • 155 Views
  • 2 replies
  • 4 kudos
Tuesday

Community Activity

jfvizoso
by New Contributor II
  • 12892 Views
  • 6 replies
  • 0 kudos

Can I pass parameters to a Delta Live Table pipeline at running time?

I need to execute a DLT pipeline from a Job, and I would like to know if there is any way of passing a parameter. I know you can have settings in the pipeline that you use in the DLT notebook, but it seems you can only assign values to them when crea...
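For anyone landing here from search, a minimal sketch of the configuration-based approach discussed in the thread, assuming a key named source_path has been added under the pipeline's Configuration settings (the key name and path are illustrative, and spark is the ambient session in a Databricks notebook):

```python
# Minimal sketch: read a DLT pipeline configuration value inside the pipeline
# notebook. Assumes "source_path" was added under the pipeline's
# Advanced > Configuration settings; the key and default are placeholders.
import dlt
from pyspark.sql import functions as F

# Pipeline configuration key/value pairs are surfaced through the Spark conf.
source_path = spark.conf.get("source_path", "/mnt/landing/default")

@dlt.table(comment="Raw events loaded from the configured source path")
def raw_events():
    return (
        spark.read.format("json")
        .load(source_path)
        .withColumn("_ingested_at", F.current_timestamp())
    )
```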

Latest Reply
Sudharsan
New Contributor II
  • 0 kudos

@DeepakAI: May I know how you resolved it?

5 More Replies
Phani1
by Databricks MVP
  • 3125 Views
  • 8 replies
  • 0 kudos

Triggering DLT Pipelines with Dynamic Parameters

Hi Team, We have a scenario where we need to pass a dynamic parameter to a Spark job that will trigger a DLT pipeline in append mode. Can you please suggest an approach for this? Regards, Phani

Latest Reply
Sudharsan
New Contributor II
  • 0 kudos

@koji_kawamura: I have more or less the same scenario. Say I have 3 tables; the sources and targets are different, but I would like to use a generic pipeline, pass in the source and target as parameters, and run them in parallel. @sas30: can you be m...
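One way to approximate per-run parameters, sketched under assumptions rather than taken from this thread: rewrite the pipeline's configuration through the Pipelines API, then start an update. The pipeline ID and the run_date key are placeholders; since the update call replaces the spec, the existing fields are read back first and carried over.

```python
# Rough sketch (Databricks SDK for Python): push a per-run value into a DLT
# pipeline's configuration, then trigger an update that can read it via
# spark.conf.get("run_date"). IDs and keys below are placeholders.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
pipeline_id = "<your-pipeline-id>"

# Read the current spec so the update only changes the configuration map.
spec = w.pipelines.get(pipeline_id).spec
config = dict(spec.configuration or {})
config["run_date"] = "2026-01-09"

w.pipelines.update(
    pipeline_id=pipeline_id,
    name=spec.name,
    catalog=spec.catalog,
    target=spec.target,
    libraries=spec.libraries,
    clusters=spec.clusters,
    configuration=config,
)

# Kick off a triggered run with the new configuration in place.
w.pipelines.start_update(pipeline_id=pipeline_id)
```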

7 More Replies
Jim_Anderson
by Databricks Employee
  • 511 Views
  • 1 reply
  • 2 kudos

FAQ for Databricks Learning Festival (Virtual): 09 January - 30 January 2026

General Q: How can I check whether I have completed the required modules? Log in to your Databricks Customer Academy account and select ‘My Activities’ within the user menu in the top left. Under the Courses tab, you can verify whether all courses wit...

Latest Reply
willmusgrave
New Contributor II
  • 2 kudos

Hi Jim, I believe this post might have a typo. "The voucher will be valid for approximately 90 days and will expire before 30 January 2026. You must schedule and sit for the exam before this date. We strongly recommend booking the slots early, since ...

RevanthV
by New Contributor III
  • 10 Views
  • 0 replies
  • 0 kudos

Data validation with df writes using append mode

Hi Team, Recently I came across a situation where I had to write a huge dataset, and it took 6 hrs to complete... Later, when I checked the target data, I saw 20% of the total records were written incorrectly or corrupted because the source data itself was corr...
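One way to guard an append like this, sketched with illustrative column names, paths, and validity rules: split off the bad rows before the write so corrupted source records never reach the target, and quarantine them for inspection.

```python
# Illustrative pre-write validation: reject rows with null keys or
# non-numeric amounts instead of appending them to the target table.
from pyspark.sql import functions as F

raw = spark.read.format("parquet").load("/mnt/landing/orders")

is_valid = (F.col("order_id").isNotNull()
            & F.col("amount").cast("decimal(18,2)").isNotNull())

valid_rows = raw.filter(is_valid)
bad_rows = raw.filter(~is_valid)

# Quarantine rejects for inspection rather than writing them to the target.
bad_rows.write.mode("append").format("delta").save("/mnt/quarantine/orders")

# Only validated rows reach the target.
valid_rows.write.mode("append").saveAsTable("main.sales.orders")
```

Delta CHECK constraints (or DLT expectations) are an alternative when the rules can be expressed declaratively, since they reject or drop bad rows at write time.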

Saad_Recruiter
by Visitor
  • 7 Views
  • 0 replies
  • 0 kudos

Databricks Developer Atlanta GA

Hello, This is Saad from DS Technologies. I’m a Tech Recruiter connecting with professionals for current and upcoming technology opportunities. Job Title: Databricks Developer | Location: Atlanta, GA | Duration: 18 to 24 months Contract. Job Description: L...

Saad_Recruiter
by Visitor
  • 11 Views
  • 0 replies
  • 0 kudos

Databricks Developer Atlanta

Hello, This is Saad from DS Technologies. I’m a Tech Recruiter connecting with professionals for current and upcoming technology opportunities. Job Title: Databricks Developer | Location: Atlanta, GA | Duration: 18 to 24 months Contract. Job Description: L...

ramsai
by New Contributor
  • 30 Views
  • 4 replies
  • 1 kudos

Updating Job Creator to Service Principal

Regarding data governance best practices: I have jobs created by a user who has left the organization, and I need to change the job creator to a service principal. Currently, it seems the only option is to clone the job and update it. Is this the rec...

Latest Reply
nayan_wylde
Esteemed Contributor
  • 1 kudos

In Databricks, the creator field for a job is immutable—it’s used as part of an audit trail, so you cannot change it once set. Databricks support confirms this, and their recommended workaround is to clone the job, which creates a new job under the c...
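A related knob, sketched here as an assumption-laden aside rather than part of the reply above: while the creator field is immutable, the identity a job runs as can be switched to a service principal through the Jobs API, which often covers the governance concern. The job ID and application ID are placeholders.

```python
# Sketch: point an existing job's run-as identity at a service principal.
# The job_id and the application ID below are placeholders.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.jobs import JobSettings, JobRunAs

w = WorkspaceClient()

w.jobs.update(
    job_id=123456789,
    new_settings=JobSettings(
        run_as=JobRunAs(
            service_principal_name="00000000-0000-0000-0000-000000000000"
        )
    ),
)
```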

3 More Replies
Hubert-Dudek
by Databricks MVP
  • 15 Views
  • 0 replies
  • 0 kudos

Runtime 18 / Spark 4.1 improvements

Runtime 18 / Spark 4.1 brings parameter markers everywhere #databricks Latest updates: read: https://databrickster.medium.com/databricks-news-week-1-29-december-2025-to-4-january-2025-432c6231d8b1 watch: https://www.youtube.com/watch?v=LLjoTkceKQI
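For context, parameter markers bind values separately from the SQL text instead of string-formatting them in; a small sketch with named markers via spark.sql, using an illustrative table and values:

```python
# Named parameter markers: the values are passed alongside the SQL text
# rather than concatenated into it. Table, columns, and values are examples.
df = spark.sql(
    "SELECT * FROM main.sales.orders WHERE order_date >= :start AND region = :region",
    args={"start": "2026-01-01", "region": "EMEA"},
)
df.show()
```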

JoaoPigozzo
by New Contributor III
  • 62 Views
  • 2 replies
  • 2 kudos

Unity Catalog design in single workspace: dev/prod catalogs and schemas for projects — should we add

Hello everyone, We are currently designing our Unity Catalog structure and would like feedback on whether our approach makes sense and how it could be improved. Context: We use a single Databricks workspace shared by Data Engineering and Data Science/ML...

Latest Reply
Louis_Frolio
Databricks Employee
  • 2 kudos

Hey @JoaoPigozzo — great question. This one comes up all the time with the customers I train. I’ve been doing this for quite a while now and have had the chance to see a wide range of implementations and approaches out in the wild. While there’s no ...
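As a reference point only (a sketch, not the full reply): one common single-workspace layout is an environment-level catalog with project-level schemas, with grants differing per environment. Names and grants below are illustrative.

```python
# Illustrative layout: one catalog per environment, one schema per project.
for env in ("dev", "prod"):
    spark.sql(f"CREATE CATALOG IF NOT EXISTS {env}")
    for project in ("churn_model", "marketing_analytics"):
        spark.sql(f"CREATE SCHEMA IF NOT EXISTS {env}.{project}")

# Access then differs by environment, e.g. read-mostly on prod.
spark.sql("GRANT USE CATALOG, USE SCHEMA, SELECT ON CATALOG prod TO `data_consumers`")
```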

1 More Replies
Ved88
by New Contributor II
  • 78 Views
  • 4 replies
  • 0 kudos

databricks all-purpose cluster

Getting the below error: "Failure starting repl. Try detaching and re-attaching the notebook." while executing a notebook, and I can see the cluster has all the required libraries installed.

Latest Reply
nayan_wylde
Esteemed Contributor
  • 0 kudos

SQLNonTransientConnectionException with port 3306 strongly points to egress being blocked from your Databricks compute to the default Hive Metastore (which runs on Azure Database for MySQL). Databricks recently published the reserved IP ranges and gu...
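A quick way to confirm that diagnosis from a notebook is a plain TCP probe to the metastore host on port 3306; the hostname below is a placeholder for the consolidated metastore address in your region.

```python
# Reachability probe from the cluster: can we open a TCP connection to the
# metastore host on port 3306? Replace the placeholder hostname before running.
import socket

host, port = "consolidated-<region>-prod-metastore.mysql.database.azure.com", 3306
try:
    with socket.create_connection((host, port), timeout=5):
        print(f"TCP connection to {host}:{port} succeeded")
except OSError as err:
    print(f"Cannot reach {host}:{port}: {err} (check NSG/firewall egress rules)")
```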

3 More Replies
ryojikn
by New Contributor III
  • 1656 Views
  • 3 replies
  • 2 kudos

Model Serving - Shadow Deployment - Azure

Hey, I'm designing an architecture around Model Serving Endpoints, and one of the needs we're aiming to address is shadow deployment. Currently, it seems that the traffic configurations available in model serving do not allow this type...

Latest Reply
KaushalVachhani
Databricks Employee
  • 2 kudos

@ryojikn and @irtizak , you’re right. Databricks Model Serving allows splitting traffic between model versions, but it doesn’t have a true shadow deployment where live production traffic is mirrored to a new model for monitoring without affecting use...
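For reference, the traffic splitting the reply mentions looks roughly like the sketch below; endpoint, model, and version names are placeholders, and the SDK classes are assumed from databricks-sdk. Note the challenger still serves its share of real responses, so this is A/B routing rather than true shadowing.

```python
# Sketch: split endpoint traffic 90/10 between two versions of the same model.
# Names, versions, and workload sizes are placeholders.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.serving import ServedEntityInput, TrafficConfig, Route

w = WorkspaceClient()

w.serving_endpoints.update_config(
    name="churn-endpoint",
    served_entities=[
        ServedEntityInput(entity_name="main.ml.churn_model", entity_version="3",
                          name="champion", workload_size="Small",
                          scale_to_zero_enabled=True),
        ServedEntityInput(entity_name="main.ml.churn_model", entity_version="4",
                          name="challenger", workload_size="Small",
                          scale_to_zero_enabled=True),
    ],
    traffic_config=TrafficConfig(routes=[
        Route(served_model_name="champion", traffic_percentage=90),
        Route(served_model_name="challenger", traffic_percentage=10),
    ]),
)
```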

2 More Replies
Nivethan_Venkat
by Valued Contributor
  • 667 Views
  • 2 replies
  • 6 kudos

[PARTNER BLOG] Building an Agent-Native Data Quality Manager with Databricks Apps and DQX

Introduction: The Data Quality Paradox · What We Built · The Technology Stack · Architecture Deep Dive · The Four-Layer Architecture · Layer 1: Users · Layer 2: Databricks Apps Platform · Layer 3: Compute Layer · Layer 4: Data Layer · Why This Design? · The User Experience: A Com...

Latest Reply
akjain
Visitor
  • 6 kudos

This is very helpful, but the Git repo links don't work.

1 More Replies
Dhruv-22
by Contributor II
  • 20 Views
  • 0 replies
  • 0 kudos

Feature request: Allow to set value as null when not present in schema evolution

I want to raise a feature request as follows. Currently, with automatic schema evolution for MERGE, when a column is not present in the source dataset it is left unchanged in the target dataset. For e.g. %sql CREATE OR REPLACE TABLE edw_nprd_aen.bronze.t...
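Until something like this exists, the explicit workaround is to null the column out by hand in the UPDATE clause; a sketch with illustrative table and column names (not the tables from the post):

```python
# Sketch of the manual workaround: a column missing from the source is set to
# NULL explicitly instead of being left untouched by schema evolution.
spark.sql("""
    MERGE INTO bronze.customers AS t
    USING updates AS s
      ON t.customer_id = s.customer_id
    WHEN MATCHED THEN UPDATE SET
      t.email = s.email,
      t.phone = NULL  -- source no longer carries this column
    WHEN NOT MATCHED THEN INSERT (customer_id, email) VALUES (s.customer_id, s.email)
""")
```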

Om_Jha
by Databricks Employee
  • 71 Views
  • 0 replies
  • 1 kudos

Instructed Retriever: Unlocking System-Level Reasoning in Search Agents 🚀

Retrieval-based agents drive mission-critical enterprise workflows, but traditional RAG fails on complex constraints (e.g., recency, exclusions, source priority). Instructed Retriever is a retrieval architecture for the agent era that carries full sy...

Dhruv-22
by Contributor II
  • 33 Views
  • 2 replies
  • 1 kudos

BUG: Merge with schema evolution doesn't work in update clause

I am referring to this link in the Databricks documentation; here is a screenshot of the same. According to the documentation, the UPDATE clause should work when the target table doesn't have the column but it is present in the source. I tried the same with ...

Latest Reply
Dhruv-22
Contributor II
  • 1 kudos

Hi @iyashk-DB, Thanks for the response; it will help in resolving the issue. But can you mark it as a bug and report it? Because specifying just the column without the table name is a little risky.
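For anyone following along, the per-statement syntax the thread is exercising looks roughly like this; table names are placeholders, and UPDATE SET * / INSERT * is the documented way to let a source-only column flow into the target on runtimes that support MERGE WITH SCHEMA EVOLUTION:

```python
# Sketch of the documented per-statement schema evolution syntax; names are
# placeholders, not the tables from the screenshots above.
spark.sql("""
    MERGE WITH SCHEMA EVOLUTION INTO bronze.target_tbl AS t
    USING source_vw AS s
      ON t.id = s.id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```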

1 More Replies
Welcome to the Databricks Community!

Once you are logged in, you will be ready to post content, ask questions, participate in discussions, earn badges and more.

Spend a few minutes exploring Get Started Resources, Learning Paths, Certifications, and Platform Discussions.

Connect with peers through User Groups and stay updated by subscribing to Events. We are excited to see you engage!

Join Us as a Local Community Builder!

Passionate about hosting events and connecting people? Help us grow a vibrant local community—sign up today to get started!

Sign Up Now

Latest from our Blog

[PARTNER BLOG] Zerobus Ingest on Databricks

Introduction / TL;DR: ZeroBus Ingest is a serverless, Kafka-free ingestion service in Databricks that allows applications and IoT devices to stream data directly into Delta Lake with low latency and mini...

  • 802 Views
  • 3 kudos