Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

brickster_2018
by Databricks Employee
  • 2440 Views
  • 2 replies
  • 0 kudos

Resolved! I do not have any Spark jobs running, but my cluster is not getting auto-terminated.

The cluster is idle and no Spark jobs appear in the Spark UI, yet the cluster stays active and does not get auto-terminated.

Latest Reply
brickster_2018
Databricks Employee

A Databricks cluster is treated as active if any Spark or non-Spark operations are running on it. Even though no Spark jobs are running on the cluster, it's possible to have some driver-specific application code running, marking th...

1 More Replies
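The reply above notes that driver-side, non-Spark work still counts as cluster activity. A minimal sketch in plain Python (hypothetical, just to illustrate the pattern): a background loop running in a notebook command submits no Spark jobs, so the Spark UI looks idle, yet the command keeps executing and the cluster is never considered idle.

```python
import threading
import time

# Driver-side work that involves no Spark jobs: the Spark UI shows nothing
# running, but this still counts as activity on the cluster.
stop = threading.Event()
ticks = []

def background_poll():
    # e.g. a loop polling an external service from the driver
    while not stop.is_set():
        ticks.append(time.time())
        stop.wait(0.05)

t = threading.Thread(target=background_poll, daemon=True)
t.start()
time.sleep(0.3)   # window during which the Spark UI would show no jobs
stop.set()
t.join()
print(len(ticks) > 0)
```

Checking for loops like this in notebook cells (or in libraries they import) is a good first step when an "idle" cluster refuses to auto-terminate.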
fundat
by New Contributor II
  • 137 Views
  • 2 replies
  • 2 kudos

Resolved! Course - Introduction to Apache Spark

Hi, in the course Introduction to Apache Spark (Apache Spark Runtime Architecture, page 6 of 15), it says: "The cluster manager allocates resources and assigns tasks... Workers perform tasks assigned by the driver." Can you help me plea...

(attachment: fundat_3-1761596488970.png)
Latest Reply
BS_THE_ANALYST
Esteemed Contributor III

Hi @fundat, perhaps the picture is useful here. Give this blog a read; I think it will answer some of your questions: https://medium.com/@knoldus/understanding-the-working-of-spark-driver-and-executor-4fec0e669399 . All the best, BS

1 More Replies
jigar191089
by New Contributor III
  • 4792 Views
  • 12 replies
  • 0 kudos

Multiple concurrent jobs using interactive cluster

Hi all, I have a notebook in Databricks. This notebook is executed from an Azure Data Factory pipeline via a Databricks notebook activity whose linked service is connected to an interactive cluster. When multiple concurrent runs of this pipeline are created, I...

Data Engineering
azure
Databricks
interactive cluster
Latest Reply
Louis_Frolio
Databricks Employee

Greetings @jigar191089, I did some digging and here are some ideas to think about. This smells like a shared-state/import-path issue on an interactive cluster under concurrency. What likely happened: your notebook imports Python modules from /dbf...

11 More Replies
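One common mitigation for this kind of shared-state problem (a hedged sketch, not from the thread; the function and run-ID parameter are hypothetical): key each run's scratch space off a unique run ID, e.g. an ADF run ID passed in as a notebook parameter, so concurrent runs on the same interactive cluster never write to the same path.

```python
import os
import tempfile
import uuid

def run_scratch_dir(run_id=None):
    """Create an isolated working directory for one pipeline run.

    run_id is hypothetical here; in practice it could be the ADF pipeline
    run ID passed as a notebook parameter, so parallel runs on the same
    interactive cluster never share mutable state on a common path.
    """
    run_id = run_id or uuid.uuid4().hex
    path = os.path.join(tempfile.gettempdir(), f"run_{run_id}")
    os.makedirs(path, exist_ok=True)
    return path

a = run_scratch_dir("0001")
b = run_scratch_dir("0002")
print(a != b)  # concurrent runs write to distinct locations
```

The same idea applies to module import paths: appending a per-run directory to sys.path instead of a shared /dbfs location avoids two runs clobbering each other's module files.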
jano
by New Contributor III
  • 9 Views
  • 0 replies
  • 0 kudos

DABs with multi github sources

I want to deploy a DAB that has dev using a GitHub branch and prod using a GitHub release tag. I can't seem to find a way to make this part dynamic based on the target. Things I've tried: setting the git variable in the databricks.yml; making the g...

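No replies yet; for anyone landing here, one direction to explore (a hypothetical sketch, not verified against the current bundle schema): targets in databricks.yml can carry their own git mapping, so dev and prod could pin different refs. Whether a release tag is accepted where a branch is expected is worth checking against the docs; a bundle variable may be needed instead.

```yaml
# databricks.yml (sketch; the keys under `git` must be checked against the
# current Databricks Asset Bundles schema before relying on this)
bundle:
  name: my_bundle

targets:
  dev:
    mode: development
    git:
      branch: develop
  prod:
    mode: production
    git:
      branch: v1.2.0   # using a release tag here is an assumption, not confirmed
```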
biancadoesdata1
by New Contributor II
  • 19 Views
  • 0 replies
  • 0 kudos

Webinars

Hi! My colleagues and I at Unifeye are hosting a series of regular webinars focused on Databricks content. In November, we’re running four sessions covering Geospatial, Governance, AI, and Delta Sharing, featuring Databricks architects as guest speak...

Hsn
by Visitor
  • 52 Views
  • 3 replies
  • 1 kudos

Suggest about data engineer

Hey, I'm Hasan Sayyed, currently pursuing SYBCA. I want to become a Data Engineer, but as a beginner, I’ve wasted some time learning other languages and technologies due to a lack of proper knowledge about this field. If someone could guide and teach...

Latest Reply
biancadoesdata1
New Contributor II

Hi Hasan. Great to see your motivation! Here's a good way to start your journey into data engineering: master SQL, it's the foundation of everything in data. Enroll in the Databricks Academy (free) and take the beginner courses like "Get Started with D...

2 More Replies
mkwparth
by New Contributor III
  • 59 Views
  • 2 replies
  • 1 kudos

Resolved! DLT | Communication lost with driver | Cluster was not reachable for 120 seconds

Hey community, I'm facing this error. It says: "com.databricks.pipelines.common.errors.deployment.DeploymentException: Communication lost with driver. Cluster 1030-205818-yu28ft9s was not reachable for 120 seconds". This issue occurred in producti...

(attachment: mkwparth_0-1761892686441.png)
Latest Reply
nayan_wylde
Esteemed Contributor

This is actually a known intermittent issue in Databricks, particularly with streaming or Delta Live Tables (DLT) pipelines. This isn't a logical failure in your code; it's an infrastructure-level timeout between the Databricks control plane and the ...

1 More Replies
CaptainJack
by New Contributor III
  • 60 Views
  • 1 reply
  • 0 kudos

Pull workspace URL and workspace name using databricks-sdk / programmatically in a notebook

1. How could I pull the workspace URL (https://adb-XXXXX.XX.....net)? 2. How could I get the workspace name visible in the top-right corner? I know the easiest solution is dbutils.notebook.entry_point....browserHostName, but unfortunately it is not working in job c...

Latest Reply
AbhaySingh
Databricks Employee

Can you give this a shot? Not sure if you have a hard requirement to use the SDK: workspace_url = spark.conf.get('spark.databricks.workspaceUrl'). Getting the name is trickier. You could potentially get it from tags if there is a tagging strategy in place...

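To flesh out the reply (a sketch; the Spark conf lookup itself only works on a Databricks cluster, so a placeholder string stands in for it here): spark.databricks.workspaceUrl returns a bare hostname, which you typically want as a full URL.

```python
def full_workspace_url(conf_value: str) -> str:
    """Turn the bare host returned by
    spark.conf.get('spark.databricks.workspaceUrl') into a full https URL."""
    return conf_value if conf_value.startswith("https://") else f"https://{conf_value}"

# On a cluster: host = spark.conf.get("spark.databricks.workspaceUrl")
host = "adb-1234567890123456.7.azuredatabricks.net"  # placeholder value
print(full_workspace_url(host))  # https://adb-1234567890123456.7.azuredatabricks.net
```

Unlike dbutils.notebook.entry_point...browserHostName, the Spark conf is populated in job clusters too, which is why it is the usual suggestion for job contexts.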
deano2025
by New Contributor II
  • 49 Views
  • 1 replies
  • 0 kudos

Databricks asset bundles CI/CD design for github actions

We want to use Databricks Asset Bundles and deploy code changes and tests using GitHub Actions. We have seen lots of content online, but nothing concrete on how this is done at scale. So I'm wondering: if we have many changes and therefore man...

Data Engineering
asset bundles
Latest Reply
AbhaySingh
Databricks Employee

Have you read about the following approach before?

Repository Structure Options

1. Monorepo with Multiple Bundles

repo-root/
├── .github/
│   └── workflows/
│       ├── bundle-ci.yml
│       └── bundle-deploy.yml
├── bundles/
│   ├...

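A deploy workflow for one bundle in such a monorepo might look like this (a hypothetical sketch; paths, secret names, and the target name are assumptions, not from the thread). Path filtering keeps each bundle's pipeline independent at scale.

```yaml
# .github/workflows/bundle-deploy.yml (sketch)
name: bundle-deploy
on:
  push:
    branches: [main]
    paths: ["bundles/my_bundle/**"]   # only runs when this bundle changes

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: databricks/setup-cli@main
      - name: Validate and deploy bundle
        working-directory: bundles/my_bundle
        run: |
          databricks bundle validate -t prod
          databricks bundle deploy -t prod
        env:
          DATABRICKS_HOST: ${{ secrets.DATABRICKS_HOST }}
          DATABRICKS_TOKEN: ${{ secrets.DATABRICKS_TOKEN }}
```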
Mathias_Peters
by Contributor II
  • 17 Views
  • 0 replies
  • 0 kudos

Reading MongoDB collections into an RDD

Hi, for a Spark job which does some custom computation, I need to access data from a MongoDB collection and work with the elements as type Document. The reason for this is that I want to apply some custom type serialization which is already implemen...

JanFalta
by New Contributor
  • 42 Views
  • 1 replies
  • 0 kudos

Data Masking

Hi all, I need some help with a masking problem. If you create a view that applies a masking function to a table, the user reading the view has to have read access to the underlying table. So, theoretically, they can access the unmasked data in the table. I would...

Latest Reply
AbhaySingh
Databricks Employee

Are you on Unity Catalog? Databricks has a solution for this through Unity Catalog column masking (also called dynamic views or column-level security). https://learn.microsoft.com/en-us/azure/databricks/data-governance/unity-catalog/filters-and-mask...

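To make the linked approach concrete (a sketch with hypothetical table, column, and group names; requires Unity Catalog): a masking function is attached directly to the column, so readers outside the privileged group see redacted values without needing direct read access via a view-plus-base-table pattern.

```sql
-- Hypothetical names; syntax per Unity Catalog column masks.
CREATE OR REPLACE FUNCTION mask_email(email STRING)
RETURN CASE
  WHEN is_account_group_member('pii_readers') THEN email
  ELSE '***@***'
END;

ALTER TABLE customers ALTER COLUMN email SET MASK mask_email;
```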
bhawana-pandey
by New Contributor III
  • 55 Views
  • 1 reply
  • 0 kudos

Looking for reference DABs bundle yaml and resources for Databricks app deployment (FastAPI redirect

Looking for example databricks.yml and bundle resources for deploying a FastAPI Databricks app using DABs from one environment to another. Deployment works but FastAPI redirects to localhost after deployment, though the homepage loads fine. Need refe...

Latest Reply
AbhaySingh
Databricks Employee

This is a great place to start: https://apps-cookbook.dev/resources/ Happy to answer specifics as they come after you've reviewed that resource. 

aonurdemir
by Contributor
  • 116 Views
  • 2 replies
  • 4 kudos

Resolved! Broken s3 file paths in File Notifications for auto loader

Suddenly, at 2025-10-23T14:12:48.409+00:00, file paths coming from the file notification queue started to be URL-encoded. As a result, our pipeline gets a file-not-found exception. I think something changed suddenly and broke the notification system. Here are th...

Latest Reply
K_Anudeep
Databricks Employee

Hello @aonurdemir, could you please re-run your pipeline and check? This issue should be mitigated now. It was due to a recent internal bug that led to unexpected handling of file paths with special characters. You should set ignoreMissingFile...

1 More Replies
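For anyone who hit the same symptom before the fix rolled out, this is what the breakage looked like (illustrative bucket and key names, not from the thread): the notification queue delivered percent-encoded keys, so the literal path no longer matched any object in S3.

```python
from urllib.parse import unquote

# Path as delivered by the broken notification queue (percent-encoded):
encoded = "s3://my-bucket/raw/2025-10-23/file%20name%2Bv1.json"
# Actual object key, recovered by URL-decoding:
decoded = unquote(encoded)
print(decoded)  # s3://my-bucket/raw/2025-10-23/file name+v1.json
```

With ignoreMissingFiles set, Auto Loader skips the stale encoded entries instead of failing the stream while the backlog drains.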
kfoster
by Contributor
  • 5922 Views
  • 8 replies
  • 7 kudos

Azure DevOps Repo - Invalid Git Credentials

I have a Repo in Databricks connected to Azure DevOps Repositories.The repo has been working fine for almost a month, until last week. Now when I try to open the Git settings in Databricks, I am getting "Invalid Git Credentials". Nothing has change...

Latest Reply
klaas
New Contributor II

I had a similar problem. I could fix it by following these steps: in the Azure DevOps repository, User Settings -> Personal access tokens -> + New token; in Databricks, Settings -> User -> Linked accounts -> Azure DevOps (Personal access token). You could also...

7 More Replies
