Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

sandy311
by New Contributor III
  • 2654 Views
  • 3 replies
  • 1 kudos

Install python packages on serverless compute in DLT pipelines (using asset bundles)

Has anyone figured out how to install packages on serverless compute using asset bundles, similar to how we handle it for jobs or job tasks? I didn't see any direct option for this, apart from installing packages manually within a notebook. I tried ins...

Data Engineering
DLT Serverless
Latest Reply
mark_ott
Databricks Employee
  • 1 kudos

Installing Python packages on Databricks serverless compute via asset bundles is possible, but there are some unique limitations and required configuration adjustments compared to traditional jobs or job tasks. The core methods to install packages fo...
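For comparison, here is a minimal sketch of the notebook-level approach that works on serverless DLT compute today, with the %pip magic shown as a comment (the package name is hypothetical, and this is a sketch rather than the definitive bundle-based method):

# Top of the DLT pipeline source notebook. On serverless compute, %pip at the
# top of the notebook is the notebook-level install mechanism.
# %pip install my-internal-package==0.1.0   # hypothetical package name

import dlt

@dlt.table(name="example_bronze", comment="Minimal example table")
def example_bronze():
    # spark is provided by the pipeline runtime
    return spark.range(10)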

saicharandeepb
by New Contributor III
  • 2169 Views
  • 1 replies
  • 0 kudos

Implementing ADB Autoloader with Managed File Notification Mode for UC Ext Location (public preview)

Hi everyone, I'm planning to implement Azure Databricks Auto Loader using the Databricks-managed file notification mode for an external location registered in Unity Catalog. I understand this feature is currently in public preview, and I'd love to hea...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

Yes, Azure Databricks Auto Loader with Databricks-managed file notification mode for external locations in Unity Catalog has been successfully implemented by users, especially since it entered public preview in 2025, and it's designed to make file di...
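For anyone searching, a minimal sketch of what such a stream looks like, assuming the public-preview option name cloudFiles.useManagedFileEvents (verify against your DBR version); the paths and table names are hypothetical:

# Auto Loader against a UC external location with managed file events.
df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.useManagedFileEvents", "true")  # file events must be enabled on the external location
    .load("abfss://landing@mystorageacct.dfs.core.windows.net/events/")  # hypothetical path
)
(
    df.writeStream
    .option("checkpointLocation", "/Volumes/main/default/checkpoints/events")  # hypothetical volume
    .toTable("main.default.events_bronze")  # hypothetical target table
)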

tbailey
by New Contributor II
  • 2515 Views
  • 3 replies
  • 1 kudos

DABs, policies and cluster pools

My scenario: a policy called 'Job Pool', which has the following overrides:
"instance_pool_id": { "type": "unlimited", "hidden": true },
"driver_instance_pool_id": { "type": "unlimited", "hidden": true }
I have an asset bundle that sets a new cluster as...

Latest Reply
mark_ott
Databricks Employee
  • 1 kudos

You are experiencing validation errors assigning a driver to an on-demand pool and workers to a spot pool in your Databricks Asset Bundle (DAB) configuration because the 'spot_bid_max_price' attribute is being forced by policies—even when the pools a...
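One possible workaround, sketched with databricks-sdk types for illustration, is to satisfy the policy by pinning azure_attributes.spot_bid_max_price explicitly on the cluster spec instead of leaving it to the pool; this is an assumption-laden sketch, and the pool and policy IDs are hypothetical:

from databricks.sdk.service import compute

cluster_spec = compute.ClusterSpec(
    spark_version="15.4.x-scala2.12",
    num_workers=2,
    policy_id="E0123456789ABCDE",              # hypothetical policy ID
    instance_pool_id="pool-spot-workers",      # hypothetical spot worker pool
    driver_instance_pool_id="pool-on-demand",  # hypothetical on-demand driver pool
    azure_attributes=compute.AzureAttributes(
        spot_bid_max_price=-1,                 # -1 bids up to the on-demand price
    ),
)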

pvalcheva
by New Contributor
  • 1919 Views
  • 1 replies
  • 0 kudos

Simba Spark Driver fails for big datasets in Excel

Hello, I am getting the following error when I want to extract data from Databricks via VBA code. The code for the connection is:
Option Explicit
Const adStateClosed = 0
Public CnAdo As New ADODB.Connection
Dim DSN_name As String
Dim WB As Workbook
Dim das...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

The code you provided for connecting to Databricks via VBA appears structurally sound, but the cause of the error you are experiencing could stem from several typical issues encountered when using ADODB with Databricks ODBC connections from Excel VBA...
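One way to isolate whether the problem lies in the driver/DSN settings rather than the VBA layer is to test the same connection outside Excel. A minimal sketch with pyodbc, where the host, HTTP path, and token are hypothetical placeholders:

import pyodbc

conn = pyodbc.connect(
    "Driver=Simba Spark ODBC Driver;"
    "Host=adb-1234567890123456.7.azuredatabricks.net;"  # hypothetical workspace host
    "Port=443;SSL=1;ThriftTransport=2;"
    "HTTPPath=/sql/1.0/warehouses/abc123;"              # hypothetical warehouse path
    "AuthMech=3;UID=token;PWD=dapiXXXXXXXX;",           # hypothetical personal access token
    autocommit=True,
)
cur = conn.cursor()
cur.execute("SELECT 1")
print(cur.fetchall())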

Gustavo_Az
by Contributor
  • 2125 Views
  • 2 replies
  • 1 kudos

Resolved! Doubt with range_join hints optimization, using INSERT INTO REPLACE WHERE

Hello, I'm optimizing a big notebook and have encountered many times the tip from Databricks that says "Unused range join hints". Reading the documentation for reference, I have been able to suppress that warning in almost all cells, but some of them rema...

Latest Reply
mark_ott
Databricks Employee
  • 1 kudos

There is no official documentation covering the use of range_join hints directly with the INSERT INTO ... REPLACE WHERE operation in Databricks—existing documentation around range joins focuses only on explicit joining operations, not on conditional ...
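Absent official docs, the workable pattern is to attach the hint to the SELECT that feeds the write, since REPLACE WHERE itself performs no join. A hedged sketch with hypothetical table and column names:

spark.sql("""
    INSERT INTO main.sales.target
    REPLACE WHERE event_date >= '2024-01-01'
    SELECT /*+ RANGE_JOIN(r, 60) */ t.*, r.label
    FROM main.sales.staging AS t
    JOIN main.sales.ranges AS r
      ON t.ts BETWEEN r.start_ts AND r.end_ts
""")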

ChrisLawford_n1
by Contributor
  • 2156 Views
  • 1 replies
  • 2 kudos

Update for databricks-dlt pip package

Hello, with the recent changes to Delta Live Tables, I was wondering when the Python stub will be updated to reflect the new methods that are available? Link to the PyPI repo: databricks-dlt · PyPI

Latest Reply
mark_ott
Databricks Employee
  • 2 kudos

The Python stub for Delta Live Tables (DLT), which helps with local development by providing API specs, docstring references, and type hints, is available as the databricks-dlt package on PyPI. However, this library only provides interfaces to the DL...
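To illustrate the intended use, a minimal sketch: the stub gives the IDE signatures and type hints locally, while execution still requires a real pipeline (the pip command is shown as a comment):

# pip install databricks-dlt   # stub only: signatures, docstrings, type hints
import dlt
from pyspark.sql import DataFrame

@dlt.table(comment="Type-checks locally; executes only inside a DLT pipeline")
def my_table() -> DataFrame:
    # spark is injected by the pipeline runtime, not by the stub
    return spark.read.table("samples.nyctaxi.trips")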

ChrisLawford_n1
by Contributor
  • 184 Views
  • 1 replies
  • 1 kudos

Network error on subsequent runs using serverless compute in DLT

Hello, when running on a serverless cluster in DLT, our notebook first tries to install some Python wheels onto the cluster. We have noticed, in development, that when running a pipeline many times over with a short space of time between runs, the pi...

Latest Reply
mark_ott
Databricks Employee
  • 1 kudos

The error you’re seeing (“Network is unreachable” repeated during pip installs) on a DLT (Delta Live Tables) serverless cluster, especially after the first successful run, is a common issue that appears to affect Databricks pipelines run repeatedly on...

abhirupa7
by New Contributor
  • 284 Views
  • 2 replies
  • 1 kudos

Resolved! Databricks Workflow

I have a query. Multiple jobs (workflows) are present in my workspace, and those jobs run regularly. Multiple tasks are present in those jobs, and a few tasks have notebooks that contain "for each" code. Now, when a job runs, that particular task executes the for ...

Latest Reply
mark_ott
Databricks Employee
  • 1 kudos

To programmatically capture iteration-level information for tasks running inside a Databricks Workflow Job that uses the "for each" loop construct, you will primarily rely on the Databricks Jobs REST API (v2.1) and possibly the Databricks Python SDK....
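A minimal sketch of that approach, assuming the run of a "for each" task exposes its iterations when fetched by run_id; field names may differ across Jobs API versions, and the run ID and task key below are hypothetical:

from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
parent = w.jobs.get_run(run_id=123456789)        # hypothetical parent job run ID
for task in parent.tasks or []:
    if task.task_key == "my_for_each_task":      # hypothetical task key
        detail = w.jobs.get_run(run_id=task.run_id)
        for iteration in detail.iterations or []:    # one entry per loop input
            state = iteration.state.result_state if iteration.state else None
            print(iteration.task_key, iteration.run_id, state)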

nefflev1
by New Contributor
  • 228 Views
  • 1 replies
  • 1 kudos

VS Code Python file execution

Hi everyone, I'm using the Databricks VS Code Extension to develop and deploy Asset Bundles. Usually we work with notebooks and use the "Run File as Workflow" function. Now I'm trying to use a pure Python file for a new use case and tried to use the "Up...

Latest Reply
mark_ott
Databricks Employee
  • 1 kudos

You're encountering a common issue when using the Databricks VS Code Extension's "Upload and Run File" with pure Python files, especially in a secure, VNet-injected Azure Databricks deployment. Here’s a direct summary of what’s happening and how you ...

Akshay_Petkar
by Valued Contributor
  • 250 Views
  • 2 replies
  • 0 kudos

%run notebook fails in Job mode with Py4JJavaError (None.get), but works in interactive notebook

Hi everyone, I'm facing an issue when executing a Databricks job where my notebook uses %run to include other notebooks. I have a final notebook added as a task in a job, and inside that notebook I use %run to call another notebook that contains all ...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

This issue with %run in Databricks notebooks—where everything works interactively in the UI, but fails in a job context with java.util.NoSuchElementException: None.get—is a relatively common pain point for users leveraging notebook modularization. Th...
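While the root cause is narrowed down, a common mitigation is to make the include path explicit, or to fall back to dbutils.notebook.run for isolated execution. A hedged sketch, with hypothetical paths (note that a child notebook run in a separate context does not share its function definitions back):

# In the final (task) notebook. %run resolves relative to this notebook's
# workspace path, which matters in job context; it is shown as a comment
# because magics cannot share a cell with other code.
# %run ./includes/common_functions

# Alternative: run the helper as a child notebook in its own context.
result = dbutils.notebook.run(
    "/Workspace/Project/includes/common_functions",  # hypothetical absolute path
    timeout_seconds=600,
)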

Anoora
by New Contributor II
  • 198 Views
  • 2 replies
  • 0 kudos

Scheduling and triggering jobs based on time and frequency precedence

I have a table in Databricks that stores job information, including fields such as job_name, job_id, frequency, scheduled_time, and last_run_time. I want to run a query every 10 minutes that checks this table and triggers a job if the scheduled_time i...

Data Engineering
data engineering
jobs
scheduling
Latest Reply
SamAdams
Contributor
  • 0 kudos

You could add a job with a schedule-based trigger that runs every 10 minutes. The task at the start of the job runs a SQL query against the job information table and uses the logic you described above to output a boolean value. Then feed that boolea...
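A minimal sketch of that checker step, assuming a metadata table shaped like the one described (the catalog, schema, and column names are hypothetical) and using the Python SDK to fire the due jobs:

from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
due_jobs = spark.sql("""
    SELECT job_id
    FROM main.ops.job_schedule              -- hypothetical metadata table
    WHERE scheduled_time <= current_timestamp()
      AND (last_run_time IS NULL
           OR last_run_time + make_interval(0, 0, 0, 0, 0, frequency_minutes, 0)
              <= current_timestamp())
""").collect()

for row in due_jobs:
    w.jobs.run_now(job_id=row.job_id)
    spark.sql(
        f"UPDATE main.ops.job_schedule SET last_run_time = current_timestamp() "
        f"WHERE job_id = {row.job_id}"
    )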

EricCournarie
by New Contributor III
  • 152 Views
  • 2 replies
  • 2 kudos

Retrieving OBJECT values with the JDBC driver may lead to invalid JSON

Hello, using the JDBC driver, I try to retrieve values in the ResultSet for an OBJECT type. Sadly, it returns invalid JSON. Given the SQL:
CREATE OR REPLACE TABLE main.eric.eric_complex_team (`id` INT, `nom` STRING, `infos` STRUCT<`age`: INT, `ville`: STRIN...

Latest Reply
EricCournarie
New Contributor III
  • 2 kudos

Hello, thanks for the quick response. Sadly, I don't have control over the SQL request, so there is no way for me to modify it...

DylanStout
by Contributor
  • 3366 Views
  • 1 replies
  • 0 kudos

Pyspark ML tools

Cluster policies are not letting us use PySpark ML tools. Issue details: We have clusters available in our Databricks environment and our plan was to use functions and classes from "pyspark.ml" to process data and train our model in parallel across cores/n...

Latest Reply
Louis_Frolio
Databricks Employee
  • 0 kudos

Hey @DylanStout, thanks for laying out the symptoms clearly. This is a classic clash between Safe Spark (shared/high-concurrency) protections and multi-threaded, driver-mutating code paths. What's happening: on clusters with the Shared/Safe Spark a...

dhruvs2
by New Contributor II
  • 423 Views
  • 3 replies
  • 5 kudos

How to trigger a Databricks job only after multiple other jobs have completed

We have a use case where Job C should start only after both Job A and Job B have successfully completed. In Airflow, we achieve this using an ExternalTaskSensor to set dependencies across different DAGs. Is there a way to configure something similar in...

Latest Reply
BS_THE_ANALYST
Esteemed Contributor III
  • 5 kudos

Hi @dhruvs2. A Lakeflow Job consists of tasks, which can be things like notebooks or other jobs. If you want to orchestrate many jobs, I'd agree that having a job to do this is your best bet. Then you can set up the dependencies as you require. I...
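A minimal sketch of that wrapper job, built with the Python SDK and Run Job tasks (the job IDs are hypothetical; the same shape works in an asset bundle or the Jobs UI):

from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

w = WorkspaceClient()
w.jobs.create(
    name="run_a_and_b_then_c",
    tasks=[
        jobs.Task(task_key="job_a", run_job_task=jobs.RunJobTask(job_id=111)),  # hypothetical ID
        jobs.Task(task_key="job_b", run_job_task=jobs.RunJobTask(job_id=222)),  # hypothetical ID
        jobs.Task(
            task_key="job_c",
            depends_on=[
                jobs.TaskDependency(task_key="job_a"),
                jobs.TaskDependency(task_key="job_b"),
            ],
            run_job_task=jobs.RunJobTask(job_id=333),  # hypothetical ID
        ),
    ],
)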

akeel-rehman
by New Contributor
  • 3148 Views
  • 1 replies
  • 0 kudos

Best Practices for Reusable Workflows & Cluster Management Across Repos.

Hi everyone, I am looking for best practices around reusable workflows in Databricks, particularly in these areas: Reusable Workflows Instead of Repetition: How can we define reusable workflows rather than repeating the same steps across multiple jobs?...

Latest Reply
AbhaySingh
Databricks Employee
  • 0 kudos

Here are my recommendations:
1. Databricks Asset Bundles (DABs) for reusable workflows
2. API-based triggering and Run Job Tasks for cross-repo workflows
3. Instance Pools as the #1 game-changer for cluster optimization (5-10 seconds vs 5-10 minu...

