Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

jeremy98
by Honored Contributor
  • 4149 Views
  • 1 reply
  • 1 kudos

Environment setup in a serverless notebook task

Hi community, Is there a way to install dependencies inside a notebook task using serverless compute with Databricks Asset Bundles? Is there a way to avoid installing the dependencies every time for each serverless task that composes a job (or the librar...

Latest Reply
mark_ott
Databricks Employee
  • 1 kudos

For Databricks serverless compute jobs using Asset Bundles, custom dependencies (such as Python packages or wheel files) cannot be pre-installed on shared serverless infrastructure across job tasks as you can with traditional job clusters. Instead, d...
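A minimal sketch of the serverless `environments` approach in a Databricks Asset Bundle. The job name, notebook path, and wheel path below are placeholders, and whether notebook tasks honor `environment_key` may depend on your workspace version:

```yaml
resources:
  jobs:
    my_job:
      tasks:
        - task_key: etl
          notebook_task:
            notebook_path: ../src/etl_notebook.ipynb
          environment_key: shared_env
      environments:
        - environment_key: shared_env
          spec:
            client: "1"
            dependencies:
              - ./dist/my_package-0.1.0-py3-none-any.whl
              - requests==2.32.0
```

Declaring the environment once at the job level and referencing it from each task avoids repeating the dependency list per task.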

Maser_AZ
by New Contributor II
  • 4680 Views
  • 1 reply
  • 0 kudos

16.2 (includes Apache Spark 3.5.2, Scala 2.12) cluster in Community Edition taking a long time

A 16.2 (includes Apache Spark 3.5.2, Scala 2.12) cluster in Community Edition is taking a long time to start. I'm trying to launch DBR 16.2, but the cluster, which is a single node, seems to take a long time. Is this a bug in Community Edition? Here is the u...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

The long startup time for a Databricks Runtime 16.2 (Apache Spark 3.5.2, Scala 2.12) single-node cluster in Databricks Community Edition is a known issue and not unique to your setup. Many users have reported this situation, with some clusters taking...

Abishrp
by Contributor
  • 3776 Views
  • 1 reply
  • 0 kudos

Product code of Databricks in AWS CUR report

I need to know the productCode of Databricks in the CUR report. Is the productCode the same for all users?

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

In the AWS Cost and Usage Report (CUR), the productCode for Databricks is used to identify costs attributed to Databricks usage within your AWS environment. The value that appears in the lineItem/ProductCode column for Databricks is typically "Databr...
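The exact product code value is truncated above, so rather than hard-code it, a sketch that filters CUR line items with a case-insensitive match (the `lineItem/ProductCode` column name is standard CUR; the sample rows are made up):

```python
# Sketch: isolating Databricks charges in parsed CUR rows without
# hard-coding the exact product code (check your own report for it).
rows = [
    {"lineItem/ProductCode": "AmazonEC2", "lineItem/UnblendedCost": 10.0},
    {"lineItem/ProductCode": "Databricks", "lineItem/UnblendedCost": 5.5},
]

databricks_rows = [
    r for r in rows
    if "databricks" in r["lineItem/ProductCode"].lower()
]
total = sum(r["lineItem/UnblendedCost"] for r in databricks_rows)
print(total)
```

The same case-insensitive filter works in Athena or pandas once the CUR is loaded there.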

Nick_Pacey
by New Contributor III
  • 4032 Views
  • 1 reply
  • 0 kudos

Foreign Catalog error connecting to SQL Server 2008 R2

Hi, Is there a limitation or known issue when creating a foreign catalog to SQL Server 2008 R2? We can successfully connect to this SQL Server through a JDBC connection string. To make this work, we have to switch the Java encrypt flag to fal...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

There are known limitations and issues when connecting to SQL Server 2008 R2, particularly around encryption and JDBC settings, which can manifest as errors in federated catalog operations—even though a direct JDBC connection might succeed if the "en...
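For the direct JDBC path, these are the options typically involved with a pre-TLS-1.2 server like 2008 R2. The host, port, and database below are placeholders; whether a Unity Catalog foreign catalog exposes equivalent settings is a separate question:

```python
# Sketch: JDBC URL options commonly needed for legacy SQL Server 2008 R2.
# Host, port, and database are placeholders.
host = "legacy-sql.example.com"
port = 1433
database = "sales"

options = {
    "encrypt": "false",               # 2008 R2 lacks modern TLS support
    "trustServerCertificate": "true", # skip cert validation if encryption stays on
}

jdbc_url = (
    f"jdbc:sqlserver://{host}:{port};databaseName={database};"
    + ";".join(f"{k}={v}" for k, v in options.items())
)
print(jdbc_url)
```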

Kabil
by New Contributor
  • 3917 Views
  • 1 reply
  • 0 kudos

Using DLT metadata as a runtime parameter

I have started using DLT pipelines, and I have common code that is used by multiple DLT pipelines. Now I need to read metadata such as the pipeline name and start time at runtime, but since I'm using common code and pip...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

To dynamically access metadata like the pipeline name and start time at runtime in your common code for Delta Live Tables (DLT) pipelines, you should leverage runtime context and built-in metadata features provided by the DLT or related orchestrators...
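One common pattern is to pass such values explicitly through the pipeline's `configuration` settings and read them with `spark.conf.get` in the shared code. A testable sketch of the lookup logic, with a plain dict standing in for `spark.conf` and made-up key names:

```python
# Sketch: pass pipeline metadata via the DLT pipeline's "configuration"
# block and read it in shared code. In a real pipeline you would call
# spark.conf.get(key); here a dict stands in for spark.conf.

def get_pipeline_setting(conf, key, default=None):
    """Return a pipeline configuration value, falling back to a default."""
    try:
        return conf[key]
    except KeyError:
        if default is not None:
            return default
        raise

# Simulated spark.conf populated from the pipeline's configuration block,
# e.g. {"mypipeline.name": "bronze_ingest"} (key names are hypothetical).
fake_conf = {"mypipeline.name": "bronze_ingest"}

name = get_pipeline_setting(fake_conf, "mypipeline.name")
env = get_pipeline_setting(fake_conf, "mypipeline.env", default="dev")
print(name, env)
```

Because each pipeline sets its own configuration values, the shared code stays identical across pipelines.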

TamD
by Contributor
  • 4082 Views
  • 1 reply
  • 0 kudos

ModuleNotFoundError importing function modules into DLT pipelines

Following best practice, we want to avoid duplicating code by putting commonly used transformations into function libraries and then importing and calling those functions where required. We also want to follow Databricks recommendations to use serverless ...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

You are correctly following Databricks’ recommendation to store shared code in Python files and import them into your notebooks, especially for Delta Live Tables (DLT) pipelines and serverless environments. However, import path issues are common, par...
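A common workaround is to add the library's folder to `sys.path` before importing. The workspace path below is a placeholder for wherever your function library actually lives:

```python
# Sketch: make a shared module folder importable in a DLT notebook.
# The path is hypothetical; adjust it to your repo layout.
import sys

shared_lib = "/Workspace/Repos/team/project/src"  # hypothetical path

if shared_lib not in sys.path:
    sys.path.insert(0, shared_lib)

# After this, `import my_transforms` would resolve against shared_lib,
# assuming a my_transforms.py file exists there.
print(shared_lib in sys.path)
```

Inserting at position 0 makes the shared folder win over same-named modules elsewhere on the path.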

cszczotka
by New Contributor III
  • 4610 Views
  • 1 reply
  • 0 kudos

Delta Sharing open issue with accessing data on storage

Hi, I have configured Delta Sharing for an external consumer in Azure Databricks. Azure Databricks and the storage account are in a VNet with no public access. The storage account also has account key access and shared key authorization disabled. I'm running delt...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

Delta Sharing in Azure Databricks allows sharing datasets across clouds and with external consumers, but when used in a tightly controlled network environment (private endpoints, no public access, restricted storage account authentication), it behave...

dc-rnc
by Contributor
  • 4083 Views
  • 2 replies
  • 2 kudos

Issue pulling Docker Image on Databricks Cluster through Azure Container Registry

Hi Community. Essentially, we're using ACR to push our custom Docker image, and we would like to pull it when creating a Databricks cluster. However, during cluster creation, we got the following error. I'm convinced we tried to authenticate in al...

Latest Reply
mark_ott
Databricks Employee
  • 2 kudos

You are experiencing an authentication issue when trying to use a custom Docker image from Azure Container Registry (ACR) with Databricks clusters, despite successfully using admin tokens and service principals with acrpull permissions in other conte...
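For reference, this is the payload shape the Clusters API expects for a custom image with basic auth. The registry URL, image name, node type, and secret scope/key names are placeholders; the credentials should come from secrets rather than plain text:

```python
# Sketch: Clusters API payload for a custom image pulled from ACR.
# All names below are placeholders.
cluster_spec = {
    "cluster_name": "custom-image-cluster",
    "spark_version": "15.4.x-scala2.12",
    "node_type_id": "Standard_DS3_v2",
    "num_workers": 2,
    "docker_image": {
        "url": "myregistry.azurecr.io/my-image:latest",
        "basic_auth": {
            "username": "{{secrets/acr/client-id}}",
            "password": "{{secrets/acr/client-secret}}",
        },
    },
}
print(cluster_spec["docker_image"]["url"])
```

With an ACR service principal, the username is typically the application (client) ID and the password its client secret, granted at least the acrpull role.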

1 More Replies
jeremy98
by Honored Contributor
  • 4169 Views
  • 1 reply
  • 0 kudos

Hydra configuration and job parameters of DABs

Hello Community, I'm trying to create a job pipeline in Databricks that runs a spark_python_task, which executes a Python script configured with Hydra. The script's configuration file defines parameters, such as id. How can I pass this parameter at the...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

You can pass and override configuration parameters for Hydra in a Databricks spark_python_task by specifying job-level parameters (as arguments) and using environment variables or Hydra’s command line overrides. For accessing secrets with dbutils.sec...
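Hydra accepts `key=value` overrides on the command line, so a `spark_python_task` can forward its job parameters (e.g. `["id=123"]`) straight into `sys.argv`, where `@hydra.main` consumes them. A pure-Python sketch of the override shape (the parser below is only illustrative; Hydra does this parsing itself):

```python
# Sketch: Hydra-style "key=value" overrides as they would arrive via
# a spark_python_task's parameters list.

def parse_overrides(argv):
    """Split Hydra-style key=value overrides into a dict."""
    overrides = {}
    for arg in argv:
        key, sep, value = arg.partition("=")
        if sep:
            overrides[key] = value
    return overrides

# What the script would receive if the job parameters were
# ["id=123", "env=prod"]
params = parse_overrides(["id=123", "env=prod"])
print(params)
```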

siddharthsomni
by New Contributor
  • 3327 Views
  • 2 replies
  • 0 kudos

Databricks Asset Bundles - Notebook-based bundling alternative to CLI approach

Hello All - I have a scenario where we want to do the entire bundling and packaging in a notebook to deploy jobs using Databricks Asset Bundles, without using the CLI or VS Code. I didn't find any material or reference that provides insights. Any input would be ...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

Deploying Databricks Asset Bundles entirely from a notebook—without using the CLI or VS Code—is not a standard workflow but can be orchestrated using newer features in the Databricks workspace UI and by leveraging programmatic workspace operations. D...

1 More Replies
Michał
by New Contributor III
  • 2493 Views
  • 6 replies
  • 3 kudos

Resolved! How to process a streaming Lakeflow Declarative Pipeline in batches

Hi, I've got a problem and I have run out of ideas as to what else I can try. Maybe you can help? I've got a Delta table with hundreds of millions of records on which I have to perform relatively expensive operations. I'd like to be able to process some...

Latest Reply
Michał
New Contributor III
  • 3 kudos

thanks @mmayorga 

5 More Replies
JameDavi_51481
by Contributor
  • 11312 Views
  • 11 replies
  • 13 kudos

Can we add tags to Unity Catalog through Terraform?

We use Terraform to manage most of our infrastructure, and I would like to extend this to Unity Catalog. However, we are extensive users of tagging to categorize our datasets, and the only programmatic method I can find for adding tags is to use SQL ...

Latest Reply
jlieow
Databricks Employee
  • 13 kudos

In case anyone comes across this, have a look at databricks_entity_tag_assignment and see if it suits your needs.

10 More Replies
DataGirl
by Databricks Partner
  • 17656 Views
  • 7 replies
  • 2 kudos

Multi-value parameter on Power BI Paginated / SSRS connected to Databricks using ODBC

Hi All, I'm wondering if anyone has had any luck setting up multi-valued parameters on SSRS using an ODBC connection to Databricks? I'm getting a "Cannot add multi value query parameter" error every time I change my parameter to multi-value. In the query s...

Latest Reply
kashti123
Databricks Partner
  • 2 kudos

Hi, I am also trying to set multi-value parameters using a dynamic SQL expression. However, the report gives an error that multi-value parameters are not supported by the data extension. Any help on this would be highly appreciated. Thanks, Drishti

6 More Replies
kcyugesh
by New Contributor II
  • 376 Views
  • 1 reply
  • 1 kudos

Resolved! Delta Live Tables not showing in workspace (Azure Databricks with premium plan)

- I have a premium plan and owner-level access

Latest Reply
szymon_dybczak
Esteemed Contributor III
  • 1 kudos

Hi @kcyugesh, They renamed DLT to Lakeflow Declarative Pipelines, so you won't find the DLT name in the UI. Click Jobs & Pipelines and then ETL pipeline to access the declarative pipeline editor.

DE5
by New Contributor
  • 239 Views
  • 1 reply
  • 1 kudos

Resolved! Unable to see the Assistant suggested code and current code side by side

Hi, I'm unable to see the Assistant's suggested code and my current code side by side. Previously I was able to see my code and the Assistant's suggested code side by side, which helped me understand the changes. Please suggest if there is any way to do this. T...

Latest Reply
ManojkMohan
Honored Contributor II
  • 1 kudos

@DE5 Some recent updates moved comparison features into the SQL Editor side panel or rely on “Cell Actions,” where you can generate code or format it and then see differences before applying changes: https://www.databricks.com/blog/introducing-new-sql-...
