Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

akuma643
by New Contributor II
  • 3462 Views
  • 2 replies
  • 0 kudos

The authentication value "ActiveDirectoryManagedIdentity" is not valid.

Hi Team, I am trying to connect to SQL Server hosted in an Azure VM using Entra ID authentication from Databricks ("authentication", "ActiveDirectoryManagedIdentity"). Below is the notebook script I am using: driver = "com.microsoft.sqlserver.jdbc.SQLServe...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

You are encountering an error because the default SQL Server JDBC driver bundled with Databricks may not fully support the authentication value "ActiveDirectoryManagedIdentity"—this option requires at least version 10.2.0 of the Microsoft SQL Server ...
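A minimal sketch of the kind of connection this unlocks once a 10.2.0+ driver is available; server, database, and table names are placeholders:

```python
# Sketch, assuming the MSSQL JDBC driver is >= 10.2.0 and the workspace's
# managed identity has been granted access to the database.
df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://<server>.database.windows.net:1433;databaseName=<db>")
    .option("dbtable", "dbo.<table>")
    .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
    .option("authentication", "ActiveDirectoryManagedIdentity")
    .load()
)
```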

1 More Replies
ADuma
by New Contributor III
  • 3397 Views
  • 1 reply
  • 0 kudos

Structured Streaming with queue in separate storage account

Hello, we are running a Structured Streaming job which consumes zipped JSON files that arrive in our Azure Prod storage account. We are using Auto Loader and have set up an Event Grid queue which we pass to the streaming job using cloudFiles.queueName. ...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

You are attempting to have your Test Databricks streaming job consume files that arrive in your Prod storage, using AutoLoader and EventGrid notifications, without physically copying the data or EventGrid queue to Test. The core challenge is that Eve...
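A minimal sketch of that setup, assuming Auto Loader file-notification mode and a secret holding the Prod queue's connection string; queue name, secret scope, and paths are placeholders:

```python
# Sketch: a Test-side stream consuming Prod-landed files via an existing
# Event Grid queue; no data or queue is copied to Test.
stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.useNotifications", "true")
    .option("cloudFiles.queueName", "<prod-eventgrid-queue>")
    .option(
        "cloudFiles.connectionString",
        dbutils.secrets.get("prod-scope", "queue-connection-string"),
    )
    .load("abfss://landing@<prodaccount>.dfs.core.windows.net/json/")
)
```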

turagittech
by Contributor
  • 3367 Views
  • 1 reply
  • 0 kudos

Identify source of data in query

Hi All, I have an issue. I have several databases with the same schemas that I need to source data from. Those databases are going to end up aggregated in a data warehouse. The problem is that the id column in each means different things. Example: a client id i...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

Migrating from Data Factory to Databricks for ETL and warehousing is a solid choice, especially for flexibility and cost-effectiveness in data engineering projects. The core issue—disambiguating “id” fields that are only unique within each source dat...
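One common pattern, sketched here with illustrative catalog and column names, is to stamp each row with its source system and derive a surrogate key that is unique across databases:

```python
from pyspark.sql import functions as F

# Sketch: make per-database ids globally unambiguous before aggregation.
for source in ["client_db_a", "client_db_b"]:
    (
        spark.read.table(f"{source}.dbo.clients")
        .withColumn("source_db", F.lit(source))
        .withColumn("client_key", F.concat_ws("-", F.lit(source), F.col("id")))
        .write.mode("append")
        .saveAsTable("warehouse.dim.clients_all")
    )
```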

jeremy98
by Honored Contributor
  • 3968 Views
  • 2 replies
  • 0 kudos

Best practice on how to set up medallion architecture pipelines inside DAB

Hi Community, My team and I are working on refactoring our folder repository structure. Currently, I have been placing pipelines related to the Medallion architecture inside a folder named notebook/. However, I believe they should be moved to src/ sin...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

Refactoring your folder structure and naming conventions for Medallion architecture pipelines is an essential step to keep code maintainable and intuitive. Based on your context, shifting these pipelines from notebook/ to src/ is a solid move, especi...
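One possible bundle layout, sketched under the assumption of one folder per medallion layer (names are illustrative, not prescriptive):

    my_bundle/
      databricks.yml      (bundle and target definitions)
      resources/          (job and pipeline resource YAML)
      src/
        bronze/
        silver/
        gold/
      tests/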

1 More Replies
MaximeGendre
by New Contributor III
  • 3349 Views
  • 2 replies
  • 0 kudos

RLS function: concat vs list

Hello all, I'm designing a function to implement RLS on Unity Catalog for multiple tables of different sizes (1k to 10G rows). RLS will be applied on two columns and 150+ groups. I wonder what would be more performant: Solution 1: exhaustive (boring) li...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

The more performant solution for Row-Level Security (RLS) in Unity Catalog, when applying to two columns and 150+ groups, generally depends on how much of the access check logic can be pushed into efficient, indexable predicates versus computed at ru...
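A sketch of the concat-based variant, which keeps the function to a single predicate instead of a 150-entry list; the function, table, and group names are illustrative and assume a consistent group-naming convention:

```python
# Sketch: derive the group name from the row value rather than enumerating
# every group in the filter body.
spark.sql("""
    CREATE OR REPLACE FUNCTION main.security.region_filter(region STRING)
    RETURN is_account_group_member(concat('rls_region_', region))
""")
spark.sql("""
    ALTER TABLE main.sales.orders
    SET ROW FILTER main.security.region_filter ON (region)
""")
```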

1 More Replies
kenmyers-8451
by Contributor
  • 3580 Views
  • 2 replies
  • 0 kudos

Long runtimes on simple copying of data

Hi, my team has been trying to identify areas where we can improve our processes. We have some long runtimes on processes that have multiple joins and aggregations. To create a baseline we have been running tests on a simple select and write operation...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

Your slow Spark runtime and unexpectedly long WholeStageCodeGen compute times are likely tied to a mix of Delta Lake features (especially deletion vectors), Spark’s physical plan, and partition handling. Here’s a detailed breakdown and advice based o...
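To rule deletion vectors in or out as the bottleneck, one hedged experiment (the table name is a placeholder) is to disable them and purge the ones already written:

```python
# Sketch: inspect the table, turn off deletion vectors, then rewrite files
# so previously created deletion vectors are physically applied.
spark.sql("DESCRIBE DETAIL my_catalog.my_schema.big_table").show(truncate=False)
spark.sql("""
    ALTER TABLE my_catalog.my_schema.big_table
    SET TBLPROPERTIES ('delta.enableDeletionVectors' = 'false')
""")
spark.sql("REORG TABLE my_catalog.my_schema.big_table APPLY (PURGE)")
```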

1 More Replies
saadi
by New Contributor
  • 3340 Views
  • 1 reply
  • 0 kudos

Could not connect to a self-hosted MySQL database in Azure Databricks

Hi, I am trying to connect to a self-hosted MySQL database in Databricks but keep encountering errors. Database setup: the MySQL database is hosted on a VM. We use DBeaver or Navicat to query it. Connection to the database requires an active Azure VPN Client...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

To connect a self-hosted MySQL database (on a VM, Azure VPN required) to Databricks, you need several components to align: network access from Databricks to MySQL, proper JDBC connector configuration, and correct authentication. This setup is common ...
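A minimal sketch of the JDBC side, assuming network reachability (VPN gateway, peering, or similar) is already solved and the MySQL Connector/J driver is installed; all values are placeholders:

```python
df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:mysql://<vm-host-or-ip>:3306/<database>")
    .option("driver", "com.mysql.cj.jdbc.Driver")
    .option("dbtable", "<table>")
    .option("user", dbutils.secrets.get("db-scope", "mysql-user"))
    .option("password", dbutils.secrets.get("db-scope", "mysql-password"))
    .load()
)
```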

nishg
by New Contributor II
  • 3244 Views
  • 1 reply
  • 0 kudos

Upgraded cluster to 16.1/16.2 and uploading data (append) to Elastic index is failing

I have updated the compute cluster to both Databricks versions 16.1 and 16.2 and run the workflow to append data into an Elastic index, but it started failing with the below error. The same job works fine with Databricks version 15. Let me know if anyone co...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

Your error is a known issue appearing after upgrading Databricks clusters to versions 16.1 and 16.2, specifically when running workflows to append data into an Elasticsearch index. This error—"Path must be absolute: myindex/_delta_log"—indicates a ch...
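A sketch of an append that names the connector and index explicitly, so the index name is never interpreted as a filesystem or Delta path; the host and index are placeholders:

```python
(
    df.write.format("org.elasticsearch.spark.sql")
    .option("es.nodes", "<es-host>")
    .option("es.port", "9200")
    .option("es.resource", "myindex")
    .mode("append")
    .save()
)
```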

Sujith_i
by New Contributor
  • 3330 Views
  • 1 reply
  • 1 kudos

Databricks SDK for Python authentication failing

I am trying to use the Databricks SDK for Python to do some account-level operations like creating groups. I created a Databricks config file locally and provided the profile name as an argument to AccountClient, but authentication keeps failing. The same con...

Latest Reply
mark_ott
Databricks Employee
  • 1 kudos

Authentication for account-level operations with Databricks SDK for Python requires more than just referencing the profile name in your local .databrickscfg file. While the CLI consults .databrickscfg for profiles and can use them directly, the SDK's...
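A sketch of account-level construction: the accounts-console host and an account_id are required, and the credential values shown are placeholders:

```python
from databricks.sdk import AccountClient

# Sketch: a workspace-style profile alone is not enough for account APIs.
a = AccountClient(
    host="https://accounts.azuredatabricks.net",  # accounts.cloud.databricks.com on AWS
    account_id="<account-uuid>",
    client_id="<service-principal-client-id>",
    client_secret="<service-principal-secret>",
)
for group in a.groups.list():
    print(group.display_name)
```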

AvneeshSingh
by New Contributor
  • 3279 Views
  • 2 replies
  • 1 kudos

Autoloader Data Reprocess

Hi, if possible can anyone please help me with some Autoloader options? I have 2 open queries: (i) Let's assume I am running some Autoloader stream and my job fails; instead of resetting the whole checkpoint, I want to run the stream from a specified timest...

Data Engineering
autoloader
Latest Reply
mark_ott
Databricks Employee
  • 1 kudos

In Databricks Autoloader, controlling the starting point for streaming data after a job failure requires careful management of checkpoints and configuration options. By default, Autoloader uses checkpoints to remember where the stream last left off, ...
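A sketch of the timestamp-based restart idea: point the stream at a fresh checkpoint and skip older files with modifiedAfter (the path and timestamp are placeholders):

```python
# Sketch: re-ingest only files newer than a given timestamp, instead of
# resetting the original stream's checkpoint.
stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("modifiedAfter", "2025-01-01 00:00:00.000000 UTC+0")
    .load("abfss://landing@<account>.dfs.core.windows.net/data/")
)
```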

1 More Replies
Nidhig
by Contributor
  • 60 Views
  • 1 reply
  • 2 kudos

Resolved! Global Parameter at the Pipeline level in Lakeflow Job

Hi, is there any workaround, or can Databricks enable a global parameters feature at the pipeline level in Lakeflow Jobs? Currently I am working on migrating an ADF pipeline schedule setup to a Lakeflow job.

Latest Reply
mark_ott
Databricks Employee
  • 2 kudos

Databricks Lakeflow Declarative Pipelines do not currently support truly global parameters at the pipeline level in the same way that Azure Data Factory (ADF) allows, but there are workarounds that enable parameterization to streamline migration from...
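One such workaround, sketched with an illustrative configuration key, is to read a pipeline configuration value inside the pipeline code and branch on it:

```python
# Sketch: approximate a "global" parameter with pipeline configuration.
env = spark.conf.get("mypipeline.environment", "dev")
source_path = f"abfss://landing@{env}account.dfs.core.windows.net/input/"
```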

VaDim
by New Contributor III
  • 80 Views
  • 1 reply
  • 0 kudos

transformWithStateInPandas. Invalid pickle opcode when updating ValueState with large (float) array

I am getting an error when the entity I need to store in a ValueState is a large array (over 15k-20k items). No error (and works correctly) if I trim the array to under 10k samples. The same error is raised when using it as a value for MapState or as...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

The error you’re facing, specifically PySparkRuntimeError: Error updating value state: invalid pickle opcode, usually points to a serialization (pickling) problem when storing large arrays in Spark streaming state such as ValueState, ListState, or MapStat...
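A sketch of one mitigation idea only, showing the chunking itself rather than the exact state API: break the array into smaller pieces so no single state value produces an oversized pickle payload:

```python
import numpy as np

# Sketch: each chunk would be stored as its own state entry (e.g., in a
# ListState) instead of one monolithic ValueState value.
arr = np.random.rand(20_000)
CHUNK = 2_000
chunks = [arr[i:i + CHUNK].tolist() for i in range(0, len(arr), CHUNK)]
assert sum(len(c) for c in chunks) == len(arr)
```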

SamAdams
by Contributor
  • 48 Views
  • 1 reply
  • 0 kudos

Time window for "All tables are updated" option in job Table Update Trigger

I've been using the Table Update Trigger for some SQL alert workflows. I have a job that uses 3 tables with an "All tables updated" trigger: Table 1 was updated at 07:20 UTC, Table 2 was updated at 16:48 UTC, Table 3 was updated at 16:50 UTC -> Job is trig...

Data Engineering
jobs
TableUpdateTrigger
Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

There is no fixed or documented “window” time for the interval between updates to all monitored tables before a job with an "All tables updated" trigger runs in Databricks. The job is triggered as soon as every table in the set has seen at least one ...
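For reference, the knobs that shape this behavior live on the job's trigger settings; a hedged sketch via the databricks-sdk Jobs API, with the job id, table names, and wait value as placeholders:

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

# Sketch: an "all tables updated" trigger with an explicit quiet period
# after the last change before the run fires.
w = WorkspaceClient()
w.jobs.update(
    job_id=123,
    new_settings=jobs.JobSettings(
        trigger=jobs.TriggerSettings(
            table_update=jobs.TableUpdateTriggerConfiguration(
                table_names=["cat.sch.t1", "cat.sch.t2", "cat.sch.t3"],
                condition=jobs.Condition.ALL_UPDATED,
                wait_after_last_change_seconds=600,
            )
        )
    ),
)
```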

deano2025
by New Contributor II
  • 38 Views
  • 0 replies
  • 0 kudos

Databricks Asset Bundles CI/CD design for GitHub Actions

We want to use Databricks Asset Bundles and deploy code changes and tests using GitHub Actions. We have seen lots of content online, but nothing concrete on how this is done at scale. So I'm wondering, if we have many changes and therefore man...

Data Engineering
asset bundles
ak5har
by New Contributor II
  • 2718 Views
  • 9 replies
  • 2 kudos

Databricks connection to on-prem Cloudera

Hello, we are trying to evaluate a Databricks solution to extract data from an existing Cloudera schema hosted on a physical server. We are using the Databricks serverless compute provided by the Databricks Express setup and we assume we will not need t...

Latest Reply
Adrian_Ashley
New Contributor
  • 2 kudos

I work for a Databricks partner called Cirata. Our Data Migrator offering allows both data and metadata replication from Cloudera to be delivered to the Databricks environment, whether this is just delivering it to ADLS2 object storage or to ...

8 More Replies
