Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

by Galih, New Contributor II
  • 591 Views
  • 3 replies
  • 4 kudos

Resolved! Spark Structured Streaming: calculate signal, help required! 🙏

Hello everyone! I'm very new to Spark Structured Streaming, and not a data engineer. I would appreciate guidance on how to efficiently process streaming data and emit only changed aggregate results over multiple time windows. Input Stream: Source: A...

Latest Reply
Hubert-Dudek
Databricks MVP
  • 4 kudos

I would implement stateful streaming by using transformWithStateInPandas to keep the state and implement the logic there. I would avoid doing stream-stream JOINs.

2 More Replies
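For later readers, a minimal sketch of what that suggestion could look like on Spark 4.0 / DBR 16.2+ (the source table, column names, and state schema are illustrative, not from this thread):

```python
# Sketch: stateful streaming with transformWithStateInPandas that emits a
# row only when a key's aggregate changes. Table/column names are hypothetical.
import pandas as pd
from pyspark.sql.streaming import StatefulProcessor, StatefulProcessorHandle
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

output_schema = StructType([
    StructField("key", StringType()),
    StructField("signal", DoubleType()),
])

class SignalProcessor(StatefulProcessor):
    def init(self, handle: StatefulProcessorHandle) -> None:
        # One value-state slot per key holding the last emitted aggregate.
        self.last = handle.getValueState("last", "signal DOUBLE")

    def handleInputRows(self, key, rows, timerValues):
        total = sum(pdf["value"].sum() for pdf in rows)
        prev = self.last.get()[0] if self.last.exists() else None
        if prev != total:                 # emit only on change
            self.last.update((float(total),))
            yield pd.DataFrame({"key": [key[0]], "signal": [float(total)]})

    def close(self) -> None:
        pass

out = (
    spark.readStream.table("events")      # hypothetical source stream
    .groupBy("key")
    .transformWithStateInPandas(
        statefulProcessor=SignalProcessor(),
        outputStructType=output_schema,
        outputMode="Update",
        timeMode="None",
    )
)
```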
by chirag_nagar, New Contributor
  • 3623 Views
  • 12 replies
  • 2 kudos

Seeking Guidance on Migrating Informatica PowerCenter Workflows to Databricks using Lakebridge

Hi everyone, I hope you're doing well. I'm currently exploring options to migrate a significant number of Informatica PowerCenter workflows and mappings to Databricks. During my research, I came across Lakebridge, especially its integration with BladeB...

Latest Reply
AnnaKing
New Contributor II
  • 2 kudos

Hi Chirag. At Kanerika Inc., we've built a migration accelerator that automates 80% of the Informatica-to-Databricks migration process, saving significant time, effort, and resources. You can check out the demo video here - https://ww...

11 More Replies
by bercaakbayir, New Contributor
  • 182 Views
  • 1 reply
  • 0 kudos

Data Ingestion - Missing Permission

Hi, I would like to use Data Ingestion through Fivetran connectors to get data from an external data source into Databricks, but I am getting a missing-permission error. I already have admin permission. I kindly ask for your help with this situation. Look...

Latest Reply
Raman_Unifeye
Contributor III
  • 0 kudos

@bercaakbayir - two areas to look at for permissions: Unity Catalog permissions and destination-level permissions. Please check: UC is enabled for your workspace [metastore admin, not workspace admin]; CREATE permissions on the target catalog - the user or SP should hav...

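For reference, the destination-level grants described above could be issued along these lines (catalog, schema, and principal names are placeholders):

```python
# Hypothetical catalog/schema/principal names; run as a sufficiently
# privileged user. Gives the ingestion principal the rights to land tables.
spark.sql("GRANT USE CATALOG ON CATALOG main TO `ingest-sp`")
spark.sql("GRANT USE SCHEMA, CREATE TABLE ON SCHEMA main.raw TO `ingest-sp`")
```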
by der, Contributor III
  • 1344 Views
  • 7 replies
  • 5 kudos

Resolved! EXCEL_DATA_SOURCE_NOT_ENABLED Excel data source is not enabled in this cluster

I want to read an Excel xlsx file on DBR 17.3. On the cluster, the library dev.mauch:spark-excel_2.13:4.0.0_0.31.2 is installed. The V1 implementation works fine: df = spark.read.format("dev.mauch.spark.excel").schema(schema).load(excel_file); display(df). V2...

Latest Reply
der
Contributor III
  • 5 kudos

I reached out to Databricks support and they fixed it with the December 2025 maintenance update. Now both the open-source Excel reader and the new built-in one should work: https://learn.microsoft.com/en-gb/azure/databricks/query/formats/excel

6 More Replies
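For anyone landing here later, the two read paths discussed in this thread look roughly like this; the file path and schema are placeholders, and the built-in reader's format name and availability depend on the DBR maintenance update per the linked docs:

```python
# V1 open-source spark-excel (dev.mauch) reader:
df_v1 = (spark.read.format("dev.mauch.spark.excel")
         .schema(schema)                                # schema defined elsewhere
         .load("/Volumes/main/raw/files/report.xlsx"))  # hypothetical path

# Built-in Excel reader (recent DBRs), assuming the format name from the docs:
df_native = spark.read.format("excel").load("/Volumes/main/raw/files/report.xlsx")
```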
by pdiamond, Contributor
  • 222 Views
  • 1 reply
  • 0 kudos

Lakebase error logs

Anyone know where to see any logs related to Lakebase/Postgres? I have a Tableau Prep flow that is failing but the error is not clear and I'm trying to find out what the database is capturing.

Latest Reply
szymon_dybczak
Esteemed Contributor III
  • 0 kudos

Hi @pdiamond, you can try using the Lakebase monitoring tools to capture the queries generated by Tableau Prep: Monitor | Databricks on AWS. Alternatively, it seems you can also use external monitoring tools, so you can connect to your Lakebase instance usi...

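As a concrete example of the external-tools route: since Lakebase speaks standard Postgres, any Postgres client can inspect activity. A sketch with psycopg2 (host, database, and credentials are placeholders for your instance):

```python
# Connect with a standard Postgres driver and list non-idle sessions,
# which should include anything Tableau Prep is currently running.
import psycopg2

conn = psycopg2.connect(
    host="<lakebase-instance-host>",      # placeholder
    dbname="databricks_postgres",         # placeholder
    user="someone@example.com",           # placeholder
    password="<token>",                   # placeholder
    sslmode="require",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT pid, state, query FROM pg_stat_activity WHERE state <> 'idle'")
    for row in cur.fetchall():
        print(row)
```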
by dvd_lg_bricks, New Contributor III
  • 832 Views
  • 10 replies
  • 3 kudos

Resolved! Questions About Workers and Executors Configuration in Databricks

Hi everyone, sorry, I'm new here. I'm considering migrating to Databricks, but I need to clarify a few things first. When I define and launch an application, I see that I can specify the number of workers, and then later configure the number of execut...

Latest Reply
Abeshek
New Contributor III
  • 3 kudos

Regarding your Databricks question about workers versus executors: many teams encounter the same sizing and configuration issues when evaluating a migration. At Kanerika, we help companies plan cluster architecture, optimize Spark workloads, and avoid overspen...

9 More Replies
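To add something concrete for later readers: on Databricks, each worker node runs a single executor by default, so the worker count set at cluster creation effectively is the executor count, and executor cores/memory follow from the node type. A sketch with the Databricks Python SDK (cluster name, DBR version, and node type are illustrative):

```python
# Sketch: num_workers determines the executor count (one executor per worker).
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
cluster = w.clusters.create_and_wait(
    cluster_name="etl-demo",               # illustrative
    spark_version="16.4.x-scala2.12",      # illustrative DBR version
    node_type_id="Standard_DS3_v2",        # illustrative node type
    num_workers=4,                         # -> four executors
)
print(cluster.cluster_id)
```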
by michal1228, New Contributor II
  • 522 Views
  • 4 replies
  • 0 kudos

Import Python Modules with Git Folder Error

Dear Databricks Community, we encountered a bug in the behaviour of the import method explained in the documentation: https://learn.microsoft.com/en-us/azure/databricks/files/workspace-modules#autoreload-for-python-modules. A couple of months ago we migrated our pipelin...

Latest Reply
michal1228
New Contributor II
  • 0 kudos

We're using DBR version 16.4

3 More Replies
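For context, the documented pattern under discussion is the IPython autoreload extension for workspace modules, roughly as follows (the module name is a placeholder):

```python
# In a notebook cell: re-import workspace .py modules when they change.
%load_ext autoreload
%autoreload 2

import my_pipeline_utils   # hypothetical module in the Git folder
```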
by Fatimah-Tariq, New Contributor III
  • 934 Views
  • 7 replies
  • 4 kudos

Resolved! Writing to Foreign catalog

I have a running notebook job where I do some processing and write the tables to a foreign catalog. It has been running successfully for about a year. The job is scheduled and runs on a job cluster with DBR 16.2. Recently, I had to add new noteb...

Latest Reply
Fatimah-Tariq
New Contributor III
  • 4 kudos

Thank you @Louis_Frolio! Your suggestions really helped me understand the scenario.

6 More Replies
by skuvisk, New Contributor II
  • 366 Views
  • 2 replies
  • 1 kudos

Resolved! CLS function with lookup fails on dates

Hello, I'm conducting research on utilizing CLS in a project. We are implementing a lookup table to determine which tags a user can see. The CLS function looks like this: CREATE OR REPLACE FUNCTION {catalog}.{schema}.mask_column(value VARIANT, tag STRIN...

Latest Reply
skuvisk
New Contributor II
  • 1 kudos

Thank you for an insightful answer @Poorva21. I conclude from your reasoning that this is the result of an optimization/engine error. It seems like I will need to resort to a workaround for the date columns then...

1 More Reply
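For readers hitting the same issue, a lookup-based column mask along the lines discussed in this thread might look like the sketch below; all object names are placeholders, and the thread's finding is that this pattern can misbehave on DATE columns:

```python
# Hypothetical lookup-driven column mask; names are placeholders.
spark.sql("""
CREATE OR REPLACE FUNCTION main.sec.mask_date(value DATE, row_tag STRING)
RETURNS DATE
RETURN CASE
  WHEN EXISTS (
    SELECT 1 FROM main.sec.tag_lookup l
    WHERE l.tag = row_tag AND l.user_name = current_user()
  ) THEN value
  ELSE NULL
END
""")
spark.sql("""
ALTER TABLE main.sec.events
ALTER COLUMN event_date SET MASK main.sec.mask_date USING COLUMNS (tag)
""")
```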
by Jarno, New Contributor III
  • 842 Views
  • 4 replies
  • 1 kudos

Dangerous implicit type conversions on 17.3 LTS.

Starting with DBR 17 running Spark 4.0, spark.sql.ansi.enabled is set to true by default. With the flag enabled, strings are implicitly converted to numbers in a very dangerous manner. Consider: SELECT 123='123'; SELECT 123='123X'; The first one is succe...

Latest Reply
Jarno
New Contributor III
  • 1 kudos

FYI, it seems I was mistaken about the behaviour of '::' on Spark 4.0.1. It does indeed work like CAST on both DBR 17.3 and Spark 4.0.1 and raises an exception on '123X'::int. The '?::' operator seems to be a Databricks only extension at the moment (...

3 More Replies
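A small demonstration of the behaviour under discussion, for anyone verifying on their own cluster (DBR 17+ / Spark 4.0, where ANSI mode is on by default):

```python
# Under ANSI mode, string operands of '=' are coerced to numbers, so a
# non-numeric string fails at runtime rather than silently comparing false.
spark.sql("SELECT 123 = '123'").show()      # true: '123' is cast to 123
# spark.sql("SELECT 123 = '123X'").show()   # raises CAST_INVALID_INPUT

# try_cast (and, per the reply above, Databricks' '?::') yields NULL instead:
spark.sql("SELECT try_cast('123X' AS INT)").show()   # NULL
```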
by prashant151, New Contributor II
  • 392 Views
  • 2 replies
  • 3 kudos

Resolved! Using an init script to execute a Python notebook at the all-purpose cluster level

Hi, we have setup.py in our Databricks workspace. This script is executed in other transformation scripts using %run /Workspace/Common/setup.py, which consumes a lot of time. This setup.py internally calls other utility notebooks using %run: %run /Workspace/Co...

Latest Reply
iyashk-DB
Databricks Employee
  • 3 kudos

You can't "%run a notebook" from a cluster init script; init scripts are shell-only and meant for environment setup (install libs, set env vars), not for executing notebooks or sharing Python state across sessions. +1 to what @Raman_Unifeye said. ...

1 More Reply
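Building on that, the usual replacement for %run-based setup is to keep the shared code in plain .py files and import them, since Python caches imports within a session. A sketch (paths, module, and function names are placeholders):

```python
# Instead of %run /Workspace/Common/setup.py in every notebook:
import sys

sys.path.append("/Workspace/Common")   # make workspace files importable
import setup_utils                     # hypothetical module: setup_utils.py

setup_utils.init_session(spark)        # hypothetical entry point
```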
by nick_heybuddy, New Contributor II
  • 310 Views
  • 1 reply
  • 2 kudos

Notebooks suddenly fail to retrieve Databricks secrets

At around 5:30 am (UTC+11) this morning, a number of our scheduled serverless notebook jobs started failing when attempting to retrieve Databricks secrets. We are able to retrieve the secrets using the Databricks CLI, and the jobs run as a user tha...

Latest Reply
liu
Contributor
  • 2 kudos

Me too. But it looks like there hasn't been any official reply regarding this matter yet.

by demo-user, New Contributor II
  • 343 Views
  • 3 replies
  • 0 kudos

Connecting to an S3 compatible bucket

Hi everyone, I'm trying to connect Databricks to an S3-compatible bucket using a custom endpoint URL and access keys. I'm using an Express account with serverless SQL warehouses, but the only external storage options I see are AWS IAM roles or Cloudfla...

Latest Reply
Raman_Unifeye
Contributor III
  • 0 kudos

Serverless compute does not support setting most Apache Spark configuration properties, irrespective of tier, as Databricks fully manages the underlying infrastructure.

2 More Replies
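For completeness, on classic (non-serverless) compute the usual way to reach an S3-compatible endpoint is via the Hadoop S3A properties, along these lines (the endpoint, secret scope, and bucket are placeholders):

```python
# Classic-compute sketch only; serverless warehouses don't allow these configs.
spark.conf.set("fs.s3a.endpoint", "https://s3.my-provider.example.com")
spark.conf.set("fs.s3a.access.key", dbutils.secrets.get("s3-scope", "access-key"))
spark.conf.set("fs.s3a.secret.key", dbutils.secrets.get("s3-scope", "secret-key"))
spark.conf.set("fs.s3a.path.style.access", "true")

df = spark.read.parquet("s3a://my-bucket/data/")   # hypothetical bucket
```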
by lucami, Contributor
  • 483 Views
  • 3 replies
  • 4 kudos

Resolved! What's the difference between dbmanagedidentity and a storage credential based on managed identity?

I'm looking for guidance on the differences between dbmanagedidentity (the workspace-managed identity) and Unity Catalog storage credentials based on Azure managed identity. Specifically, I'd like to understand: what are the key differences between thes...

Latest Reply
Raman_Unifeye
Contributor III
  • 4 kudos

Use dbmanagedidentity for non-storage Azure services, such as Cosmos DB, Azure SQL, Event Hubs, and Key Vault.

2 More Replies
by Malthe, Contributor III
  • 817 Views
  • 5 replies
  • 6 kudos

Self-referential foreign key constraint for streaming tables

When defining a streaming table using DLT (declarative pipelines), we can provide a schema which lets us define primary and foreign key constraints. However, references to self, i.e. the defining table, are not currently allowed (you get a "table not...

Latest Reply
Malthe
Contributor III
  • 6 kudos

Each of these workarounds gives up the optimizations that are enabled by the use of key constraints.

4 More Replies
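For reference, the kind of definition under discussion looks roughly like the sketch below (table, column, and source names are placeholders); the PRIMARY KEY is accepted, while adding a self-referencing FOREIGN KEY (parent_id) REFERENCES nodes(id) to the schema string is what currently fails:

```python
import dlt

@dlt.table(
    name="nodes",
    schema="id BIGINT NOT NULL PRIMARY KEY, parent_id BIGINT, payload STRING",
)
def nodes():
    # A self-referencing FK on parent_id is rejected with "table not found",
    # per this thread, because the table doesn't exist yet at definition time.
    return spark.readStream.table("source_nodes")   # hypothetical source
```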