Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

ralphchan
by New Contributor II
  • 3257 Views
  • 4 replies
  • 0 kudos

Connect Oracle Fusion (ERP / HCM) to Databricks

Any suggestions for connecting Oracle Fusion (ERP/HCM) to Databricks? I have explored a few options, including Oracle Integration Cloud, but it requires a lot of customization.

Latest Reply
nayan_wylde
Honored Contributor III
  • 0 kudos

I have used the Fivetran Oracle Fusion connector in the past. It is a fully managed ELT connector that extracts data from Oracle Fusion and loads it into Databricks.

3 More Replies
cpollock
by New Contributor III
  • 18 Views
  • 2 replies
  • 0 kudos

Resolved! Getting NO_TABLES_IN_PIPELINE error in Lakeflow Declarative Pipelines

Yesterday (10/1), starting around 12 PM EST, we started getting the following error in our Lakeflow Declarative Pipelines (LDP) process. We get this in environments where none of our code has changed. I found some info on the serverless compute abou...

Latest Reply
saurabh18cs
Honored Contributor II
  • 0 kudos

Hi @cpollock, check the “Event log” and “Pipeline logs” in the Databricks UI for any clues. Also, could you please share the screenshot pasted directly into the post window? The attachment is not really working and is stuck scanning.

1 More Replies
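As a starting point for the event-log suggestion above, here is a minimal sketch of pulling recent errors from a pipeline's event log in a Databricks notebook. The pipeline ID is a placeholder, and the columns selected are assumptions to verify against your runtime's event log schema.

```python
# Minimal sketch: recent ERROR-level events from a Lakeflow/DLT pipeline's
# event log. "<your-pipeline-id>" is a placeholder; run in a Databricks
# notebook where `spark` is already defined.
errors = spark.sql("""
    SELECT timestamp, event_type, message
    FROM event_log('<your-pipeline-id>')
    WHERE level = 'ERROR'
    ORDER BY timestamp DESC
    LIMIT 50
""")
errors.show(truncate=False)
```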
DiskoSuperStar
by New Contributor
  • 26 Views
  • 1 replies
  • 0 kudos

DLT Flow Redeclaration Error After Service Upgrade

Hi, our Delta Live Tables (Lakeflow Declarative Pipelines) pipeline started failing after the Sep 30 / Oct 1 service upgrade with the following error: AnalysisException: Cannot have multiple queries named `<table_name>_realtime_flow` for `<table_name>...

Latest Reply
saurabh18cs
Honored Contributor II
  • 0 kudos

Hi @DiskoSuperStar, it seems you’ve run into a recently enforced change in Databricks DLT/Lakeflow: multiple flows (append or otherwise) targeting the same table must have unique names. Your code actually looks correct. Check if your  table_info ...
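For illustration, a minimal sketch of giving each flow targeting the same table an explicit, unique name; the table and source names below are hypothetical.

```python
import dlt

# Hypothetical target table and sources, for illustration only.
dlt.create_streaming_table("orders")

# Give every flow that appends into the same target an explicit, unique name,
# so the pipeline does not declare two queries with the same generated name.
@dlt.append_flow(target="orders", name="orders_batch_flow")
def orders_batch_flow():
    return spark.readStream.table("raw.orders_batch")

@dlt.append_flow(target="orders", name="orders_realtime_flow")
def orders_realtime_flow():
    return spark.readStream.table("raw.orders_realtime")
```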

QuanSun
by New Contributor II
  • 1015 Views
  • 4 replies
  • 1 kudos

How to select performance mode for Databricks Delta Live Tables

Hi everyone, based on the official link: For triggered pipelines, you can select the serverless compute performance mode using the Performance optimized setting in the pipeline scheduler. When this setting is disabled, the pipeline uses standard perfor...

Latest Reply
BF7
Contributor
  • 1 kudos

I would like an answer to this question as well. I need to see how to turn this off, but no checkbox relating to performance optimization shows up in my serverless pipeline.

3 More Replies
Gvnreddy
by New Contributor
  • 84 Views
  • 3 replies
  • 4 kudos

Need Help to learn scala

Hi enthusiasts, I recently joined a company where Databricks notebooks are developed in Scala. Previously I worked with PySpark, which was very easy for me. I have 3 years of experience in DE, and I need help to wh...

Latest Reply
-werners-
Esteemed Contributor III
  • 4 kudos

FWIW: you do not have to be a Scala wiz to work on Spark in Scala. Older Spark articles are often about Scala in Spark (before Python took over). You will notice it is a lot like PySpark, but way better: typing, immutability, things like leftfold ...

2 More Replies
john77
by New Contributor
  • 88 Views
  • 5 replies
  • 1 kudos

Why ETL Pipelines and Jobs

I notice that ETL Pipelines let you run declarative SQL syntax, such as DLT tables, but you can do the same with Jobs if you use SQL as your task type. So why and when should you use ETL Pipelines?

Latest Reply
saurabh18cs
Honored Contributor II
  • 1 kudos

Hi @john77. SQL task type: simple, one-off SQL operations or batch jobs, or when you need to orchestrate a mix of notebooks, Python/Scala code, and SQL in a single workflow. Lakeflow Declarative Pipelines: complex, production ETL jobs that require lineage, mon...

4 More Replies
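To make the distinction concrete, here is a minimal, hypothetical Lakeflow Declarative Pipelines sketch in Python: the dependency between the two tables is inferred and data quality is enforced declaratively, which is the kind of lineage and monitoring a plain SQL task does not give you out of the box. The table names and source path are assumptions.

```python
import dlt
from pyspark.sql.functions import col

# Hypothetical source path and table names, for illustration only.
@dlt.table(comment="Raw events ingested from cloud storage.")
def events_raw():
    return spark.read.format("json").load("/Volumes/main/raw/events/")

# The pipeline infers that events_clean depends on events_raw and records the
# expectation results in the pipeline event log.
@dlt.table(comment="Cleaned events; rows with a null event_id are dropped.")
@dlt.expect_or_drop("valid_event_id", "event_id IS NOT NULL")
def events_clean():
    return dlt.read("events_raw").where(col("event_ts").isNotNull())
```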
DivyaKumar
by New Contributor
  • 19 Views
  • 1 replies
  • 0 kudos

Databricks to Dataverse migration via ADF copy data

Hi team, I need to load data from Databricks Delta tables to Dataverse tables, and I have one unique ID column which I am handling via mapping. Its datatype is GUID in Dataverse and string in the Delta table. I ensured that the column holds unique values. Sinc...

Latest Reply
-werners-
Esteemed Contributor III
  • 0 kudos

That is not a valid GUID. Dataverse will check this: http://guid.us/test/guid
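If it helps to pre-check the column before the ADF copy, here is a small sketch using Python's standard uuid module; it is purely illustrative and not specific to Dataverse.

```python
import uuid

def is_valid_guid(value: str) -> bool:
    """Return True if the string parses as a GUID/UUID."""
    try:
        uuid.UUID(value)
        return True
    except (ValueError, TypeError):
        return False

print(is_valid_guid("3f2504e0-4f89-11d3-9a0c-0305e82c3301"))  # True
print(is_valid_guid("not-a-guid"))                            # False
```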

Brahmareddy
by Esteemed Contributor
  • 36 Views
  • 1 replies
  • 2 kudos

How Databricks Helped Me See Data Engineering Differently

Over the years working as a data engineer, I’ve started to see my role very differently. In the beginning, most of my focus was on building pipelines—extracting, transforming, and loading data so it could land in the right place. Pipelines were the g...

Latest Reply
BS_THE_ANALYST
Esteemed Contributor II
  • 2 kudos

@Brahmareddy thanks for this! I think you've nailed it on the head there. If the stakeholders trust the data and there's integrity, governance, and a single source of truth, you've got a recipe for a great product! Love this take, @Brahmareddy. Really...

LeoRickli
by New Contributor II
  • 707 Views
  • 1 replies
  • 0 kudos

Databricks Asset Bundles fails deploy but works on the GUI with same parameters

I'm running into an issue with databricks bundle deploy when using job clusters. When I run databricks bundle deploy on a new workspace, or after destroying previous resources, the deployment fails with the error: Error: cannot update job: At l...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

Here are workarounds you can use until a fix is provided. Deploy jobs separately: deploy the first job using DAB, then uncomment and deploy the second one, as you found works. This avoids the conflict encountered when both are deployed together. For...

noorbasha534
by Valued Contributor II
  • 328 Views
  • 6 replies
  • 4 kudos

Figure out stale tables/folders being loaded by auto-loader

Hello all, we have a pipeline which uses Auto Loader to load data from cloud object storage (ADLS) into a Delta table. We use directory listing at the moment, and there are around 20,000 folders to be checked in ADLS every 30 minutes for new data...

Latest Reply
BS_THE_ANALYST
Esteemed Contributor II
  • 4 kudos

@Krishna_S I didn't know about file detection modes, that's very cool! @noorbasha534, according to the documentation, there is a piece about RocksDB: https://docs.databricks.com/aws/en/ingestion/cloud-object-storage/auto-loader/#how-does-auto-loader-...

5 More Replies
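For reference, a minimal sketch of switching Auto Loader from directory listing to file notification mode, which avoids re-listing thousands of folders on every run; note that notification mode needs extra cloud-side setup (event subscriptions/queues), and the paths, file format, and target table below are placeholders.

```python
# Placeholder paths, format, and table names; adjust to your environment.
df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "parquet")
    # File notification mode instead of the default directory listing.
    .option("cloudFiles.useNotifications", "true")
    .load("abfss://container@account.dfs.core.windows.net/landing/")
)

(
    df.writeStream
    .option("checkpointLocation", "/Volumes/main/chk/autoloader_demo")
    .trigger(availableNow=True)
    .toTable("main.bronze.events")
)
```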
billfoster
by New Contributor II
  • 25285 Views
  • 10 replies
  • 7 kudos

how can I learn DataBricks

I am currently enrolled in a data engineering boot camp. We go over various technologies: Azure, PySpark, Airflow, Hadoop, NoSQL, SQL, Python. But not something like Databricks. I am in contact with lots of recent graduates who landed a job. Almo...

Latest Reply
SprunkiRetake
  • 7 kudos

Yes, I often refer to the helpful tutorials at https://www.youtube.com/c/AdvancingAnalytics?reload=9&app=desktop

9 More Replies
GJ2
by New Contributor II
  • 8634 Views
  • 9 replies
  • 1 kudos

Install the ODBC Driver 17 for SQL Server

Hi, I am not a data engineer. I want to connect to SSAS, and it looks like it can be connected through pyodbc. However, it looks like I need to install "ODBC Driver 17 for SQL Server" using the following command. How do I install the driver on the cluster an...

(screenshot attached: GJ2_1-1739798450883.png)
Latest Reply
ghoriimanki
  • 1 kudos

The format of the `table_name` argument you're supplying to the `jdbc_writer` method appears to be the cause of the issue you're seeing. A string containing exactly one period is expected, so that it can be split into two pieces by the line `schema, table = table_n...

8 More Replies
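To illustrate the point in the reply above, here is a small hypothetical sketch of the split it describes; the helper name is made up and `jdbc_writer` itself is not shown.

```python
# Hypothetical helper mirroring the `schema, table = table_name.split(".")`
# pattern the reply refers to: exactly one period is expected.
def split_table_name(table_name: str):
    parts = table_name.split(".")
    if len(parts) != 2:
        raise ValueError(f"Expected 'schema.table', got {table_name!r}")
    schema, table = parts
    return schema, table

print(split_table_name("dbo.customers"))      # ('dbo', 'customers')
# split_table_name("catalog.dbo.customers")   # would raise ValueError
```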
devagya
by New Contributor
  • 1018 Views
  • 3 replies
  • 1 kudos

Infor Data Lake to Databricks

I'm working on a project which involves moving data from Infor to Databricks. Infor is somewhat of an enterprise solution. I could not find many resources on this; I could not even find a free trial option on their site. If anyone has experience w...

Latest Reply
Shirlzz
New Contributor II
  • 1 kudos

I specialise in data migration with Infor. What is your question: how to connect Databricks to the Infor Data Lake through the Data Fabric?

2 More Replies
leireroman
by New Contributor III
  • 2513 Views
  • 2 replies
  • 2 kudos

Resolved! DBR 16.4 LTS - Spark 3.5.2 is not compatible with Delta Lake 3.3.1

I'm migrating to Databricks Runtime 16.4 LTS, which uses Spark 3.5.2 and Delta Lake 3.3.1 according to the documentation (Databricks Runtime 16.4 LTS - Azure Databricks | Microsoft Learn). I've upgraded my conda environment to use those versions, bu...

(screenshot attached: Captura de pantalla 2025-06-09 084355.png)
Latest Reply
SamAdams
Contributor
  • 2 kudos

@leireroman I encountered the same and used an override (like a pip constraints.txt file or a PDM resolution override specification) to make sure my local development environment matched the runtime.

1 More Replies
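For anyone wanting the same workaround, a sketch of what such a pip constraints file might contain, assuming the Spark 3.5.2 / Delta Lake 3.3.1 versions mentioned in the thread; the package names and exact pins are assumptions to verify against the DBR 16.4 LTS release notes. It would be applied with something like `pip install -r requirements.txt -c constraints.txt`.

```
# constraints.txt (illustrative): pin local packages to the runtime versions
pyspark==3.5.2
delta-spark==3.3.1
```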
adrianhernandez
by New Contributor III
  • 50 Views
  • 2 replies
  • 1 kudos

Resolved! Folder execute permissions

Hello, after reading multiple posts, going through online forums, and even asking AI, I still don't have an answer to my questions. On the latest Databricks with Unity Catalog, what happens if I give users Execute permissions on a folder? Can they view the co...

Latest Reply
adrianhernandez
New Contributor III
  • 1 kudos

Thanks for your response. That's what I imagined, although I could not confirm it, as my current project uses Unity Catalog and we are not allowed to run many commands, including ACL-related PySpark code.

1 More Replies
