Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Datalight
by Contributor
  • 10 Views
  • 0 replies
  • 0 kudos

Design Oracle Fusion SCM to Azure Databricks

Hello techies, I am planning to migrate all modules of Oracle Fusion SCM data to Azure Databricks. Is BICC (Business Intelligence Cloud Connector) the only option, or is another option available? Can anyone please help me with a reference architecture...

intelliconnectq
by New Contributor II
  • 46 Views
  • 2 replies
  • 0 kudos

Resolved! Loading CSV from private S3 bucket

Trying to load a CSV file from a private S3 bucket. Please clarify the requirements to do this: Can I do it in Community Edition (if yes, then how)? How do I do it in the premium version? I have an IAM role and also an access key & secret.

Latest Reply
Coffee77
Contributor III
  • 0 kudos

Assuming you have these prerequisites: a private S3 bucket (e.g., s3://my-private-bucket/data/file.csv), an IAM user or role with access (list/get) to that bucket, and the AWS Access Key ID and Secret Access Key (client and secret). The most straightforward w...

1 More Replies
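For readers landing here from search, a minimal sketch of the access-key approach outlined in the reply above. The bucket path, secret scope, and key names are placeholders, and the exact S3A configuration mechanism may vary by workspace setup.

```python
# Sketch: read a CSV from a private S3 bucket with an access key/secret.
# Scope/key/bucket names are placeholders; prefer secrets over hard-coded credentials.
# dbutils is provided by the Databricks notebook runtime.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

access_key = dbutils.secrets.get(scope="aws", key="aws-access-key-id")      # placeholder scope/key
secret_key = dbutils.secrets.get(scope="aws", key="aws-secret-access-key")  # placeholder scope/key

spark.conf.set("fs.s3a.access.key", access_key)
spark.conf.set("fs.s3a.secret.key", secret_key)

df = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("s3a://my-private-bucket/data/file.csv")  # placeholder path from the reply's example
)
df.show(5)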
Charansai
by New Contributor III
  • 21 Views
  • 0 replies
  • 0 kudos

How to use serverless clusters in DAB deployments with Unity Catalog in private network?

Hi everyone, I'm deploying Jobs and Pipelines using Databricks Asset Bundles (DAB) in an Azure Databricks workspace configured with private networking. I'm trying to use serverless compute for some workloads, but I'm running into issues when Unity Cat...

Charansai
by New Contributor III
  • 30 Views
  • 1 reply
  • 0 kudos

Pipelines not included in Databricks Asset Bundles deployment

Hi all, I'm working with Databricks Asset Bundles (DAB) to build and deploy Jobs and pipelines across multiple environments in Azure Databricks. I can successfully deploy Jobs using bundles. However, when I try to deploy pipelines, I notice that the bun...

Latest Reply
cdn_yyz_yul
New Contributor III
  • 0 kudos

This example helped me deploy ETL pipelines as tasks in jobs to different workspaces: bundle-examples/lakeflow_pipelines_python at main · databricks/bundle-examples · GitHub

Brahmareddy
by Esteemed Contributor
  • 102 Views
  • 2 replies
  • 5 kudos

Future of Movie Discovery: How I Built an AI Movie Recommendation Agent on Databricks Free Edition

As a data engineer deeply passionate about how data and AI can come together to create real-world impact, I’m excited to share my project for the Databricks Free Edition Hackathon 2025 — Future of Movie Discovery (FMD). Built entirely on Databricks F...

Latest Reply
hasnat_unifeye
New Contributor
  • 5 kudos

Hi @Brahmareddy, really enjoyed your hackathon demo. You've set a high bar for NLP-focused projects. I picked up a lot from your approach and it's definitely given me ideas to try out. For my hackathon entry, I took a similar direction using pyspark.m...

1 More Replies
Hubert-Dudek
by Esteemed Contributor III
  • 25520 Views
  • 14 replies
  • 12 kudos

Resolved! dbutils or other magic way to get notebook name or cell title inside notebook cell

Not sure it exists, but maybe there is some trick to get these directly from Python code: NotebookName, CellTitle. I'm just working on a logger script shared between notebooks and it could make my life a bit easier.

Latest Reply
rtullis
New Contributor II
  • 12 kudos

I got the solution to work in terms of printing the notebook that I was running; however, what if you have notebook A that calls a function that prints the notebook name, and you run notebook B that %runs notebook A? I get notebook B's name when...

13 More Replies
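As a quick illustration of the kind of trick discussed in this thread, one widely shared approach reads the notebook path from the dbutils context (an internal entry point, so treat it as unofficial and subject to change):

```python
# Sketch: derive the current notebook's name from its workspace path.
# dbutils is provided by the Databricks notebook runtime; entry_point is an internal API.
import os

notebook_path = (
    dbutils.notebook.entry_point.getDbutils().notebook().getContext().notebookPath().get()
)
notebook_name = os.path.basename(notebook_path)
print(notebook_path, "->", notebook_name)
```

Consistent with the last reply, when notebook A is pulled in via %run from notebook B, the context belongs to the driver notebook, so this returns notebook B's name.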
kahrees
by New Contributor
  • 92 Views
  • 3 replies
  • 4 kudos

Resolved! DATA_SOURCE_NOT_FOUND Error with MongoDB (Suggestions in other similar posts have not worked)

I am trying to load data from MongoDB into Spark. I am using the Community/Free version of Databricks, so my Jupyter notebook is in a Chrome browser. Here is my code: from pyspark.sql import SparkSession spark = SparkSession.builder \ .config("spar...

Latest Reply
K_Anudeep
Databricks Employee
  • 4 kudos

Hey @kahrees, good day! I tested this internally, and I was able to reproduce the issue. You're getting [DATA_SOURCE_NOT_FOUND] ... mongodb because the MongoDB Spark connector jar isn't actually on your cluster's classpath. On D...

2 More Replies
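A hedged sketch of what the read looks like once the connector is actually attached to the cluster (for example as a Maven library such as org.mongodb.spark:mongo-spark-connector_2.12:10.x; the coordinate and version are assumptions to verify against your Spark/Scala version). The URI, database, and collection names are placeholders.

```python
# Sketch: read a MongoDB collection with the Spark connector 10.x "mongodb" source.
# Assumes the connector library is already installed on the cluster.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

uri = "mongodb+srv://<user>:<password>@<cluster-host>/"  # placeholder connection string

df = (
    spark.read.format("mongodb")
    .option("connection.uri", uri)
    .option("database", "sample_db")       # placeholder database
    .option("collection", "sample_coll")   # placeholder collection
    .load()
)
df.printSchema()
```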
eyalholzmann
by New Contributor
  • 102 Views
  • 3 replies
  • 1 kudos

Does VACUUM on Delta Lake also clean Iceberg metadata when using Iceberg Uniform feature?

I'm working with Delta tables using the Iceberg Uniform feature to enable Iceberg-compatible reads. I'm trying to understand how metadata cleanup works in this setup. Specifically, does the VACUUM operation—which removes old Delta Lake metadata based ...

Latest Reply
Louis_Frolio
Databricks Employee
  • 1 kudos

Here's how to approach cleaning and maintaining Apache Iceberg metadata on Databricks, and how it differs from Delta workflows. First, know your table type: for Unity Catalog–managed Iceberg tables, Databricks runs table maintenance for you (predicti...

2 More Replies
pooja_bhumandla
by New Contributor III
  • 44 Views
  • 1 reply
  • 0 kudos

Should I enable Liquid Clustering based on table size distribution?

Hi everyone, I'm evaluating whether Liquid Clustering would be beneficial for my tables based on their sizes. Below is the size distribution of tables in my environment (size bucket: table count): Large (> 1 TB): 3; Medium (10 GB – 1 TB): 284; Small (< 10 GB): 17,26...

Latest Reply
Louis_Frolio
Databricks Employee
  • 0 kudos

Greetings @pooja_bhumandla. Based on your size distribution, enabling Liquid Clustering can provide meaningful gains, but you'll get the highest ROI by prioritizing your medium and large tables first and selectively applying it to small tables where q...

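For readers evaluating the same question, a minimal example of turning on Liquid Clustering for one of the larger tables; the table and column names are hypothetical, and the clustering keys should match your most common filter/join columns.

```python
# Sketch: enable Liquid Clustering on an existing Delta table and cluster existing data.
# Table and column names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("ALTER TABLE main.sales.orders CLUSTER BY (customer_id, order_date)")
spark.sql("OPTIMIZE main.sales.orders")  # triggers incremental clustering of existing files
```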
Naveenkumar1811
by New Contributor
  • 25 Views
  • 1 reply
  • 0 kudos

Can we change the ownership of a Databricks-managed secret to an SP in Azure Databricks?

Hi Team, earlier we faced an issue where a jar file (created by an old employee) in a workspace directory is used as a library on a cluster which is run from an SP. Since the employee left the org and the ID got removed, even though the SP is part of ADMI...

Latest Reply
Coffee77
Contributor III
  • 0 kudos

That's the reason why I try to deploy most resources with service principal accounts while using Databricks Asset Bundles. Avoid human identities whenever possible because they can indeed go away... I think you'll have to create another s...

bidek56
by Contributor
  • 203 Views
  • 5 replies
  • 1 kudos

Resolved! Location of spark.scheduler.allocation.file

In DBR 16.4 LTS, I am trying to add the following Spark config: spark.scheduler.allocation.file: file:/Workspace/init/fairscheduler.xml. But the all-purpose cluster is throwing this error: Spark error: Driver down cause: com.databricks.backend.daemon.dri...

Latest Reply
mark_ott
Databricks Employee
  • 1 kudos

Here are some solutions without using DBFS. Yes, there are options for using the Spark scheduler allocation file on Databricks without DBFS, but they are limited and depend on your environment and access controls. Alternatives to DBFS for Schedu...

4 More Replies
Yuki
by Contributor
  • 103 Views
  • 4 replies
  • 1 kudos

Is there any way to run jobs from GitHub Actions and catch the results?

Hi all, is there any way to run jobs from GitHub Actions and catch the results? Of course, I can do this if I use the API or CLI. But I found this action for notebooks: https://github.com/marketplace/actions/run-databricks-notebook Compared to this, wri...

Latest Reply
Yuki
Contributor
  • 1 kudos

OK, thank you for your advice; I will consider using asset bundles for this.

3 More Replies
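Since the thread settles on the API/CLI (or asset bundle) route, here is a small sketch of the API option using the Databricks SDK for Python, which a GitHub Actions step could call; the job ID is a placeholder and authentication is assumed to come from DATABRICKS_HOST/DATABRICKS_TOKEN environment variables.

```python
# Sketch: trigger a Databricks job from CI, wait for it, and fail the step on error.
# Requires the databricks-sdk package; host/token are read from the environment.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

run = w.jobs.run_now(job_id=123456789).result()  # placeholder job ID; .result() blocks until the run ends
result_state = run.state.result_state.value if run.state and run.state.result_state else "UNKNOWN"
print(f"Job run finished with result state: {result_state}")

if result_state != "SUCCESS":
    raise SystemExit(1)  # non-zero exit marks the GitHub Actions step as failed
```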
Naveenkumar1811
by New Contributor
  • 94 Views
  • 2 replies
  • 0 kudos

What is the best practice for maintaining Delta tables loaded via streaming?

Hi Team, we have our Bronze (append), Silver (append), and Gold (merge) tables loaded using Spark streaming continuously with a processing-time trigger (3 secs). We also run maintenance jobs on the tables, like OPTIMIZE and VACUUM, and we perform DELETE for som...

Latest Reply
Naveenkumar1811
New Contributor
  • 0 kudos

Hi Mark, but the real problem is our streaming job runs 24*7, 365 days, and we can't afford any further latency in the data flowing to the gold layer. We don't have any window to pause or slow our streaming, and we continuously get the data feed, actually s...

1 More Replies
hidden
by New Contributor II
  • 44 Views
  • 1 reply
  • 0 kudos

DLT parameterization from job parameters

I have created a DLT pipeline notebook which creates tables based on a config file that holds the configuration of the tables that need to be created. Now what I want is to run my pipeline every 30 min for 4 tables from the config and every 3 hours...

Latest Reply
Coffee77
Contributor III
  • 0 kudos

Define "parameters" in the job as usual and then try to capture them in DLT by using code similar to this one: dlt.conf.get("PARAMETER_NAME", "PARAMETER_DEFAULT_VALUE"). It should get the parameter value from the job if it exists; otherwise it'll set the defau...

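A small sketch of that pattern with hypothetical names: the job passes a comma-separated table list as a parameter/pipeline configuration entry, and the pipeline defines a table per entry. spark.conf.get is used here as the accessor; verify it against the dlt.conf.get call suggested above in your runtime.

```python
# Sketch with placeholder names; dlt and spark are provided by the pipeline runtime.
# The job / pipeline configuration passes e.g. tables_to_refresh = "orders,customers".
import dlt

tables_to_refresh = [
    t.strip()
    for t in spark.conf.get("tables_to_refresh", "").split(",")
    if t.strip()
]

def define_table(name: str):
    @dlt.table(name=f"silver_{name}")
    def _build():
        # illustrative source; replace with your real bronze table or view
        return spark.read.table(f"bronze.{name}")

for table_name in tables_to_refresh:
    define_table(table_name)
```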
