Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

OmarE
by New Contributor II
  • 3665 Views
  • 1 reply
  • 1 kudos

Streamlit Databricks App Compute Scaling

I have a Streamlit Databricks app and I'm looking to increase the compute resources. According to the documentation and the current settings, the app is limited to 2 vCPUs and 6 GB of memory. Is there a way to adjust these limits or add more resource...

Latest Reply
mark_ott
Databricks Employee
  • 1 kudos

You can increase compute resources for your Streamlit Databricks app, but this requires explicitly configuring the compute size in the Databricks app management UI or via deployment configuration—environment variables like DATABRICKS_CLUSTER_ID alone...

Arunraja
by New Contributor II
  • 3334 Views
  • 1 reply
  • 0 kudos

AI BI Genie throwing internal error

For any prompt I am getting INTERNAL_ERROR: AI service did not respond with a valid answer

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

The "INTERNAL_ERROR: AI service did not respond with a valid answer" in Databricks AI/BI Genie typically means the Genie service failed to process your query, often due to one of a few common issues. This can include problems with the table existence...

turagittech
by Contributor
  • 3473 Views
  • 1 reply
  • 0 kudos

Finding all folder paths in a blob store connected via UC external connection

Hi All, I need to easily find all the paths in a blob store to find the files and load them. I have tried using the Azure Blob Storage connection in Python, and I have a solution that works, but it is very slow. I was speaking to a data engineer, and he suggest...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

The most efficient way to list all file paths in an Azure Blob Storage container from Databricks, especially when Hierarchical Namespace (HNS) is not enabled, is to use Azure SDKs targeting the blob flat namespace directly rather than filesystem prot...
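
A minimal sketch of that flat-namespace approach with the azure-storage-blob SDK (account, container, credential, and prefix are all placeholders):

    from azure.storage.blob import ContainerClient

    # Listing against the flat namespace pages through every blob in one
    # server-side enumeration, avoiding the per-"directory" round trips
    # that make filesystem-style recursion slow on non-HNS accounts.
    container = ContainerClient(
        account_url="https://<account>.blob.core.windows.net",
        container_name="<container>",
        credential="<sas-token-or-account-key>",
    )
    paths = [blob.name for blob in container.list_blobs(name_starts_with="raw/")]
    print(f"Found {len(paths)} blobs")

From there, the collected paths can be fed to a batch read or used to verify what an Auto Loader ingest should pick up.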

Sega2
by New Contributor III
  • 3634 Views
  • 2 replies
  • 1 kudos

Debugger freezes when calling spark.sql with dbx connect

I have just created a simple bundle with Databricks, and am using Databricks Connect to debug locally. This is my script: from pyspark.sql import SparkSession, DataFrame def get_taxis(spark: SparkSession) -> DataFrame: return spark.read.table("samp...

Latest Reply
mark_ott
Databricks Employee
  • 1 kudos

The issue you're experiencing—where your script freezes in VS Code when running spark.sql locally using Databricks Connect, but works correctly when deployed—can result from several common causes related to Databricks Connect configuration, networkin...
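
If the freeze is configuration-related, building the session explicitly sometimes surfaces the real error instead of hanging; a minimal sketch (host, token, and cluster ID are placeholders that would normally come from a .databrickscfg profile):

    from databricks.connect import DatabricksSession

    # An explicit remote() call fails fast on bad credentials or an
    # unreachable cluster, rather than blocking on ambient config lookup.
    spark = DatabricksSession.builder.remote(
        host="https://<workspace-url>",
        token="<personal-access-token>",
        cluster_id="<cluster-id>",
    ).getOrCreate()
    spark.sql("SELECT 1").show()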

1 More Reply
akshaym0056
by New Contributor
  • 3579 Views
  • 2 replies
  • 0 kudos

How to Define Constants at Bundle Level in Databricks Asset Bundles for Use in Notebooks?

I'm working with Databricks Asset Bundles and need to define constants at the bundle level based on the target environment. These constants will be used inside Databricks notebooks. For example, I want a constant gold_catalog to take different values ...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

Yes, you can define environment-specific constants at the bundle level in Databricks Asset Bundles and make them accessible inside Databricks notebooks, without relying on task-level parameters. This can be done using environment variables, bundle co...
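
As one illustration of the environment-variable route, the notebook side could look like the sketch below; it assumes (hypothetically) that the bundle's target config injects GOLD_CATALOG, for example through the job cluster's spark_env_vars mapped from a bundle variable:

    import os

    # Fall back to a dev default so the notebook still runs interactively,
    # outside any bundle deployment.
    gold_catalog = os.environ.get("GOLD_CATALOG", "dev_gold")
    df = spark.table(f"{gold_catalog}.sales.orders")  # hypothetical table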

1 More Reply
Databricks36
by New Contributor
  • 3510 Views
  • 1 reply
  • 0 kudos

Accessing Databricks Delta table in ADF using system-defined managed identity

I am using the Lookup activity in ADF, which reads Delta table values from Databricks. I am currently using the system-defined managed identity of the ADF to connect to the Databricks Delta table. I am unable to see my Unity Catalog database names in the look...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

You are experiencing an issue in Azure Data Factory (ADF) where the Lookup activity does not show your Unity Catalog databases in the configuration dropdown, even though connectivity from ADF to Databricks is successful and you have followed all reco...
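
A frequently missed step is that the ADF managed identity needs its own Unity Catalog privileges before anything shows up in the dropdown; a hedged example of the grants, with placeholder principal and object names:

    # Run as a catalog owner/admin; the principal is the ADF
    # system-assigned managed identity (its application ID).
    spark.sql("GRANT USE CATALOG ON CATALOG main TO `<adf-managed-identity>`")
    spark.sql("GRANT USE SCHEMA, SELECT ON SCHEMA main.sales TO `<adf-managed-identity>`")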

jordanpinder
by New Contributor
  • 3553 Views
  • 1 reply
  • 0 kudos

Native geometry Parquet support

Hi there! With the recent GeoParquet 2.0 announcements, I'm curious to understand how this impacts storing geospatial data in Databricks and Delta. For reference: the Parquet specification officially adopting geospatial guidance allowing native storage...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

GeoParquet 2.0’s formalization within the Apache Parquet specification is a significant step for native geospatial data storage across the modern data ecosystem, particularly for platforms like Databricks and Delta Lake. In summary, Delta Lake's reli...

Dave_Nithio
by Contributor II
  • 3037 Views
  • 1 reply
  • 0 kudos

Preset Partner Connect Schema Changes

When using Partner Connect to connect Serverless Databricks to my BI tool Preset, you must manually define the schema that Preset has access to. In my case, I individually selected all databases currently in my hive_metastore. The problem is, once cre...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

No, there is currently no simple, direct way to add new schema access to an existing Serverless Databricks SQL warehouse connection through Partner Connect for Preset—neither through Databricks UI, BI tool configuration, nor the Databricks service pr...

fscaravelli
by New Contributor
  • 3352 Views
  • 1 reply
  • 0 kudos

Ingest files from GCS with Auto Loader in DLT pipeline running on AWS

I have some DLT pipelines working fine ingesting files from S3. Now I'm trying to build a pipeline to ingest files from GCS using Auto Loader. I'm running Databricks on AWS. The code I have: import dlt import json from pyspark.sql.functions import col ...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

Your error is due to how Databricks on AWS is trying to access GCS: it's defaulting to using the GCP metadata server (which only exists on Google Cloud VMs), not the service account key you provided. This is a common issue when connecting GCS from no...
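
A minimal sketch of the key-based configuration (these are the standard GCS-connector options; values are placeholders, and in practice they are usually set in the cluster's Spark config rather than at runtime):

    # Authenticate with a GCP service-account key instead of the metadata
    # server, which does not exist on AWS-hosted cluster nodes.
    spark.conf.set("spark.hadoop.google.cloud.auth.service.account.enable", "true")
    spark.conf.set("spark.hadoop.fs.gs.project.id", "<gcp-project-id>")
    spark.conf.set("spark.hadoop.fs.gs.auth.service.account.email", "<sa>@<project>.iam.gserviceaccount.com")
    spark.conf.set("spark.hadoop.fs.gs.auth.service.account.private.key.id", "<private-key-id>")
    spark.conf.set("spark.hadoop.fs.gs.auth.service.account.private.key", "<private-key>")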

kulasangar
by New Contributor II
  • 3043 Views
  • 1 reply
  • 0 kudos

Permission Denied while trying to update a yaml file within a python project in Databricks

I have a Python project, and within that I have a YAML file. Currently I'm building the project using Poetry and creating an asset bundle to deploy it in Databricks as a workflow job. So when the workflow runs, I do have an __init__.py within my ent...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

The main issue is that Databricks jobs typically run in environments where the file system may be read-only or restricted—especially for files packaged within the asset bundle or inside locations like /databricks/driver, /databricks/conda, or other s...
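
A common workaround is to treat the bundled YAML as read-only input and edit a copy in a writable location; a minimal sketch (the config filename and layout are hypothetical):

    import os
    import shutil
    import tempfile

    # Files deployed with the bundle may sit on a read-only path at job
    # runtime, so copy the YAML to a temp dir and modify the copy instead.
    src = os.path.join(os.path.dirname(__file__), "config.yaml")
    dst = os.path.join(tempfile.gettempdir(), "config.yaml")
    shutil.copy(src, dst)
    # ...load dst, update values, and write back to dst from here on...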

antoniomf
by New Contributor
  • 3162 Views
  • 1 reply
  • 0 kudos

Bug Delta Live Tables - Checkpoint

Hello, I've encountered an issue with Delta Live Tables in both my Development and Production Workspaces. The data is arriving correctly in my Azure Storage Account; however, the checkpoint is being stored in the path dbfs:/. I haven't modified the St...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

There appears to be a recurring issue with Delta Live Table (DLT) pipelines in Databricks where the checkpoint is unexpectedly stored in the dbfs:/ path, rather than in the intended external storage location (such as Azure Blob Storage or ADLS). This...
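
If the pipeline was created without an explicit storage location, one hedged option is to recreate it with the location set, since DLT storage is fixed at creation time; a sketch using the Python SDK (all names and paths are placeholders):

    from databricks.sdk import WorkspaceClient
    from databricks.sdk.service.pipelines import NotebookLibrary, PipelineLibrary

    w = WorkspaceClient()
    # An explicit storage path keeps checkpoints and system data out of
    # the default dbfs:/ location.
    w.pipelines.create(
        name="my_dlt_pipeline",
        storage="abfss://<container>@<account>.dfs.core.windows.net/dlt",
        libraries=[PipelineLibrary(notebook=NotebookLibrary(path="/Repos/<user>/<repo>/pipeline"))],
    )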

chirag_nagar
by New Contributor
  • 1628 Views
  • 6 replies
  • 2 kudos

Seeking Guidance on Migrating Informatica PowerCenter Workflows to Databricks using Lakebridge

Hi everyone, I hope you're doing well. I'm currently exploring options to migrate a significant number of Informatica PowerCenter workflows and mappings to Databricks. During my research, I came across Lakebridge, especially its integration with BladeB...

Latest Reply
dinshows15
New Contributor II
  • 2 kudos

Thanks for the update @thelogicplus. I will have a look and try to connect with the Travinto team.

5 More Replies
max_eg
by New Contributor
  • 129 Views
  • 1 reply
  • 1 kudos

Resolved! Bug in Asset Bundle Sync

I think I found a bug in the way asset bundles sync/deploy, or at least I have a question about whether I understood it correctly. My setup: I have an asset bundle consisting of a notebook nb1.py and a utils module utils.py; nb1.py imports functions from utils....

Latest Reply
bianca_unifeye
New Contributor II
  • 1 kudos

Hi @max_eg, what you're seeing is expected with Asset Bundles. databricks bundle deploy computes what changed locally and only uploads those files. If you edited nb1.py in the workspace (not locally), the deploy won't "see" a local delta for that file, ...

Hsn
by New Contributor
  • 219 Views
  • 4 replies
  • 1 kudos

Suggest about data engineer

Hey, I'm Hasan Sayyed, currently pursuing SYBCA. I want to become a Data Engineer, but as a beginner, I’ve wasted some time learning other languages and technologies due to a lack of proper knowledge about this field. If someone could guide and teach...

Latest Reply
bianca_unifeye
New Contributor II
  • 1 kudos

Hi Hasan. Great to see your motivation! Here's a good way to start your journey into data engineering: Master SQL, it's the foundation of everything in data. Enroll in the Databricks Academy (free) and take the beginner courses like "Get Started with D...

3 More Replies
Sergecom
by New Contributor III
  • 249 Views
  • 2 replies
  • 1 kudos

Resolved! Migrating from on-premises HDFS to Unity Catalog - Looking for advice on on-prem options

Hi, we're currently running a Databricks installation with an on-premises HDFS file system. As we're looking to adopt Unity Catalog, we've realized that our current HDFS setup has limited support and compatibility with Unity Catalog. Our requirement: W...

Latest Reply
Sergecom
New Contributor III
  • 1 kudos

Thanks very much for your detailed response, this is really helpful. You mentioned client cases where organizations have migrated from on-premises HDFS into the Databricks Unity Catalog; I'd love to learn more about those. If possible, could you share...

1 More Reply
