Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

dj4
by New Contributor
  • 82 Views
  • 2 replies
  • 1 kudos

Azure Databricks UI consuming way too much memory & laggy

This especially happens when the notebook is large with many cells. Even if I clear all the outputs, scrolling the notebook is way too laggy. When I start running the code, the memory consumption is 3-4 GB minimum even if I am not displaying any data/ta...

  • 82 Views
  • 2 replies
  • 1 kudos
Latest Reply
emma_s
Databricks Employee
  • 1 kudos

There are ongoing improvements being made to the Databricks notebook UI, with more to come to improve performance. However, you may want to consider breaking the notebooks down into smaller components, as browser memor...

  • 1 kudos
1 More Replies
bek04
by Visitor
  • 52 Views
  • 2 replies
  • 0 kudos

Serverless notebook DNS failure (gai error / name resolution)

I’m using a Databricks workspace on AWS (region: us-west-2). My Serverless notebook (CPU) cannot access any external URL — every outbound request fails at DNS resolution. Minimal test in a notebook:
import urllib.request
urllib.request.urlopen("https://...
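For context, a minimal sketch of the kind of connectivity check described in the post, with a placeholder host (the actual URL in the post is truncated):

# Minimal DNS / outbound connectivity check from a notebook.
# The host is a placeholder; the URL in the original post is truncated.
import socket
import urllib.request

host = "example.com"  # hypothetical target, for illustration only
try:
    print(socket.getaddrinfo(host, 443))                                  # DNS resolution only
    print(urllib.request.urlopen(f"https://{host}", timeout=10).status)   # full HTTPS request
except Exception as e:
    print(f"Outbound request failed: {e}")                                # e.g. gaierror on DNS failure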

  • 52 Views
  • 2 replies
  • 0 kudos
Latest Reply
emma_s
Databricks Employee
  • 0 kudos

Hi, here are some troubleshooting steps: 1. Network Connectivity Configuration (NCC): Confirm that the correct NCC (such as ncc_public_internet) is attached specifically to Serverless compute, not just to SQL Warehouses or other resources. After making...

  • 0 kudos
1 More Replies
confused_dev
by New Contributor II
  • 43500 Views
  • 8 replies
  • 5 kudos

Python mocking dbutils in unittests

I am trying to write some unit tests using pytest, but I am coming across the problem of how to mock my dbutils method when dbutils isn't being defined in my notebook. Is there a way to do this so that I can unit test individual functions that are uti...

  • 43500 Views
  • 8 replies
  • 5 kudos
Latest Reply
kenmyers-8451
Contributor
  • 5 kudos

If this helps anyone, here is how we do this: We rely on databricks_test for injecting dbutils into the notebooks that we're testing (which is a 3rd party package mind you and hasn't been updated in a while but still works). And in our notebooks we put...
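For readers who want to avoid the extra dependency, a minimal sketch of faking dbutils with a pytest fixture; the function under test (get_secret) and the module layout are hypothetical:

# test_secrets.py — sketch of injecting a fake dbutils with pytest and MagicMock.
# get_secret is a hypothetical function under test that accepts dbutils as an argument.
from unittest.mock import MagicMock
import pytest

def get_secret(dbutils, scope, key):
    # Passing dbutils in explicitly (instead of using the notebook global) makes it mockable.
    return dbutils.secrets.get(scope=scope, key=key)

@pytest.fixture
def fake_dbutils():
    dbutils = MagicMock()
    dbutils.secrets.get.return_value = "dummy-value"
    return dbutils

def test_get_secret(fake_dbutils):
    assert get_secret(fake_dbutils, "my-scope", "my-key") == "dummy-value"
    fake_dbutils.secrets.get.assert_called_once_with(scope="my-scope", key="my-key")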

  • 5 kudos
7 More Replies
fundat
by New Contributor III
  • 43 Views
  • 4 replies
  • 1 kudos

st_point is disabled or unsupported.

On my DLT pipeline, I installed the Databricks-mosaic library, Photon is activated, and I'm using a workspace premium tier. SELECT id, city_name, st_point(latitude, longitude) AS city_point FROM city_data; st_point is disabled or unsupported. Co...

  • 43 Views
  • 4 replies
  • 1 kudos
Latest Reply
emma_s
Databricks Employee
  • 1 kudos

DLT pipelines are still on runtime 16.4, which doesn't have support for st_point yet. See details here: https://learn.microsoft.com/en-us/azure/databricks/release-notes/dlt/ You should be able to use st_point in the normal SQL editor as long as the cluster...
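For comparison, a small sketch of running the same query from the thread outside the DLT pipeline, assuming compute whose runtime supports st_point (city_data is the table from the post):

# Run the thread's spatial query on a cluster / SQL warehouse whose runtime supports st_point;
# on older runtimes the function is reported as disabled or unsupported.
df = spark.sql("""
    SELECT id, city_name, st_point(latitude, longitude) AS city_point
    FROM city_data
""")
df.show(truncate=False)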

  • 1 kudos
3 More Replies
sher_1222
by Visitor
  • 26 Views
  • 2 replies
  • 0 kudos

Data ingestion errors

I was going to ingest data from a website to Databricks, but it is showing a "Public DBFS is not enabled" message. Is there any other way to automate data ingestion to Databricks?

  • 26 Views
  • 2 replies
  • 0 kudos
Latest Reply
emma_s
Databricks Employee
  • 0 kudos

You may need to give more info about what you're trying to do. What website are you trying to take information from and how? Are you using a notebook to pull the data? 
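One common alternative when public DBFS is disabled is to pull the file in a notebook and land it in a Unity Catalog volume; a rough sketch with a placeholder URL and paths (catalog, schema, and volume names are hypothetical):

# Sketch: download a file from a website into a Unity Catalog volume, then read it with Spark.
# The URL and the volume path are placeholders.
import urllib.request

source_url = "https://example.com/data.csv"          # hypothetical source
volume_path = "/Volumes/main/raw/landing/data.csv"   # hypothetical UC volume path

urllib.request.urlretrieve(source_url, volume_path)

df = spark.read.option("header", "true").csv(volume_path)
df.display()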

  • 0 kudos
1 More Replies
ismaelhenzel
by Contributor III
  • 96 Views
  • 1 replies
  • 0 kudos

Declarative Pipelines - Dynamic Overwrite

Regarding the limitations of declarative pipelines—specifically the inability to use replaceWhere—I discovered through testing that materialized views actually support dynamic overwrites. This handles several scenarios where replaceWhere would typica...

  • 96 Views
  • 1 replies
  • 0 kudos
Latest Reply
omsingh
New Contributor III
  • 0 kudos

This is a really interesting find, and honestly not something most people expect from materialized views. Under the hood, MVs in Databricks declarative pipelines are still Delta tables. So when you set partitionOverwriteMode=dynamic and partition by a...
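For context, a minimal sketch of the dynamic partition overwrite behavior being discussed, shown on a plain Delta table (table and column names are hypothetical, and the target table is assumed to already exist partitioned by event_date):

# Sketch: dynamic partition overwrite — only the partitions present in the incoming
# DataFrame are replaced; other partitions are left untouched.
from pyspark.sql import functions as F

incremental_df = spark.range(3).withColumn("event_date", F.lit("2024-01-01"))  # hypothetical batch

(
    incremental_df.write
    .format("delta")
    .mode("overwrite")
    .option("partitionOverwriteMode", "dynamic")
    .partitionBy("event_date")
    .saveAsTable("main.silver.events")   # hypothetical target table
)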

  • 0 kudos
Ved88
by Visitor
  • 38 Views
  • 2 replies
  • 0 kudos

Power BI VNet data gateway to Databricks using import mode

We are using a Power BI VNet data gateway with Databricks as the data source connection, using import mode. Databricks is behind a VNet. Refreshing the model works fine for 400 records, but larger volumes throw errors. I tried different ways, kind of increm...

  • 38 Views
  • 2 replies
  • 0 kudos
Latest Reply
Ved88
Visitor
  • 0 kudos

Hi @szymon_dybczak, thanks, but that is what we set when we make the Power BI Desktop model. I used this query only and made the semantic model in Power BI Desktop, then we published this into the Power BI service and ran the refresh in the web UI; there it is f...

  • 0 kudos
1 More Replies
fkseki
by New Contributor III
  • 904 Views
  • 7 replies
  • 7 kudos

Resolved! List budget policies applying filter_by

I'm trying to list budget policies using the parameter "filter_by" to filter policies that start with "aaaa", but I'm getting an error: 400 Bad Request {'error_code': 'MALFORMED_REQUEST', 'message': "Could not parse request object: Expected 'START_OB...

  • 904 Views
  • 7 replies
  • 7 kudos
Latest Reply
fkseki
New Contributor III
  • 7 kudos

Thanks for the reply, @szymon_dybczak and @lingareddy_Alva. I tried both approaches but neither was successful.
url = f'{account_url}/api/2.1/accounts/{account_id}/budget-policies'
filter_by_json = json.dumps({"policy_name": "aaaa"})
params = {"filter_by": ...
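While the expected filter_by encoding is unclear, one hedged workaround sketch is to list the policies without the parameter and filter client-side (account_url, account_id, and token are assumed to be defined as in the snippet above; the "policies" response field name is an assumption):

# Sketch: list budget policies and filter by name prefix on the client side,
# avoiding the malformed filter_by parameter entirely.
import requests

url = f"{account_url}/api/2.1/accounts/{account_id}/budget-policies"
resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()

policies = resp.json().get("policies", [])   # response field name is an assumption
matching = [p for p in policies if p.get("policy_name", "").startswith("aaaa")]
print(matching)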

  • 7 kudos
6 More Replies
ss_data_eng
by New Contributor
  • 1392 Views
  • 4 replies
  • 0 kudos

Using Lakehouse Federation for SQL Server with Serverless Compute

Hi, My team was able to create a Foreign Catalog that connects to a SQL Server instance hosted on an Azure VM; however, when trying to query the catalog, we cannot access it using serverless compute (or a serverless SQL warehouse). We have tried lookin...

  • 1392 Views
  • 4 replies
  • 0 kudos
Latest Reply
Ralf
New Contributor II
  • 0 kudos

I'm trying to get something similar to work: Lakehouse Federation for Oracle with SQL warehouse serverless. We are using Azure Databricks and our Oracle DB runs on-prem. I've been able to use classic compute to query the database, but now I'd like to...

  • 0 kudos
3 More Replies
orcation
by New Contributor III
  • 1792 Views
  • 3 replies
  • 4 kudos

Resolved! Why Does Azure Databricks Consume So Much Memory When Running in the Background?

I had two Azure Databricks pages open in my browser without performing any computations. When I returned from lunch, I noticed that they were occupying about 80% of the memory in the task manager. What happened? This issue never occurred in the past,...

Snipaste_2025-09-09_13-50-09.png
  • 1792 Views
  • 3 replies
  • 4 kudos
Latest Reply
dj4
New Contributor
  • 4 kudos

@szymon_dybczak This issue still exists and is getting worse. Even a laptop with 32 GB of memory and an Ultra 7 processor cannot seem to handle it if there are many cells in the notebook. Do you know when it'll be fixed?

  • 4 kudos
2 More Replies
siva_pusarla
by New Contributor
  • 97 Views
  • 3 replies
  • 0 kudos

workspace notebook path not recognized by dbutils.notebook.run() when running from a workflow/job

result = dbutils.notebook.run("/Workspace/YourFolder/NotebookA", timeout_seconds=600, arguments={"param1": "value1"})
print(result)
I was able to execute the above code manually from a notebook. But when I run the same notebook as a job, it fails stat...

  • 97 Views
  • 3 replies
  • 0 kudos
Latest Reply
Poorva21
New Contributor II
  • 0 kudos

@siva_pusarla, try to convert env_setup into repo-based code and control behavior via the environment. Instead of a workspace notebook, use a Python module in the repo and drive environment differences using: job parameters, branches (dev / test / prod), secre...
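A rough sketch of that pattern, assuming a hypothetical env_setup module in the repo and an env job parameter (the get_config helper is made up for illustration):

# Sketch: drive environment-specific setup from a repo-based module instead of a workspace notebook.
# The module name env_setup, the parameter name env, and get_config are hypothetical.
import importlib

env = dbutils.widgets.get("env")                    # e.g. "dev", "test", "prod", passed as a job parameter

env_setup = importlib.import_module("env_setup")    # module living in the repo alongside this notebook
config = env_setup.get_config(env)                  # hypothetical helper returning per-environment settings

print(f"Running with environment: {env}", config)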

  • 0 kudos
2 More Replies
Joost1024
by New Contributor
  • 445 Views
  • 6 replies
  • 3 kudos

Read Array of Arrays of Objects JSON file using Spark

Hi Databricks Community! This is my first post in this forum, so I hope you can forgive me if it's not according to the forum best practices. After lots of searching, I decided to share the peculiar issue I'm running into in this community. I try to lo...

  • 445 Views
  • 6 replies
  • 3 kudos
Latest Reply
Joost1024
New Contributor
  • 3 kudos

I guess I was a bit over-enthusiastic in accepting the answer. When I run the following on the single object array of arrays (as shown in the original post), I get a single row with column "value" and value null. from pyspark.sql import functions as F,...
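For reference, a hedged sketch of one way to read a JSON file whose top level is an array of arrays of objects, using an explicit schema plus two explodes (field names and the path are made up, since the original file isn't shown):

# Sketch: parse a JSON document whose top level is an array of arrays of objects.
# Field names (id, name) and the file path are hypothetical.
from pyspark.sql import functions as F
from pyspark.sql.types import ArrayType, StringType, StructField, StructType

inner = StructType([
    StructField("id", StringType()),
    StructField("name", StringType()),
])
schema = ArrayType(ArrayType(inner))

raw = spark.read.text("/Volumes/main/raw/landing/nested.json", wholetext=True)
parsed = raw.select(F.from_json("value", schema).alias("outer"))

flat = (
    parsed
    .select(F.explode("outer").alias("inner_array"))   # one row per inner array
    .select(F.explode("inner_array").alias("obj"))     # one row per object
    .select("obj.*")
)
flat.show()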

  • 3 kudos
5 More Replies
ndw
by New Contributor II
  • 69 Views
  • 1 replies
  • 1 kudos

Azure Databricks Streamlit app Unity Catalog access

Hi all, I am developing a Databricks app. I will use Databricks Asset Bundles for deployment. How can I connect a Databricks Streamlit app to Databricks Unity Catalog? Where should I define the credentials? (Databricks host for dev, qa and prod environme...

  • 69 Views
  • 1 replies
  • 1 kudos
Latest Reply
emma_s
Databricks Employee
  • 1 kudos

Hi, as a starter you may want to try deploying the Streamlit starter app from the app UI; this will show you the pattern to connect and pull data into your Streamlit app. The following then gives some best practice guidelines on your questions: 1. U...
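As a complementary sketch, one commonly used pattern for reading a Unity Catalog table from a Streamlit app goes through the Databricks SQL connector; the environment variable name for the warehouse HTTP path and the table name below are assumptions:

# Sketch: query a Unity Catalog table from a Streamlit app via the Databricks SQL connector.
# Config() resolves host and credentials from the app (or local) environment;
# the env var name and the table name are hypothetical.
import os
import streamlit as st
from databricks import sql
from databricks.sdk.core import Config

cfg = Config()
http_path = os.getenv("DATABRICKS_WAREHOUSE_HTTP_PATH")   # hypothetical env var set per environment

with sql.connect(
    server_hostname=cfg.host,
    http_path=http_path,
    credentials_provider=lambda: cfg.authenticate,
) as conn:
    with conn.cursor() as cursor:
        cursor.execute("SELECT * FROM main.demo.my_table LIMIT 10")   # hypothetical UC table
        st.dataframe(cursor.fetchall_arrow().to_pandas())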

  • 1 kudos
liquibricks
by New Contributor III
  • 78 Views
  • 3 replies
  • 3 kudos

Resolved! Comments not updating on a SDP streaming table

We have a pipeline in a job which dynamically creates a set of streaming tables based on a list of Kafka topics like this:
# inside a loop
@DP.table(name=table_name, comment=markdown_info)
def topic_flow(topic_name=topic_name):
    ...
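For readers following along, a fuller hedged sketch of that dynamic-table loop using the dlt decorator API (the post's DP alias may come from a different import; topic names, broker address, and comments are placeholders):

# Sketch: create one streaming table per Kafka topic inside a pipeline definition.
# Topic list, bootstrap servers, and table comments are hypothetical.
import dlt
from pyspark.sql import functions as F

topics = ["orders", "payments", "shipments"]   # hypothetical topic list

def make_topic_flow(topic_name, table_name, markdown_info):
    @dlt.table(name=table_name, comment=markdown_info)
    def topic_flow():
        return (
            spark.readStream.format("kafka")
            .option("kafka.bootstrap.servers", "broker:9092")   # placeholder
            .option("subscribe", topic_name)
            .load()
            .select(F.col("key").cast("string"), F.col("value").cast("string"))
        )
    return topic_flow

for topic in topics:
    make_topic_flow(topic, f"bronze_{topic}", f"Raw events from the {topic} topic")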

  • 78 Views
  • 3 replies
  • 3 kudos
Latest Reply
liquibricks
New Contributor III
  • 3 kudos

Ah, my code is correct. There was just a mistake further up when producing the comments that led me down the wrong path. Comments (and metadata) are correctly updated as expected!

  • 3 kudos
2 More Replies
Neeraj_432
by New Contributor
  • 106 Views
  • 3 replies
  • 1 kudos

Resolved! Loading data from a dataframe to a Spark SQL table using the .saveAsTable() option is not working.

Hi, I am loading dataframe data into a Spark SQL table using the .saveAsTable() option. The schema is matching, but the column names are different in the SQL table. Is it necessary to maintain the same column names in source and target? How to handle it in real time...

  • 106 Views
  • 3 replies
  • 1 kudos
Latest Reply
iyashk-DB
Databricks Employee
  • 1 kudos

If your pipeline is mostly PySpark/Scala, rename columns in the DataFrame to match the target and use df.write.saveAsTable. If your pipeline is mostly SQL (e.g., on SQL Warehouses), use INSERT … BY NAME from a temp view (or table). Performance is broa...
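A short sketch of both options, with hypothetical table and column names:

# Sketch: align DataFrame column names with the target before saveAsTable, or let SQL
# match columns by name with INSERT ... BY NAME. All names below are hypothetical.
from pyspark.sql import Row

df = spark.createDataFrame([Row(cust_id=1, cust_name="Ada")])   # source columns differ from target

# Option 1: rename in the DataFrame, then write.
renamed_df = (
    df.withColumnRenamed("cust_id", "customer_id")
      .withColumnRenamed("cust_name", "customer_name")
)
renamed_df.write.mode("append").saveAsTable("main.sales.customers")

# Option 2: register a temp view and insert by name, so column order no longer matters.
renamed_df.createOrReplaceTempView("customers_stage")
spark.sql("INSERT INTO main.sales.customers BY NAME SELECT * FROM customers_stage")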

  • 1 kudos
2 More Replies
