Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

abhijit007
by New Contributor
  • 2065 Views
  • 1 reply
  • 1 kudos

Resolved! Lakebridge code conversion | Permission issue

Hi,I’ve successfully installed the transpile module from Lakebridge and tried the tool to convert Informatica mappings into PySpark code. However, I’m encountering a PermissionError during execution. I’ve provided the relevant environment details and...

Data Engineering
Lakebridge
Warehouse Migration
Latest Reply
dkushari
Databricks Employee

Hi @abhijit007 - I see that this has been resolved in the 0.10.5 release. Can you please retest and confirm?

AlexSantiago
by New Contributor II
  • 14377 Views
  • 20 replies
  • 4 kudos

spotify API get token - raw_input was called, but this frontend does not support input requests.

Hello everyone, I'm trying to use Spotify's API to analyze my music data, but I'm receiving an error during authentication, specifically when I try to get the token; my code is below. Is it a Databricks bug? pip install spotipy from spotipy.oauth2 import SpotifyO...
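For context, the "raw_input was called" error typically appears when SpotifyOAuth tries to prompt interactively for a redirect URL, which notebook frontends don't support. Below is a minimal non-interactive sketch using spotipy's client-credentials flow; the client ID/secret are placeholders, and this flow cannot access user-private data such as personal listening history (for that you'd need SpotifyOAuth with a redirect completed outside the notebook).

```python
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

# Client-credentials flow: no browser redirect and no interactive prompt,
# so it works inside a notebook. Replace the placeholders with your app's values.
auth_manager = SpotifyClientCredentials(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
)
sp = spotipy.Spotify(auth_manager=auth_manager)

# Example call that needs no user authorization:
results = sp.search(q="artist:Radiohead", type="track", limit=5)
for item in results["tracks"]["items"]:
    print(item["name"])
```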

Latest Reply
Alyceveum25
New Contributor II

Thank you 

19 More Replies
raghvendrarm1
by New Contributor
  • 257 Views
  • 2 replies
  • 3 kudos

Resolved! Results from the spark application to driver

I've read many articles but am still not clear on this: the executors complete the execution of tasks and hold the results. 1. Are the results (output data) from all executors transported to the driver in all cases, or do executors persist them if tha...

Latest Reply
K_Anudeep
Databricks Employee

Hello @raghvendrarm1, below are the answers to your questions: Do executors always send "results" to the driver? No. Only actions that return values (e.g., collect, take, first, count) bring data back to the driver. collect explicitly "returns al...
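A minimal sketch of the distinction drawn here, assuming an active SparkSession `spark` and a distributed DataFrame `df` (the output path is illustrative):

```python
# Actions that RETURN values ship data from executors to the driver:
rows = df.limit(10).collect()  # these 10 rows now sit in driver memory
n = df.count()                 # only a single number returns to the driver

# A write is also an action, but the output data never funnels through the
# driver: each executor writes its own partitions directly to storage, and
# the driver only coordinates tasks and receives their statuses.
df.write.format("delta").mode("overwrite").save("/tmp/example_output")
```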

1 More Reply
Saf4Databricks
by New Contributor III
  • 418 Views
  • 3 replies
  • 2 kudos

Resolved! Cannot import pyspark.pipelines module

Question: What could be a cause of the following error in my code in a Databricks notebook, and how can we fix it? I'm using the latest Free Edition of Databricks, which has runtime version 17.2 and PySpark version 4.0.0. Error: ImportError: cannot im...

Latest Reply
dkushari
Databricks Employee

Hi @Saf4Databricks - Are you trying to use it from a standalone Databricks notebook? You should only use it from within a Lakeflow Declarative Pipeline (LDP). The link you shared is about LDP. Here is an example where I used it.
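For readers hitting the same ImportError, here is a rough sketch of how the module is meant to be used, assuming the current Lakeflow Declarative Pipelines Python API (decorator and dataset names are illustrative and may differ by runtime version). The import only resolves when the file is executed by a pipeline, not in a standalone notebook:

```python
# Pipeline source file -- valid only when run BY a Lakeflow Declarative
# Pipeline, where the `spark` session is provided by the pipeline runtime.
from pyspark import pipelines as dp
from pyspark.sql.functions import col

@dp.table(comment="Raw trips ingested by the pipeline")
def raw_trips():
    return spark.read.table("samples.nyctaxi.trips")

@dp.table(comment="Trips with a positive distance")
def valid_trips():
    return spark.read.table("raw_trips").where(col("trip_distance") > 0)
```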

2 More Replies
TalessRocha
by New Contributor II
  • 1499 Views
  • 10 replies
  • 8 kudos

Resolved! Connect to Azure Data Lake Storage using Databricks Free Edition

Hello guys, I'm using Databricks Free Edition (serverless) and I'm trying to connect to Azure Data Lake Storage. The problem I'm having is that in the Free Edition we can't configure the cluster, so I tried to make the connection via notebook using ...
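One common notebook-only approach is sketched below, assuming a service principal with access to the storage account; the account, container, and credential values are placeholders. Note that on serverless compute session-scoped storage credentials may be restricted, and Unity Catalog external locations or volumes are the recommended route.

```python
# Notebook-level ADLS Gen2 auth with a service principal (no cluster config).
account = "mystorageaccount"

spark.conf.set(f"fs.azure.account.auth.type.{account}.dfs.core.windows.net", "OAuth")
spark.conf.set(
    f"fs.azure.account.oauth.provider.type.{account}.dfs.core.windows.net",
    "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
)
spark.conf.set(f"fs.azure.account.oauth2.client.id.{account}.dfs.core.windows.net", "<client-id>")
spark.conf.set(f"fs.azure.account.oauth2.client.secret.{account}.dfs.core.windows.net", "<client-secret>")
spark.conf.set(
    f"fs.azure.account.oauth2.client.endpoint.{account}.dfs.core.windows.net",
    "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
)

df = spark.read.format("csv").load(
    f"abfss://mycontainer@{account}.dfs.core.windows.net/path/to/data.csv"
)
```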

Latest Reply
BS_THE_ANALYST
Esteemed Contributor III

@TalessRocha thanks for getting back to us! Glad to hear you got it working, that's awesome. Best of luck with your projects. All the best, BS

9 More Replies
Malthe
by Contributor II
  • 474 Views
  • 4 replies
  • 1 kudos

Resolved! Can't enable "variantType-preview" using DLTs

Using create_streaming_table and passing table properties as follows, I get an error running the pipeline for the first time: > Your table schema requires manually enablement of the following table feature(s): variantType-preview. I'm using this code: c...

Latest Reply
Malthe
Contributor II

There's a workaround available in most situations: first create the table without the VARIANT column, run the pipeline at least once, and then add the column in a subsequent refresh.
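A sketch of that sequence, assuming a dlt-style table definition where the raw JSON arrives as a string column (all table, column, and flag names here are illustrative, and parse_json requires a recent runtime):

```python
import dlt
from pyspark.sql.functions import col, parse_json

INCLUDE_VARIANT = False  # flip to True after the first successful run

@dlt.table(
    name="events",
    table_properties={"delta.feature.variantType-preview": "supported"},
)
def events():
    df = spark.readStream.table("raw_events")
    if INCLUDE_VARIANT:
        # Subsequent refresh: add the VARIANT column once the table exists.
        df = df.withColumn("payload", parse_json(col("payload_json")))
    return df
```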

3 More Replies
Upendra_Dwivedi
by Contributor
  • 2639 Views
  • 1 reply
  • 1 kudos

Resolved! Databricks APP OBO User Authorization

Hi All, we are using the on-behalf-of user authorization method for our app, and the x-forwarded-access-token is expiring after some time, so we have to redeploy our app to rectify the issue. I am not sure what the issue is or how we can keep the token aliv...

Latest Reply
jamesl
Databricks Employee

Hi @Upendra_Dwivedi , are you still facing this issue? The x-forwarded-access-token your app receives is the current user’s access token that Databricks forwards in HTTP headers for on‑behalf‑of‑user access. You should read it from the request on eac...
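A minimal sketch of that per-request pattern, assuming a Flask-based Databricks App (the endpoint is illustrative; the SDK picks up the workspace host from the app's environment):

```python
from databricks.sdk import WorkspaceClient
from flask import Flask, request

app = Flask(__name__)

@app.route("/me")
def me():
    # Read the forwarded token on EVERY request; it is short-lived and
    # refreshed by the platform, so never cache it at app startup.
    user_token = request.headers.get("x-forwarded-access-token")
    w = WorkspaceClient(token=user_token, auth_type="pat")
    return {"user": w.current_user.me().user_name}
```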

Mous92i
by New Contributor II
  • 325 Views
  • 3 replies
  • 2 kudos

Resolved! Liquid Clustering With Merge

Hello, I'm facing severe performance issues with a MERGE INTO on Databricks. merge_condition = """source.data_hierarchy = target.data_hierarchy AND source.sensor_id = target.sensor_id AND source.timestamp = target.timestamp""" The target Delt...
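For readers with a similar workload, here is a sketch of one mitigation, assuming the target is a Delta table that can be reclustered (table names are placeholders): cluster the target by the merge keys so file pruning can kick in during the match phase.

```python
# Liquid-cluster the target on the equality keys used in the MERGE condition,
# then recluster existing data before the next merge.
spark.sql("ALTER TABLE target_table CLUSTER BY (data_hierarchy, sensor_id, timestamp)")
spark.sql("OPTIMIZE target_table")

spark.sql("""
    MERGE INTO target_table AS target
    USING source_updates AS source
    ON  source.data_hierarchy = target.data_hierarchy
    AND source.sensor_id      = target.sensor_id
    AND source.timestamp      = target.timestamp
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```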

Latest Reply
Mous92i
New Contributor II

Thanks for your response

2 More Replies
databricksero
by New Contributor II
  • 579 Views
  • 8 replies
  • 4 kudos

DLT pipeline fails with “can not infer schema from empty dataset” — works fine when run manually

Hi everyone, I'm running into an issue with a Delta Live Tables (DLT) pipeline that processes a few transformation layers (raw → intermediate → primary → feature). When I trigger the entire pipeline, it fails with the following error: can not infer sche...

Latest Reply
ManojkMohan
Honored Contributor

@databricksero Explicit schema definition: when calling spark.createDataFrame(pdf_cleaned), explicitly provide the schema even if the DataFrame is empty. This helps Spark with the types and prevents the "cannot infer schema from empty dataset" erro...
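A sketch of that suggestion (the column names are placeholders for pdf_cleaned's actual columns):

```python
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

# Declare the schema up front so Spark never has to infer types from the
# (possibly empty) pandas DataFrame.
schema = StructType([
    StructField("id", StringType(), True),
    StructField("value", DoubleType(), True),
])

df = spark.createDataFrame(pdf_cleaned, schema=schema)  # works with zero rows
```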

7 More Replies
deng_dev
by New Contributor III
  • 11343 Views
  • 1 reply
  • 0 kudos

py4j.protocol.Py4JJavaError: An error occurred while calling o359.sql. : java.util.NoSuchElementExce

Hi! We are creating a table in a streaming job every micro-batch using a spark.sql('create or replace table ... using delta as ...') command. This query combines data from multiple tables. Sometimes our job fails with the error: py4j.Py4JException: An e...

Latest Reply
sahilchavan
New Contributor II

Hi @deng_dev, did you discover any way to raise this error gracefully? I'm facing the same error when running a Kinesis stream. I'm aware of what the error is, but my intent is to raise and log it gracefully.
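Not a fix for the underlying exception, but one way to surface it gracefully is to wrap the per-batch SQL in a try/except inside foreachBatch, log the context, and re-raise. A sketch, where source_df and the table names are placeholders:

```python
import logging

logger = logging.getLogger("stream")

def process_batch(batch_df, batch_id):
    try:
        batch_df.createOrReplaceTempView("batch_src")
        batch_df.sparkSession.sql(
            "CREATE OR REPLACE TABLE tgt USING delta AS SELECT * FROM batch_src"
        )
    except Exception:
        # Log the failing batch with full traceback, then let the streaming
        # query fail loudly instead of swallowing the error.
        logger.exception("Micro-batch %s failed while replacing table", batch_id)
        raise

query = source_df.writeStream.foreachBatch(process_batch).start()
```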

Bhavana_Y
by New Contributor
  • 224 Views
  • 1 reply
  • 1 kudos

Resolved! Learning Path for Spark Developer Associate

Hello everyone, I'm happy to be part of the Virtual Journey! I enrolled in the Associate Spark Developer path and completed the learning path in Databricks Academy. Can anyone please confirm whether completing the learning path is enough to obtain the 50% off voucher for certifi...

Latest Reply
Advika
Databricks Employee

Hello @Bhavana_Y! To be eligible for the incentives, you’ll need to complete one of the pathways mentioned in the Learning Festival post. Based on your screenshot, it looks like you’ve completed all four modules of LEARNING PATHWAY 7: APACHE SPARK DE...

donlxz
by New Contributor III
  • 387 Views
  • 4 replies
  • 3 kudos

Resolved! Deadlock occurs with USE statement

When issuing a query from Informatica using a Delta connection, the statement use catalog_name.schema_name is executed first. At that time, the following error appeared in the query history: Query could not be scheduled: (conn=5073499) Deadlock found w...

Latest Reply
donlxz
New Contributor III

I'll try making adjustments on the Informatica side. Thank you for your help.

3 More Replies
mikvaar
by New Contributor III
  • 1072 Views
  • 8 replies
  • 7 kudos

Resolved! DAB + DLT destroy fails due to ownership/permissions mismatch

Hi all, we are running into an issue with Databricks Asset Bundles (DAB) when trying to destroy a DLT pipeline. The setup is as follows: two separate service principals, a deployment SP used by Azure DevOps for deploying bundles, and a run_as SP used for running t...

Data Engineering
Databricks
Databricks Asset Bundles
DevOps
Latest Reply
denis-dbx
Databricks Employee

We just released https://github.com/databricks/cli/releases/tag/v0.273.0 with a mitigation for this; the error should disappear if you upgrade. Please try it and let us know how it goes. The Terraform fix is in https://github.com/databricks/terraform-provid...

7 More Replies
Dimitry
by Contributor III
  • 169 Views
  • 1 reply
  • 0 kudos

Serverless - can't parallelize UDF in applyInPandas

Hi all, Serverless V3 solved an error about mismatched Python versions between driver and worker that I had on V2 (I can't remember the exact wording), so I'd been running this on classic compute until now. Today I tried serverless, with partial success - un...

Latest Reply
Dimitry
Contributor III

I was wrong in interpreting the results. threading.get_native_id() does not work on serverless the way it does on classic compute, so different threads return the same ID. The time it takes to execute the test is clearly less than 40 seconds, so if it was running on a sin...
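A sketch of the timing-based check implied here, since thread IDs are unreliable on serverless; it assumes a DataFrame df with a grouping column "g" holding four distinct values:

```python
import time
import pandas as pd

def slow_fn(pdf: pd.DataFrame) -> pd.DataFrame:
    time.sleep(10)  # simulate 10 seconds of work per group
    return pdf

start = time.time()
df.groupBy("g").applyInPandas(slow_fn, schema=df.schema).count()
elapsed = time.time() - start

# ~10s for 4 groups suggests parallel execution; ~40s suggests serial.
print(f"elapsed: {elapsed:.1f}s")
```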

bunny1174
by New Contributor
  • 205 Views
  • 2 replies
  • 1 kudos

Spark Streaming loading only 1k to 5k rows into Delta table

Hi Team, I have 4-5 million files in S3, around 1.5 GB of data in total with 9 million records. When I try to use Auto Loader to read the data with readStream and write to a Delta table, the processing takes too much time; it is loading from 1k t...

Latest Reply
Prajapathy_NKR
New Contributor II

@bunny1174 It is a common issue that small files get created during streaming. Since you are using the Delta file format, I would suggest two solutions: 1. Try using Liquid Clustering. This auto-compacts small files into bigger chunks, mostly of 1...
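A sketch of the clustering suggestion, plus one Auto Loader option worth checking when backfilling millions of small files (table, column, and path names are placeholders):

```python
# Enable liquid clustering on the target so streaming writes get compacted
# into larger files, then compact what is already there.
spark.sql("ALTER TABLE my_delta_table CLUSTER BY (event_date)")
spark.sql("OPTIMIZE my_delta_table")

# Bound each micro-batch so the backfill makes steady, visible progress.
stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.maxFilesPerTrigger", "10000")  # tune per cluster size
    .load("s3://my-bucket/raw/")
)
```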

1 More Reply
