Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

shubham_007
by Contributor II
  • 40 Views
  • 3 replies
  • 0 kudos

Need guidance and reference links on the topics below

Dear experts, I need urgent guidance, with reference links, on the topics below: steps for package installation with serverless compute in Databricks; what the Delta Lake connector with serverless is; and how to run Delta Lake queries outside...

Latest Reply
brockb
Databricks Employee
  • 0 kudos

Hi @shubham_007, thanks for the question. You can install libraries on serverless compute using an "Environment". This also gives you the option to capture the environment specification in a YAML file. More detail can be found here: https://docs...
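As an illustrative sketch of what such a YAML environment specification might look like (the key names and the wheel path below are assumptions, not taken from the truncated docs link; verify them against the documentation):

```yaml
# Illustrative serverless environment spec -- confirm key names against the docs
client: "1"
dependencies:
  - numpy==1.26.4
  - /Volumes/my_catalog/my_schema/my_volume/my_wheel-0.1-py3-none-any.whl
```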

maddan80
by New Contributor
  • 5 Views
  • 0 replies
  • 0 kudos

History load from Source and

Hi, as part of our requirement we want to load a large volume of historical data from the source system into the Databricks Bronze layer and then process it through to Gold. We want to use batch reads and writes so that the historical load completes first, and then for the delta o...

MAHANK
by Visitor
  • 42 Views
  • 2 replies
  • 0 kudos

How to compare two Databricks notebooks in different folders? Note: we don't have Git set up

We would like to compare two notebooks that are in different folders; we have not yet set up a Git repo for them. What other options do we have to compare two notebooks? Thanks, Nanda

Latest Reply
arekmust
New Contributor II
  • 0 kudos

Hi @MAHANK, if Repos are not an option for you at the moment, you can use Visual Studio Code to compare two files. It's pretty straightforward: after installing VS Code, download both files to your local environment, open the...
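Without VS Code, a quick sketch of the same idea in plain Python with the standard-library difflib (filenames below are placeholders for two locally exported notebook source files):

```python
import difflib
from pathlib import Path

# Placeholder local copies of the two notebooks, exported as source files.
a = Path("notebook_a.py")
b = Path("notebook_b.py")
a.write_text("print('hello')\nx = 1\n")
b.write_text("print('hello')\nx = 2\n")

# unified_diff compares line lists and marks removals (-) and additions (+).
diff = list(difflib.unified_diff(
    a.read_text().splitlines(keepends=True),
    b.read_text().splitlines(keepends=True),
    fromfile=str(a), tofile=str(b),
))
print("".join(diff))
```

The same `difflib` call also works on notebook sources fetched via the Workspace export API, so nothing has to leave the browserless CLI workflow.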

ggsmith
by New Contributor III
  • 6 Views
  • 0 replies
  • 0 kudos

Workflow SQL Task Query Showing Empty

I am trying to create a SQL task in Workflows. My query executes successfully in the SQL editor and is saved in a repo. However, when I try to execute the task, the error below is shown: Query text can not be empty: BAD_REQUEST: Query tex...

MatthewMills
by New Contributor III
  • 219 Views
  • 2 replies
  • 6 kudos

DLT Apply Changes Tables corrupt

Got a weird DLT error. Test harness uses the new(ish) 'Apply Changes from Snapshot' functionality and DLT Serverless (Current channel), in the Azure Australia East region. It had been working for several months without issue, but within the last week these DLT table...

Data Engineering
Apply Changes From Snapshot
dlt
Latest Reply
mjbobak
Contributor
  • 6 kudos

We have the same error. It does not seem to be related to the Current or Preview runtimes in the DLT settings. Region: Azure East US 2. For us, the pipeline completes successfully, but the corresponding connection to the underlying data referenced from ...

TomBrick
by New Contributor
  • 44 Views
  • 2 replies
  • 0 kudos

Linux ODBC driver Unknown error

Hi, I'm trying to debug an issue connecting to Azure Databricks from a CentOS 7 machine. Testing on my own machine only required unixODBC, the databricks-odbc driver, and the connection string, all of which worked fine. When I test from the CentOS 7 machin...

Latest Reply
AlliaKhosla
Databricks Employee
  • 0 kudos

@TomBrick Greetings! Are you trying to establish a connection from a VM on Azure? Could you enable the logging below and attach the log file here? On macOS and Linux, the odbc.ini file needs to be used to set the logging parameters at the DSN level. Linux: /...
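For reference, the Databricks ODBC driver is Simba-based, and Simba drivers conventionally take DSN-level logging keys along these lines (the key names and paths below are assumptions from the Simba convention, not from the truncated reply; confirm them in the driver's install guide):

```ini
; Illustrative DSN entry in /etc/odbc.ini -- confirm key names in the driver guide
[Databricks]
Driver=/opt/simba/spark/lib/64/libsparkodbc_sb64.so
LogLevel=6
LogPath=/tmp/databricks-odbc-logs
```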

shubham_007
by Contributor II
  • 28 Views
  • 1 replies
  • 0 kudos

Need information and reference links on the two topics below

Dear experts, I need urgent guidance, with reference links, on the topics below: steps for package installation with serverless compute in Databricks; what the Delta Lake connector with serverless is; and how to run Delta Lake queries outside...

Latest Reply
brockb
Databricks Employee
  • 0 kudos

Seems like a duplicate: https://community.databricks.com/t5/data-engineering/urgent-need-information-details-and-reference-link-on-below-two/td-p/107260

mrkure
by Visitor
  • 28 Views
  • 1 replies
  • 0 kudos

Databricks connect, set spark config

Hi, I am using Databricks Connect to compute with a Databricks cluster. I need to set some Spark configurations, namely spark.files.ignoreCorruptFiles. In my experience, setting a Spark configuration in Databricks Connect for the current session has...

Latest Reply
Walter_C
Databricks Employee
  • 0 kudos

Have you tried setting it in your code as:

from pyspark.sql import SparkSession

# Create a Spark session
spark = SparkSession.builder \
    .appName("YourAppName") \
    .config("spark.files.ignoreCorruptFiles", "true") \
    .getOrCreate()

# Yo...

rrajan
by New Contributor
  • 24 Views
  • 1 replies
  • 0 kudos

Urgent Help Needed - Databricks Notebook Failure Handling for Incremental Processing

I have created a notebook that builds three different gold-layer objects from a single silver table. All of these tables are processed incrementally. I want to develop failure handling for the case where the pipeline fails after loading...

Latest Reply
Walter_C
Databricks Employee
  • 0 kudos

To handle the scenario where your pipeline fails after loading some records into the first gold table or if one gold table loads successfully while the second fails, you can implement a failure handling mechanism that ensures already inserted records...
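One common shape for this is per-target watermarks plus idempotent upserts, sketched below in plain Python with dictionaries standing in for the silver/gold Delta tables (table names and the timestamp column are illustrative, not from the thread): each gold load reads only rows newer than its own last-committed watermark and upserts by key, so a failed or partial run can simply be re-executed.

```python
# Plain-Python sketch of per-target watermarks + idempotent upserts.
# In Databricks these would be Delta tables and MERGE INTO statements.

silver = [
    {"id": 1, "ts": 1, "value": "a"},
    {"id": 2, "ts": 2, "value": "b"},
    {"id": 3, "ts": 3, "value": "c"},
]

gold = {"gold_1": {}, "gold_2": {}, "gold_3": {}}
watermarks = {"gold_1": 0, "gold_2": 0, "gold_3": 0}  # last ts committed per target

def load_gold(target, fail_at_ts=None):
    """Load new silver rows into one gold target; the watermark advances only on success."""
    new_rows = [r for r in silver if r["ts"] > watermarks[target]]
    for r in new_rows:
        if fail_at_ts is not None and r["ts"] == fail_at_ts:
            raise RuntimeError(f"simulated failure in {target} at ts={r['ts']}")
        gold[target][r["id"]] = r  # upsert by key -> rerun-safe
    if new_rows:
        watermarks[target] = max(r["ts"] for r in new_rows)

# gold_1 succeeds; gold_2 fails mid-load, leaving two rows written but no watermark:
load_gold("gold_1")
try:
    load_gold("gold_2", fail_at_ts=3)
except RuntimeError:
    pass

# Rerunning everything: gold_1 is a no-op, gold_2 re-upserts without duplicates.
for t in gold:
    load_gold(t)

print({t: sorted(gold[t]) for t in gold})  # every target now holds ids [1, 2, 3]
```

Because the upsert is keyed and the watermark only moves after a fully successful load, "retry the whole notebook" becomes a safe recovery strategy.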

rpshgupta
by New Contributor III
  • 413 Views
  • 10 replies
  • 2 kudos

How to find the source code for the data engineering learning path?

Hi everyone, I am taking the data engineering learning path at customer-academy.databricks.com. I am not able to find any source code attached to the course. Can you please help me find it so that I can try it hands-on as well? Thanks, Rupesh

Latest Reply
ogramos
New Contributor II
  • 2 kudos

Hello folks, I also opened a ticket with Databricks Academy, and it seems that Partner Learning doesn't include the code anymore; you need a Databricks labs subscription. Quote: "Are you referring to the labs that are not available? If so, we are sorry ...

data-grassroots
by New Contributor III
  • 3955 Views
  • 7 replies
  • 1 kudos

Resolved! Ingesting Files - Same file name, modified content

We have a data feed with files whose filenames stay the same but whose contents change over time (brand_a.csv, brand_b.csv, brand_c.csv, ...). COPY INTO seems to ignore the files when they change. If we set the force flag to true and run it, we end up w...

Latest Reply
data-grassroots
New Contributor III
  • 1 kudos

Thanks for the validation, Werners! That's the path we've been heading down (copy + merge). I still have some DLT experiments planned, but, at least for this situation, copy + merge works just fine.
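As a sketch of why copy-with-force plus merge avoids the duplicate problem (plain Python standing in for COPY INTO into a staging table and a MERGE INTO on a natural key; the column names are illustrative): each re-ingest of the same filename re-reads every row, and the merge keeps exactly one, latest, row per key.

```python
# "force = true" re-reads the whole file on every ingest, so staging rows
# repeat; merging on the natural key keeps one current row per key.

target = {}

def merge(staging_rows):
    """Upsert staging rows into the target, keyed on (brand, sku)."""
    for row in staging_rows:
        target[(row["brand"], row["sku"])] = row

# First version of brand_a.csv:
merge([{"brand": "a", "sku": 1, "price": 10}])

# Same filename, modified content, re-ingested with force:
merge([{"brand": "a", "sku": 1, "price": 12},
       {"brand": "a", "sku": 2, "price": 7}])

print(target)  # one row per (brand, sku); sku 1 reflects the latest price
```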

zg
by Visitor
  • 21 Views
  • 1 replies
  • 0 kudos

Unable to Create Alert Using API

Hi all, I'm trying to create an alert using the Databricks REST API, but I keep encountering the following error: Error creating alert: 400 {"message": "Alert name cannot be empty or whitespace"}. Payload: {"alert": {"seconds_to_retrigger": 0,"display_name": "A...

Latest Reply
Alberto_Umana
Databricks Employee
  • 0 kudos

Hi @zg, Could you please share the REST API Endpoint you are making the request to?

Kayla
by Valued Contributor II
  • 416 Views
  • 7 replies
  • 5 kudos

New error: middleware.base:exception while intercepting server message

We started getting a very weird error at random from Databricks. It comes from cells that routinely work, and once it happens, it happens on every cell. It appears to include the full text of a .py file we're importing, which I've had to remo...

Latest Reply
AnshulJain
Visitor
  • 5 kudos

Getting the same issue with Runtime 16.1.

sparkplug
by New Contributor III
  • 33 Views
  • 2 replies
  • 1 kudos

Databricks logging of SQL queries to DBFS

Hi, our costs have suddenly spiked due to a lot of SQL query outputs being logged to DBFS. We haven't made any changes to enable this. How can we disable this feature?

Latest Reply
sparkplug
New Contributor III
  • 1 kudos

I don't get any output when running the following; I have the destination set to DBFS. But that was only supposed to apply to cluster logs, not to query execution outputs stored in DBFS. Any idea if this is expected behavior? spark.conf.get("sp...

peter_ticker
by New Contributor
  • 162 Views
  • 17 replies
  • 2 kudos

XML Auto Loader rescuedDataColumn Doesn't Rescue Array Fields

Hiya! I'm interested in whether anyone has a solution to the following problem. If you load XML using Auto Loader or otherwise, and set the schema such that a single value is assumed for a given XPath but the actual XML contains multiple values (i....

Latest Reply
Witold
Honored Contributor
  • 2 kudos

Let me rephrase it: you can't use Message as the rowTag, because it's the root element. rowTag implies a tag within the root element, which may occur multiple times. Check the docs on reading and writing XML files; there you'll find exa...
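The root-vs-row distinction can be illustrated with plain Python (the XML below is made up for illustration; with the Spark XML reader, rowTag would point at the repeated inner element, here Record, never at the root Message):

```python
import xml.etree.ElementTree as ET

xml = """
<Message>
  <Record><id>1</id></Record>
  <Record><id>2</id></Record>
</Message>
"""

root = ET.fromstring(xml)      # there is exactly one root element: Message
rows = root.findall("Record")  # the repeated element inside it -> one row each
print(root.tag, len(rows))     # Message 2
```

A well-formed XML document has a single root, so a rowTag equal to the root can only ever yield one "row"; the repeated child is what maps to rows.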

