Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.
Data + AI Summit 2024 - Data Engineering & Streaming

Forum Posts

dream
by Contributor
  • 7050 Views
  • 1 reply
  • 2 kudos

Comparing schemas of two dataframes

So I was comparing schemas of two different dataframes using this code: >>> df1.schema == df2.schema Out: False But the thing is, both the schemas are completely equal. When digging deeper I realized that some of the StructFields() that should have bee...

Latest Reply
Ajay-Pandey
Esteemed Contributor III
  • 2 kudos

Hi @dream, in this case you can go with dataframe.dtypes for comparing the schemas or datatypes of two dataframes. Metadata stores information about column properties.

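In PySpark, `df1.schema == df2.schema` compares full `StructField` objects, including their `metadata` dictionaries, while `df.dtypes` returns plain `(column, type)` tuples that ignore metadata. A minimal sketch of why the two comparisons can disagree, using plain Python tuples as stand-ins for `StructField` rather than a live Spark session (the field names are hypothetical):

```python
# Simplified stand-ins for StructField: (name, type, metadata).
# PySpark's schema equality compares all three; df.dtypes exposes only
# the first two.
schema1 = [("id", "bigint", {}), ("name", "string", {"comment": "pk"})]
schema2 = [("id", "bigint", {}), ("name", "string", {})]

# Full comparison (analogous to df1.schema == df2.schema) -> False,
# because the metadata dicts differ.
full_equal = schema1 == schema2

# dtypes-style comparison drops metadata -> True.
dtypes1 = [(n, t) for n, t, _ in schema1]
dtypes2 = [(n, t) for n, t, _ in schema2]
dtypes_equal = dtypes1 == dtypes2

print(full_equal, dtypes_equal)  # False True
```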
PaulStuart
by New Contributor
  • 3412 Views
  • 1 reply
  • 1 kudos

Resolved! "Can't login to databricks socket is closed" when using the VS Code extension

Hello there. I am experiencing a problem using the Databricks extension with Visual Studio Code, and I wonder if anyone else has experienced this. First, I have installed the Databricks CLI and configured some profiles using tokens. Those profiles ...

Latest Reply
nkls
New Contributor III
  • 1 kudos

I finally solved it! I had the same error code as you. Running Databricks Extension v1.1.1, VS Code 1.79 on Windows 10. I'm behind a company proxy, and the main issue was that VS Code didn't have proxy support enabled by default. Adding this to my settings....

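The reply is truncated before the actual settings, so as a hedged sketch (not the poster's exact values): enabling VS Code's proxy support in `settings.json` typically looks like the following, with the proxy address being a placeholder you would replace with your company's:

```jsonc
{
  // Placeholder address - substitute your company's proxy.
  "http.proxy": "http://proxy.example.com:8080",
  // VS Code's proxy support is not always on by default.
  "http.proxySupport": "on"
}
```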
Hubert-Dudek
by Esteemed Contributor III
  • 2493 Views
  • 1 reply
  • 1 kudos

Introducing the 'Run-If' Feature in the Databricks Jobs API for Efficient Task Failure Management

Databricks Jobs API now includes a 'run-if' feature for task creation in workflows. This upgrade enables the execution of repair jobs in scenarios where one or all tasks fail. 

Latest Reply
Benjaminfinch
New Contributor II
  • 1 kudos

Hello, Databricks Jobs API has been updated to include a 'run-if' feature for task creation in workflows. This new feature allows the system to execute repair jobs when one or more tasks fail, enhancing the robustness and reliability of workflows by ...

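The thread doesn't show the payload shape, so here is a hedged sketch of what a Jobs API 2.1 task list using `run_if` might look like. The job and task names are hypothetical; check the Jobs API documentation for the exact set of `run_if` values (such as `ALL_SUCCESS` and `AT_LEAST_ONE_FAILED`):

```python
# Hypothetical Jobs API 2.1 payload sketch: a "repair" task that runs
# only when at least one upstream task has failed.
job_payload = {
    "name": "example-pipeline",
    "tasks": [
        {"task_key": "ingest"},
        {"task_key": "transform", "depends_on": [{"task_key": "ingest"}]},
        {
            "task_key": "repair",
            "depends_on": [{"task_key": "ingest"}, {"task_key": "transform"}],
            # run_if controls when this task fires; the default is ALL_SUCCESS.
            "run_if": "AT_LEAST_ONE_FAILED",
        },
    ],
}

repair_task = next(t for t in job_payload["tasks"] if t["task_key"] == "repair")
print(repair_task["run_if"])  # AT_LEAST_ONE_FAILED
```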
piterpan
by New Contributor III
  • 6491 Views
  • 8 replies
  • 11 kudos

Resolved! _sqldf not defined on Azure job cluster v12.2

Since yesterday we have errors in notebooks that were previously working. NameError: name '_sqldf' is not defined. It was working previously. We are on Azure Databricks, using job pool Driver: Standard_D4s_v5 · Workers: Standard_D4s_v5 · 1-6 workers ·...

Data Engineering
azure
Notebook
pyspark
Latest Reply
Tharun-Kumar
Databricks Employee
  • 11 kudos

@piterpan This was a regression issue which impacted jobs where _sqldf was referenced in notebooks that weren't run interactively. Our engineering team fixed this issue yesterday. Could you check whether you are still facing the issue?

7 More Replies
marianopenn
by New Contributor III
  • 11499 Views
  • 6 replies
  • 4 kudos

Resolved! [UDF_MAX_COUNT_EXCEEDED] Exceeded query-wide UDF limit of 5 UDFs

We are using DLT to ingest data into our Unity Catalog and then, in a separate job, we are reading and manipulating this data and writing it to a table like: df.write.saveAsTable(name=target_table_path). We are getting an error which I cannot find ...

Data Engineering
data engineering
dlt
python
udf
Unity Catalog
Latest Reply
Tharun-Kumar
Databricks Employee
  • 4 kudos

@AlexPrev You can navigate to the Advanced Settings in the cluster configuration and include this config in the Spark section.

5 More Replies
Atifdatabricks
by New Contributor II
  • 1560 Views
  • 2 replies
  • 1 kudos

Suspended - Databricks Certified Associate Developer for Apache Spark

During the middle of the exam I got suspended. It said it was due to my eye movement. I had the test on the left part of my monitor and the PDF (which was provided as a testing aid for this exam) on the right side. I was just moving my eyes left and right as I was using PD...

Latest Reply
Atifdatabricks
New Contributor II
  • 1 kudos

My request number is 00353935

1 More Reply
Rishi045
by New Contributor III
  • 13292 Views
  • 11 replies
  • 0 kudos

Data getting missed while reading from azure event hub using spark streaming

Hi All, I am facing an issue of data getting missed. I am reading the data from Azure Event Hub and, after flattening the JSON data, I am storing it in a parquet file and then using another Databricks notebook to perform the merge operations on my delta ...

Data Engineering
Azure event hub
Spark streaming
Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 0 kudos

- In Event Hub, you can preview the event hub job using Azure Analytics, so please first check whether all records are there. - Please set it up in Databricks so that data is saved directly to the bronze delta table without performing any aggregation, just 1 to 1, and...

10 More Replies
ThomasVanBilsen
by New Contributor III
  • 1570 Views
  • 1 reply
  • 1 kudos

Catalog names in a DTAP scenario

Hi everyone, I'm currently in the process of migrating to Unity Catalog. I have several Azure Databricks workspaces, one for each phase of the development lifecycle (development, test, acceptance, and production). In accordance with the best practices (ht...

Data Engineering
DTAP
Unity Catalog
Latest Reply
-werners-
Esteemed Contributor III
  • 1 kudos

You could also store the environment name in a config file, e.g. in the Databricks FileStore. These config files can also be managed by CI/CD. To be honest, that's my preferred way of working lately.

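The config-file approach above can be sketched roughly as follows. The file path and keys are hypothetical, and in practice the config text would be read from a file that CI/CD overwrites per workspace:

```python
import json

def read_environment(config_text):
    """Parse a tiny JSON config that names the current environment.

    In practice config_text would come from a file under e.g.
    /dbfs/FileStore/config/env.json (the path is hypothetical), which
    CI/CD can overwrite per workspace.
    """
    cfg = json.loads(config_text)
    return cfg["environment"]

# Example config as it might be deployed to the dev workspace:
sample = '{"environment": "dev", "catalog": "dev_catalog"}'
env = read_environment(sample)
print(env)  # dev
```

Downstream notebooks can then build catalog or path names from `env` instead of hard-coding them per workspace.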
sparkstreaming
by New Contributor III
  • 5606 Views
  • 5 replies
  • 4 kudos

Resolved! Missing rows while processing records using foreachbatch in spark structured streaming from Azure Event Hub

I am new to real time scenarios and I need to create a spark structured streaming jobs in databricks. I am trying to apply some rule based validations from backend configurations on each incoming JSON message. I need to do the following actions on th...

Latest Reply
Rishi045
New Contributor III
  • 4 kudos

Were you able to find a solution? If yes, could you please share it?

4 More Replies
DipsikhaDas
by New Contributor II
  • 1366 Views
  • 1 reply
  • 1 kudos

Databricks notebook exceptions into Service Now

Hello Community members, I am looking for options to redirect exceptions raised within a Databricks notebook's exception block to ServiceNow. Is there a way the connection can be made directly from the notebook? Looking for suggestions. ...

Latest Reply
DipsikhaDas
New Contributor II
  • 1 kudos

Thank you for the solution. I will definitely try this and share with the community if it works.

adivandhya
by New Contributor III
  • 2478 Views
  • 3 replies
  • 4 kudos

configuration for Job Queueing in Terraform

When defining the databricks_job resource in Terraform, we are trying to enable the Job Queueing flag for the job. However, from the Terraform provider docs, we are not able to find any config related to queueing. Is there a different method to configure...

Latest Reply
adivandhya
New Contributor III
  • 4 kudos

I've created a Feature Request for this in Github - https://github.com/databricks/terraform-provider-databricks/issues/2531

2 More Replies
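At the time of the thread the provider had no queueing config, hence the feature request. As an assumption to verify against the current provider docs, later provider versions expose a `queue` block on `databricks_job`, roughly:

```hcl
resource "databricks_job" "example" {
  name = "example-job"

  # Hypothetical sketch: newer provider versions expose a queue block;
  # verify the exact syntax against the current provider documentation.
  queue {
    enabled = true
  }
}
```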
HasiCorp
by New Contributor II
  • 12835 Views
  • 3 replies
  • 2 kudos

Resolved! AnalysisException: [RequestId=... ErrorClass=INVALID_PARAMETER_VALUE] Missing cloud file system scheme

Hi community, I get an analysis exception when executing the following code in a notebook using a personal compute cluster. It seems to be an issue with permissions, but I am logged in with my admin account. Any help would be appreciated. USE CATALOG catalog; ...

Latest Reply
Leonardo
New Contributor III
  • 2 kudos

I was having the same issue because I was trying to set the location with the absolute path, just like you did. I solved it by creating an external location, then copying its URL and using it in the location path option.

2 More Replies
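Leonardo's fix can be sketched in SQL as follows. The location name, credential, and URL are placeholders, and the exact DDL should be verified against the Unity Catalog documentation:

```sql
-- 1. Register an external location (requires an existing storage credential).
CREATE EXTERNAL LOCATION IF NOT EXISTS my_ext_location
  URL 'abfss://mycontainer@mystorageaccount.dfs.core.windows.net/data'
  WITH (STORAGE CREDENTIAL my_storage_credential);

-- 2. Point the table at a path under that registered external location
--    instead of a raw, unregistered cloud path.
CREATE TABLE my_catalog.my_schema.my_table
  LOCATION 'abfss://mycontainer@mystorageaccount.dfs.core.windows.net/data/my_table';
```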
Oliver_Angelil
by Valued Contributor II
  • 10834 Views
  • 6 replies
  • 6 kudos

In what circumstances are both UAT/DEV and PROD environments actually necessary?

I wanted to ask this question yesterday in the Q&A session with Mohan Mathews, but didn't get around to it (@Kaniz Fatma do you know his handle here so I can tag him?). We (and most development teams) have two environments: UAT/DEV and PROD. For those that d...

Latest Reply
Anonymous
Not applicable
  • 6 kudos

Hi @Oliver Angelil Hope all is well! Just wanted to check in if you were able to resolve your issue, and would you be happy to share the solution or mark an answer as best? Else please let us know if you need more help. We'd love to hear from you. Tha...

5 More Replies
felix_counter
by New Contributor III
  • 3669 Views
  • 3 replies
  • 3 kudos

Resolved! Order of delta table after read not as expected

Dear Databricks Community, I am performing three consecutive 'append' writes to a delta table, whereas the first append creates the table. Each append consists of two rows, which are ordered by column 'id' (see example in the attached screenshot). Whe...

Latest Reply
felix_counter
New Contributor III
  • 3 kudos

Thanks a lot @Lakshay and @Tharun-Kumar for your valued contributions!

2 More Replies
DB_PROD_Molina
by New Contributor
  • 1582 Views
  • 2 replies
  • 3 kudos

Job aborted due to stage failure. Relative path in absolute URI

Hello Team, we frequently have Databricks job failures with the following message; any help would be appreciated: Job aborted due to stage failure. Relative path in absolute URI

Latest Reply
Tharun-Kumar
Databricks Employee
  • 3 kudos

@DB_PROD_Molina One of the reasons this error shows up is a file path or name containing special characters. If that is the case, could you rename your file to remove the special characters?

1 More Reply
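One way to act on that advice is to sanitize file names before writing them out. A minimal sketch (the helper name and allowed character set are assumptions, not an official Databricks utility):

```python
import re

def sanitize_filename(name):
    """Replace characters outside [A-Za-z0-9._-] with underscores.

    Hypothetical helper: one way to avoid 'Relative path in absolute URI'
    errors caused by special characters (spaces, colons, etc.) in file names.
    """
    return re.sub(r"[^A-Za-z0-9._-]", "_", name)

print(sanitize_filename("sales report 2023:v1.csv"))  # sales_report_2023_v1.csv
```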

Connect with Databricks Users in Your Area

Join a Regional User Group to connect with local Databricks users. Events will be happening in your city, and you won’t want to miss the chance to attend and share knowledge.

If there isn’t a group near you, start one and help create a community that brings people together.

Request a New Group
Labels