Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

KKo
by Contributor III
  • 19786 Views
  • 3 replies
  • 2 kudos

Resolved! Union multiple DataFrames in a loop, with different schemas

Within a loop I create a few DataFrames. I can union them without issue if they share the same schema, using df_unioned = reduce(DataFrame.unionAll, df_list). Now my problem is how to union them if one of the DataFrames in df_list has a different nu...

Latest Reply
anoopunni
New Contributor II
  • 2 kudos

Hi, I have come across the same scenario. Using reduce() and unionByName we can implement the solution as below:

val lstDF: List[DataFrame] = List(df1, df2, df3, df4, df5)
val combinedDF = lstDF.reduce((df1, df2) => df1.unionByName(df2, allowMissingColumns = tru...

2 More Replies
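The Scala answer above translates directly to PySpark as reduce(lambda a, b: a.unionByName(b, allowMissingColumns=True), df_list) (Spark 3.1+). As a quick illustration of those semantics without a live Spark session, here is the same fold over plain lists of dicts — union_by_name is our stand-in for this sketch, not a Spark API:

```python
from functools import reduce

def union_by_name(rows_a, rows_b):
    """Mimic DataFrame.unionByName(..., allowMissingColumns=True) on lists of
    dicts: the result carries the union of all columns, in first-seen order,
    with missing values filled as None."""
    cols = list(dict.fromkeys(c for row in rows_a + rows_b for c in row))
    return [{c: row.get(c) for c in cols} for row in rows_a + rows_b]

# Three "DataFrames" with drifting schemas, as in the question.
df_list = [
    [{"id": 1, "name": "a"}],
    [{"id": 2, "name": "b", "age": 30}],
    [{"id": 3}],
]
combined = reduce(union_by_name, df_list)
```

In real PySpark the fold is the one-liner above; the point is that allowMissingColumns aligns columns by name and null-fills the gaps, which plain unionAll cannot do.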
VikashKumar
by New Contributor
  • 7252 Views
  • 0 replies
  • 0 kudos

Is there any way to convert Delta Sharing short-lived presigned URLs to CSV files at the client end

Hello all, I have a requirement where I need to expose the data at the client end, and they are supposed to access the data in CSV format. I am planning to use Delta Sharing integrated with Unity Catalog. As we know, according to the Delta Sharing protoc...

180122
by New Contributor II
  • 2736 Views
  • 3 replies
  • 1 kudos

Data Engineering Professional - Practice exam?

Hi, when will we get practice exams for the Data Engineering Professional certification exam? We already have them for a good number of the associate exams, and this professional exam seems more difficult than the associate ones, so ...

Latest Reply
Anonymous
Not applicable
  • 1 kudos

Hi @180122  Hope you are well. Just wanted to see if you were able to find an answer to your question and would you like to mark an answer as best? It would be really helpful for the other members too. Cheers!

2 More Replies
bradleyjamrozik
by New Contributor III
  • 4653 Views
  • 3 replies
  • 3 kudos

Resolved! Questions about Lineage and DLT

Hey there!
1. Does column lineage work across multiple catalogs and schemas?
2. Do Delta Live Tables support lineage? If yes, does it work across multiple pipelines or only within a single one?

Latest Reply
Anonymous
Not applicable
  • 3 kudos

Hi @bradleyjamrozik We haven't heard from you since the last response from @Vinay_M_R and @erigaud, and I was checking back to see if their suggestions helped you. Otherwise, if you have any solution, please share it with the community, as it can be he...

2 More Replies
YS1
by Contributor
  • 2495 Views
  • 3 replies
  • 1 kudos

Updating tables from SQL Server to Databricks

Hi, I have SQL Server tables that are the primary location where all live transactions happen, and currently I read them through PySpark as DataFrames and overwrite them every day to keep the latest copy in Databricks. The problem is it takes lon...

Latest Reply
Anonymous
Not applicable
  • 1 kudos

Hi @YS1  Hope you are well. Just wanted to see if you were able to find an answer to your question and would you like to mark an answer as best? It would be really helpful for the other members too. Cheers!

2 More Replies
samuraidjakk
by New Contributor II
  • 2244 Views
  • 2 replies
  • 1 kudos

Resolved! Lineage from Unity Catalog on GCP

We are in the process of trying to do a PoC of our pipelines using DLT. Normally we use another tool, and we have created a custom program to extract lineage. We want to try to get / display lineage using Unity Catalog. But... we are on GCP, and it see...

Latest Reply
Anonymous
Not applicable
  • 1 kudos

Hi @samuraidjakk  Just wanted to check in if you were able to resolve your issue. If yes, would you be happy to mark an answer as best? If not, please tell us so we can help you. Thanks!

1 More Replies
SRK
by Contributor III
  • 11117 Views
  • 6 replies
  • 5 kudos

Resolved! How to deploy Databricks SQL queries and SQL Alerts from lower environment to higher environment?

We are using Databricks SQL Alerts to handle one scenario. We have written the queries for it, and we have also created the SQL Alert. However, I was looking for the best way to deploy them to higher environments like Pre-Production and Production. I ...

Latest Reply
valeryuaba
New Contributor III
  • 5 kudos

Thanks!

5 More Replies
erigaud
by Honored Contributor
  • 13167 Views
  • 7 replies
  • 5 kudos

Resolved! Autoloader Excel Files

Hello, I looked at the documentation but could not find what I wanted. Is there a way to load Excel files using Auto Loader, and if yes, what options should be given to specify the format, sheet name, etc.? Thank you friends!

Latest Reply
Hemant
Valued Contributor II
  • 5 kudos

Unfortunately, Databricks Auto Loader doesn't support Excel file types for incrementally loading new files. Link: https://docs.databricks.com/ingestion/auto-loader/options.html If your Excel file contains a single sheet, there is a workaround.

6 More Replies
sumit23
by New Contributor
  • 1953 Views
  • 0 replies
  • 0 kudos

[Error] [SECRET_FUNCTION_INVALID_LOCATION]: While running secret function with create or replace

Hi, recently we made an upgrade to our Databricks warehouse, transitioning from SQL Classic to SQL Pro. However, we started encountering the following error message when attempting to execute a CREATE OR REPLACE table query with the secret functio...

BasavarajAngadi
by Contributor
  • 5206 Views
  • 4 replies
  • 1 kudos

Resolved! Question on transaction logs and versioning in Databricks

Hi experts, no doubt Databricks supports ACID properties. But when it comes to versioning, how many such versions does Databricks capture? For example: if I do any DML operations on top of Delta tables, every time I do so, it captures the tran...

Latest Reply
stefnhuy
New Contributor III
  • 1 kudos

Hey, as a data enthusiast myself, I find this topic quite intriguing. Databricks indeed does a fantastic job of supporting ACID properties, ensuring data integrity, and allowing for versioning. To address BasavarajAngadi's question, Databricks effici...

3 More Replies
hamzatazib96
by New Contributor III
  • 91499 Views
  • 21 replies
  • 12 kudos

Resolved! Read file from dbfs with pd.read_csv() using databricks-connect

Hello all, as described in the title, here's my problem:
1. I'm using databricks-connect in order to send jobs to a Databricks cluster
2. The "local" environment is an AWS EC2
3. I want to read a CSV file that is in DBFS (Databricks) with pd.read_cs...

Latest Reply
so16
New Contributor II
  • 12 kudos

Please guys, I need your help; I get the same issue even after reading all your comments. I am using Databricks Connect (version 13.1) on PyCharm and trying to load files that are on DBFS storage.

spark = DatabricksSession.builder.remote( host=host...

20 More Replies
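The recurring confusion in this thread: pd.read_csv can open DBFS files via the /dbfs FUSE mount, but only in code running on the cluster itself; that mount does not exist on a remote databricks-connect client. A tiny path helper illustrating the translation (to_fuse_path is our own name, not a Databricks API):

```python
def to_fuse_path(path: str) -> str:
    """Translate a dbfs:/ URI into the /dbfs FUSE mount path that plain
    Python file APIs (pd.read_csv, open, ...) can use *on the cluster*.
    On a remote databricks-connect client this mount is not available."""
    if path.startswith("dbfs:/"):
        return "/dbfs/" + path[len("dbfs:/"):].lstrip("/")
    return path
```

From a remote client, the usual route instead is to let Spark do the read and pull the result locally, e.g. spark.read.csv("dbfs:/FileStore/data.csv", header=True).toPandas().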
dataengineer17
by New Contributor II
  • 26178 Views
  • 5 replies
  • 3 kudos

Databricks execution failed with error state: InternalError, error message: failed to update run

I am receiving this error: Databricks execution failed with error state: InternalError, error message: failed to update run GlobalRunId(xx, RunId(yy)). This appears as an error message in Azure Data Factory when I use it to schedule a Databricks noteb...

Latest Reply
saipujari_spark
Databricks Employee
  • 3 kudos

@dataengineer17 It could be coming from the internal jobs service. If the issue persists, I would recommend creating a support ticket.

4 More Replies
charry
by New Contributor II
  • 13628 Views
  • 5 replies
  • 9 kudos

Creating a Spark DataFrame from a very large dataset

I am trying to create a DataFrame using Spark but am having some issues with the amount of data I'm using. I made a list with over 1 million entries through several API calls. The list was above the threshold for spark.rpc.message.maxSize and it was ...

Latest Reply
saipujari_spark
Databricks Employee
  • 9 kudos

Hey @charry Take a look at this KB article; it should help address the issue: https://kb.databricks.com/execution/spark-serialized-task-is-too-large

4 More Replies
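When a single Python list is too big for spark.createDataFrame (it must travel in one RPC message, bounded by spark.rpc.message.maxSize), one workaround is to batch the list, build a DataFrame per batch, and union them — or better, write the API results to files and let Spark read those. The batching half is plain Python (chunks is our own helper; the commented Spark calls are the standard API):

```python
def chunks(items, size):
    """Yield consecutive slices of `items` so that no single
    spark.createDataFrame call has to ship the whole list in one RPC."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# On a cluster (sketch, assuming `spark`, `schema`, and `big_list` exist):
# from functools import reduce
# from pyspark.sql import DataFrame
# df = reduce(DataFrame.unionAll,
#             (spark.createDataFrame(c, schema) for c in chunks(big_list, 50_000)))
```

The batch size is a tuning knob: small enough that each serialized batch stays well under the RPC limit, large enough to avoid thousands of tiny unions.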
HariharaSam
by Contributor
  • 133289 Views
  • 6 replies
  • 3 kudos

Resolved! Alter Delta table column datatype

Hi, I have a Delta table, and the table contains data; I need to alter the datatype of a particular column. For example: consider that the table name is A and the column name is Amount, with datatype Decimal(9,4). I need to alter the Amount column datatype from...

Latest Reply
saipujari_spark
Databricks Employee
  • 3 kudos

Hi @HariharaSam The following doc describes how to alter a Delta table schema: https://docs.databricks.com/delta/update-schema.html

5 More Replies
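For a decimal column, changing the type in place with ALTER TABLE has historically not been allowed in Delta; the common route is to read, cast, and overwrite with overwriteSchema. A sketch — the widen_decimal helper is our own illustration of the "same scale, larger precision" rule, and the commented Spark calls assume the table/column names from the thread:

```python
import re

def widen_decimal(type_str: str, new_precision: int) -> str:
    """Given a type like 'decimal(9,4)', return the widened type with the
    same scale, e.g. 'decimal(18,4)'. Precision may only grow: shrinking
    it (or the scale) can silently lose data."""
    m = re.fullmatch(r"decimal\((\d+),(\d+)\)", type_str.strip().lower())
    if not m:
        raise ValueError(f"not a decimal type: {type_str}")
    precision, scale = int(m.group(1)), int(m.group(2))
    if new_precision < precision:
        raise ValueError("shrinking precision can lose data")
    return f"decimal({new_precision},{scale})"

# On a cluster (sketch, not runnable here):
# from pyspark.sql.functions import col
# df = spark.read.table("A").withColumn("Amount",
#                                       col("Amount").cast(widen_decimal("decimal(9,4)", 18)))
# df.write.format("delta").mode("overwrite") \
#   .option("overwriteSchema", "true").saveAsTable("A")
```

Note the overwrite rewrites the whole table and resets its history-dependent features, so take it during a maintenance window; newer Databricks runtimes also offer a type-widening table feature that may avoid the rewrite entirely.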