Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

oleole
by Contributor
  • 8375 Views
  • 3 replies
  • 2 kudos

Resolved! Using "FOR XML PATH" in Spark SQL (SQL syntax)

I'm using Spark version 3.2.1 on Databricks (DBR 10.4 LTS), and I'm trying to convert a SQL Server query into a new query that runs on a Spark cluster using Spark SQL in SQL syntax. However, Spark SQL does not seem to support XML PATH as a functi...

Latest Reply
oleole
Contributor
  • 2 kudos

Posting the solution that I ended up using:
%sql
DROP TABLE if exists UserCountry;
CREATE TABLE if not exists UserCountry (
  UserID INT,
  Country VARCHAR(5000)
);
INSERT INTO UserCountry
SELECT L.UserID AS UserID, CONCAT_WS(',', co...

2 More Replies
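
For reference, the Spark SQL idiom that replaces SQL Server's FOR XML PATH string aggregation is collect_list plus concat_ws. A minimal PySpark sketch; since the query in the reply is truncated, the source table name here is a placeholder and assumes one row per (UserID, Country) pair:

# collect_list + concat_ws replaces SQL Server's FOR XML PATH trick
spark.sql("""
    SELECT UserID, concat_ws(',', collect_list(Country)) AS Countries
    FROM UserCountrySource   -- hypothetical source table
    GROUP BY UserID
""").show()
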
oleole
by Contributor
  • 11474 Views
  • 1 reply
  • 1 kudos

Resolved! MERGE to update a column of a table using Spark SQL

Coming from an MS SQL background, I'm trying to write a query in Spark SQL that simply updates a column value of table A (the source table) by INNER JOINing a new table B with a filter. The MS SQL query looks like this: UPDATE T SET T.OfferAmount = OSE.EndpointEve...

Latest Reply
oleole
Contributor
  • 1 kudos

Posting the answer to my question:
MERGE INTO TempOffer VIEW
USING OfferSeq OSE
ON VIEW.OfferId = OSE.OfferID AND OSE.OfferId = 1
WHEN MATCHED THEN UPDATE SET VIEW.OfferAmount = OSE.EndpointEventAmountValue;

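
An equivalent sketch that pushes the filter into the USING clause instead of the ON condition; this assumes TempOffer is a Delta table (MERGE requires one):

spark.sql("""
    MERGE INTO TempOffer v
    USING (SELECT * FROM OfferSeq WHERE OfferId = 1) OSE
    ON v.OfferId = OSE.OfferID
    WHEN MATCHED THEN UPDATE SET v.OfferAmount = OSE.EndpointEventAmountValue
""")
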
JJL
by New Contributor II
  • 14477 Views
  • 3 replies
  • 3 kudos

Resolved! Can Spark SQL perform UPDATE with INNER JOIN and LIKE with '%' + [column] + '%'?

Hi all, I came from MS SQL and just started learning more about Spark SQL. Here is one operation I'm trying to perform. In MS SQL it can be done easily, but it seems it can't in Spark. So, I want to make a simple update to the record, if the co...

Latest Reply
oleole
Contributor
  • 3 kudos

@Hubert Dudek​ Hello, I'm having the same issue with using UPDATE in Spark SQL and came across your answer. When you say "replace source_table_reference with view" in MERGE, do you mean to replace "P" with "VIEW", which looks something like the below: %sql ME...

2 More Replies
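
Since Spark SQL's UPDATE does not support joins, the usual workaround is MERGE with the LIKE predicate in the ON clause. A sketch with hypothetical table and column names, assuming the target is a Delta table:

spark.sql("""
    MERGE INTO target t
    USING source s
    ON t.id = s.id AND t.description LIKE concat('%', s.keyword, '%')
    WHEN MATCHED THEN UPDATE SET t.flag = 'matched'
""")
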
ramankr48
by Contributor II
  • 17467 Views
  • 5 replies
  • 8 kudos

Resolved! How to get the names of all tables that contain a specific column or columns in a database?

Let's say there is a database db containing 700 tables, and we need to find the names of all tables in which the column "project_id" is present. Just an example for understanding the question.

Latest Reply
Anonymous
Not applicable
  • 8 kudos

databaseName = "db" desiredColumn = "project_id" database = spark.sql(f"show tables in {databaseName} ").collect() tablenames = [] for row in database: cols = spark.table(row.tableName).columns if desiredColumn in cols: tablenames.append(row....

4 More Replies
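
A runnable variant of the loop in the reply above (whose ending is truncated); qualifying the table name with the database and skipping unreadable tables are assumptions added here:

databaseName = "db"
desiredColumn = "project_id"
tablenames = []
for row in spark.sql(f"SHOW TABLES IN {databaseName}").collect():
    try:
        cols = spark.table(f"{databaseName}.{row.tableName}").columns
    except Exception:
        continue  # skip views/tables that cannot be read
    if desiredColumn in cols:
        tablenames.append(row.tableName)
print(tablenames)
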
sujai_sparks
by New Contributor III
  • 17444 Views
  • 14 replies
  • 15 kudos

Resolved! How to convert records in Azure Databricks delta table to a nested JSON structure?

Let's say I have a Delta table in Azure Databricks that stores the staff details (denormalized). I want to export the data in JSON format and save it as a single file on a storage location. I need help with the Databricks SQL query to group/co...

Latest Reply
NateAnth
Databricks Employee
  • 15 kudos

Glad it worked for you!!

13 More Replies
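
The usual approach is collect_list over struct columns, then a single-file JSON write. A sketch with placeholder table/column names and output path, since the actual schema was only in the post's screenshot:

from pyspark.sql import functions as F

staff = spark.table("staff_details")  # hypothetical Delta table
nested = (staff
    .groupBy("department")
    .agg(F.collect_list(F.struct("staff_id", "staff_name")).alias("staff")))
# coalesce(1) forces a single part file; suitable for small exports only
nested.coalesce(1).write.mode("overwrite").json("/tmp/staff_json")
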
EmilioGC
by New Contributor III
  • 5523 Views
  • 5 replies
  • 7 kudos

Resolved! Why was SQL formatting removed inside spark.sql functions? Now it looks like a plain string.

Previously we were able to see SQL queries inside spark.sql() highlighted like this: But now it just looks like a plain string. I know it's not a big issue, but it's still annoying to have to code in SQL while having it all be blue; it makes debugging more cumber...

Latest Reply
jose_gonzalez
Databricks Employee
  • 7 kudos

Hi @Emilio Garza​, just a friendly follow-up. Did any of the responses help you to resolve your question? If so, please mark it as best. Otherwise, please let us know if you still need help.

4 More Replies
Twilight
by New Contributor III
  • 3193 Views
  • 2 replies
  • 0 kudos

How to make backreferences in regexp_replace repl string work correctly in Databricks SQL?

Both of these work in Spark SQL: regexp_replace('1234567890abc', '^(?<one>\\w)(?<two>\\w)(?<three>\\w)', '$1') and regexp_replace('1234567890abc', '^(?<one>\\w)(?<two>\\w)(?<three>\\w)', '${one}'). However, neither works in Databricks SQL. I found that this ...

Latest Reply
User16764241763
Honored Contributor
  • 0 kudos

Hello @Stephen Wilcoxon​ Could you please share the expected output in Spark SQL?

1 More Reply
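
For comparison, the Spark SQL behavior the poster describes, reproduced from a notebook; Spark's regexp_replace follows Java's Matcher.replaceAll semantics, so $1 (numbered) and ${name} (named) backreferences both work there:

# Both columns should come back as '14567890abc' on a Spark cluster
spark.sql(r"""
    SELECT
      regexp_replace('1234567890abc', '^(?<one>\\w)(?<two>\\w)(?<three>\\w)', '$1')     AS numbered,
      regexp_replace('1234567890abc', '^(?<one>\\w)(?<two>\\w)(?<three>\\w)', '${one}') AS named
""").show()
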
lambarc
by New Contributor II
  • 13372 Views
  • 7 replies
  • 13 kudos

How to read file in pyspark with “]|[” delimiter

The data looks like this:
pageId]|[page]|[Position]|[sysId]|[carId
0005]|[bmw]|[south]|[AD6]|[OP4
There are at least 50 columns and millions of rows. I tried the code below to read it: dff = sqlContext.read.format("com.databricks.spark.csv").option...

Latest Reply
rohit199912
New Contributor II
  • 13 kudos

You might also try the below options. 1) Use a different file format: you can try a file format that supports multi-character delimiters, such as JSON. 2) Use a custom Row class: you can write a custom Row class to parse the multi-...

6 More Replies
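
A sketch of the read-as-text-and-split workaround; the path is a placeholder and the column list is the first five names from the sample above. On Spark 3.x, a multi-character option("sep", "]|[") on the CSV reader may also work:

from pyspark.sql import functions as F

raw = spark.read.text("/path/to/file.txt")  # hypothetical path
# split() takes a regex, so the literal "]|[" delimiter must be escaped
cols = F.split(F.col("value"), r"\]\|\[")
names = ["pageId", "page", "Position", "sysId", "carId"]  # first 5 of ~50
df = raw.select([cols.getItem(i).alias(n) for i, n in enumerate(names)])
df = df.filter(F.col("pageId") != "pageId")  # drop the header row
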
quakenbush
by Contributor
  • 4131 Views
  • 3 replies
  • 4 kudos

Resolved! Does Databricks offer something like Oracle's dblink?

I am aware I can load anything into a DataFrame using JDBC; that works well from Oracle sources. Is there an equivalent in Spark SQL, so I can combine datasets as well? Basically something like so - you get the idea... select lt.field1, rt.fie...

Latest Reply
quakenbush
Contributor
  • 4 kudos

Thanks everyone for helping.

2 More Replies
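
One common pattern is to register the JDBC DataFrame as a temp view and join it in SQL, which gets close to the dblink experience; connection details and table names below are placeholders:

remote = (spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//dbhost:1521/service")  # placeholder
    .option("dbtable", "SCHEMA.REMOTE_TABLE")
    .option("user", "user")
    .option("password", "password")
    .load())
remote.createOrReplaceTempView("rt")
spark.sql("""
    SELECT lt.field1, rt.field2
    FROM local_table lt
    JOIN rt ON lt.id = rt.id
""").show()
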
Manojkumar
by New Contributor II
  • 3566 Views
  • 4 replies
  • 0 kudos

Can we assign a default value to selected columns in Spark SQL when the column is not present?

I'm reading an Avro file and loading it into a table. The Avro data is nested. Now from this table I'm trying to extract the necessary elements using Spark SQL, using the explode function when there is array data. Now the challenge is there are cases like the ...

Latest Reply
UmaMahesh1
Honored Contributor III
  • 0 kudos

Hi @manoj kumar​, the easiest way would be to make use of unmanaged Delta tables and, while loading data into the path of that table, enable mergeSchema. This handles all the schema differences; in case a column is not present as null an...

3 More Replies
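
Another defensive pattern is to add the missing column with a default before querying; in this sketch the path, column name, and default are placeholders:

from pyspark.sql import functions as F

df = spark.read.format("avro").load("/path/to/avro")  # hypothetical path
if "optional_field" not in df.columns:
    df = df.withColumn("optional_field", F.lit(None).cast("string"))
df.createOrReplaceTempView("events")
spark.sql("SELECT coalesce(optional_field, 'n/a') AS optional_field FROM events").show()
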
aicd_de
by New Contributor III
  • 2501 Views
  • 1 reply
  • 0 kudos

Resolved! Error Using spark.catalog.dropTempView()

I have a set of Spark DataFrames that I convert into temp views to run Spark SQL with. Then, I delete them after my logic/use is complete. The delete step throws an odd error that I am not sure how to fix. Looking for some tips on fixing it. As a not...

Latest Reply
aicd_de
New Contributor III
  • 0 kudos

            spark.sql("DROP TABLE "+prefix_updates)            spark.sql("DROP TABLE "+prefix_main)Fixed it for me.

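
For reference, the standard temp-view lifecycle; dropTempView takes the bare view name and returns whether a view was found:

df.createOrReplaceTempView("my_view")  # df is any DataFrame
spark.sql("SELECT COUNT(*) FROM my_view").show()
dropped = spark.catalog.dropTempView("my_view")
print(dropped)  # True if the view existed and was dropped
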
pvm26042000
by New Contributor III
  • 980 Views
  • 1 reply
  • 3 kudos

Spark SQL & Spark ML

I am using Spark SQL to import data into a machine learning pipeline. Once the data is imported, I want to perform machine learning tasks using Spark ML. Which compute is best suited for this use case? Please help me! Thank you ...

Latest Reply
Debayan
Databricks Employee
  • 3 kudos

Hi, please refer to https://docs.databricks.com/machine-learning/index.html and let us know if this helps.

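
In practice this usually means a cluster running a Databricks ML runtime. Wiring Spark SQL output into a Spark ML pipeline looks roughly like this; table and column names are placeholders:

from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

train = spark.sql("SELECT feature1, feature2, label FROM training_table")
assembler = VectorAssembler(inputCols=["feature1", "feature2"], outputCol="features")
lr = LinearRegression(featuresCol="features", labelCol="label")
model = Pipeline(stages=[assembler, lr]).fit(train)
model.transform(train).select("label", "prediction").show()
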
CBull
by New Contributor III
  • 1841 Views
  • 3 replies
  • 2 kudos

Spark Notebook to import data into Excel

Is there a way to create a notebook that will run the SQL I put into it, populate an Excel file daily, and send it to a particular person?

Latest Reply
Meghala
Valued Contributor II
  • 2 kudos

@Aviral Bhardwaj​ thanks for this, I needed this info

2 More Replies
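
The export half is straightforward if pandas and openpyxl are available on the cluster; the query and path below are placeholders, delivery (e.g., SMTP) depends on your environment, and the notebook itself can be scheduled as a daily Databricks job:

pdf = spark.sql("SELECT * FROM my_report_table").toPandas()  # placeholder query
pdf.to_excel("/dbfs/tmp/daily_report.xlsx", index=False)     # needs openpyxl installed
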
ramankr48
by Contributor II
  • 2923 Views
  • 2 replies
  • 3 kudos

Issue with identity key column in Databricks?

For the identity key I've used both GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1) and GENERATED BY DEFAULT AS IDENTITY (START WITH 1 INCREMENT BY 1), but in both cases, if I'm running my script once then it is fine (the identity key is working as...

Latest Reply
lizou
Contributor II
  • 3 kudos

Yes, the BY DEFAULT option allows duplicated values by design. I will avoid this option and only use GENERATED ALWAYS AS IDENTITY. Using the BY DEFAULT option is worse than not using it at all; with the BY DEFAULT option, if I forget to set the starting value, the ID...

1 More Reply
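
A minimal identity-column example on a Delta table; note that GENERATED ALWAYS rejects explicit inserts into the column, and values are unique but not guaranteed to be consecutive:

spark.sql("""
    CREATE TABLE IF NOT EXISTS demo_identity (
      id BIGINT GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1),
      name STRING
    ) USING DELTA
""")
spark.sql("INSERT INTO demo_identity (name) VALUES ('a'), ('b')")
spark.sql("SELECT * FROM demo_identity").show()
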