Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

thecodecache
by New Contributor II
  • 5531 Views
  • 2 replies
  • 0 kudos

Transpile a SQL Script into PySpark DataFrame API equivalent code

Input SQL Script (assume any dialect) : SELECT b.se10, b.se3, b.se_aggrtr_indctr, b.key_swipe_ind FROM (SELECT se10, se3, se_aggrtr_indctr, ROW_NUMBER() OVER (PARTITION BY SE10 ...

Latest Reply
MathieuDB
Databricks Employee
  • 0 kudos

Hello @thecodecache, have a look at the SQLGlot project: https://github.com/tobymao/sqlglot?tab=readme-ov-file#faq It can easily transpile SQL to Spark SQL, like this: import sqlglot from pyspark.sql import SparkSession # Initialize Spark session spar...

1 More Reply
William_Scardua
by Valued Contributor
  • 12405 Views
  • 2 replies
  • 1 kudos

PySpark or Scala?

Hi guys, many people use PySpark to develop their pipelines. In your opinion, in which cases is it better to use one or the other? Or is it better to choose a single language? Thanks

Latest Reply
hari-prasad
Valued Contributor II
  • 1 kudos

Hi @William_Scardua, it is advisable to consider using Python (PySpark) due to Spark's comprehensive API support for Python. Furthermore, Databricks currently supports Delta Live Tables (DLT) with Python, but does not support Scala at this time. Ad...

1 More Reply
Gajju
by New Contributor
  • 603 Views
  • 1 reply
  • 0 kudos

[Deprecation Marker Required]: MERGE INTO Clause

Dear friends: Considering that MERGE INTO may generate wrong results (The APPLY CHANGES APIs: Simplify change data capture with Delta Live Tables | Databricks on AWS), may I ask why its API is still floating in the technical documentation without "Deprec...

Latest Reply
User16502773013
Databricks Employee
  • 0 kudos

Hello @Gajju, MERGE INTO is not being deprecated. APPLY CHANGES should be seen as an enhanced merge process in Delta Live Tables that handles out-of-sequence records automatically, as shown in the example in the documentation shared. The notion of wr...

milind2000
by New Contributor
  • 511 Views
  • 1 reply
  • 0 kudos

Question about Data Management for Supply-Demand Allocation

I have a scenario where I am trying to parallelize supply-demand allotment between sellers and buyers with many-to-many links. I am unsure whether I can parallelize the calculation using PySpark operations. I have two columns to keep track of in...

Latest Reply
Walter_C
Databricks Employee
  • 0 kudos

Parallelizing supply-demand allotment in PySpark can be challenging due to the need for sequential updates to supply and demand values across rows. However, it is possible to achieve this using PySpark operations, though it may require a different ap...
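One common way around row-by-row updates is to precompute running totals and derive each buyer's allocation from them. A minimal pure-Python sketch of that cumulative-sum idea (the numbers are made up); the same pattern maps onto a PySpark window such as `F.sum("demand").over(Window.partitionBy("seller").orderBy("priority"))`:

```python
# Sketch: allocate one seller's supply across buyers in priority order
# without mutating a running "remaining supply" row by row.
from itertools import accumulate

supply = 100
demands = [40, 30, 50]  # hypothetical buyers, in priority order

# Cumulative demand *before* each buyer (an exclusive running sum)
cum_before = [c - d for c, d in zip(accumulate(demands), demands)]

# Each buyer gets whatever supply is left after higher-priority buyers,
# capped at their own demand and floored at zero.
allocated = [min(d, max(0, supply - before))
             for d, before in zip(demands, cum_before)]
print(allocated)  # -> [40, 30, 30]
```

Because every row's allocation depends only on the running sum, the whole computation parallelizes across sellers with no sequential state.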

glevine
by New Contributor II
  • 1152 Views
  • 1 reply
  • 0 kudos

Resolved! DNSResolve Error while establishing JDBC connection to Azure Databricks

I am using the Databricks JDBC driver (https://databricks.com/spark/jdbc-drivers-download) to connect to Azure Databricks through a VPN. I am connecting through a SaaS low-code platform, Appian, so I don't have access to any more logs. We have set up ...

Latest Reply
Walter_C
Databricks Employee
  • 0 kudos

It seems the DNS is not able to resolve your workspace's domain name. From a browser over the VPN connection, are you able to access it?

eballinger
by Contributor
  • 2657 Views
  • 6 replies
  • 2 kudos

Resolved! DLT Pipeline Event Logs

There seems to be an issue now with our DLT pipeline event logs. I am not sure if this is a recent bug or not (they were OK in Dec), but the issue is in dev, QC, and prod, and we only have a couple of days of history logs now visible in the UI. From wha...

Latest Reply
Walter_C
Databricks Employee
  • 2 kudos

Great to hear your issue got resolved.

5 More Replies
Costas96
by New Contributor III
  • 1358 Views
  • 1 reply
  • 1 kudos

Resolved! Delta Live Tables: Add sequential column

Hello everyone, I have a DLT table (examp_table) and I want to add a sequential column whose value is incremented every time a record gets ingested. I tried to do that with the monotonically_increasing_id and Window.orderBy("a column") functions...

Latest Reply
Alberto_Umana
Databricks Employee
  • 1 kudos

Hi @Costas96, thanks for your question. You can use the identity column feature: https://www.databricks.com/blog/2022/08/08/identity-columns-to-generate-surrogate-keys-are-now-available-in-a-lakehouse-near-you.html
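The identity-column feature generally looks like this in Delta DDL (a sketch; the table and data column names are placeholders):

```sql
-- Hypothetical table: the id column is populated automatically
-- by Delta as rows are ingested.
CREATE TABLE examp_table (
  id BIGINT GENERATED ALWAYS AS IDENTITY,
  col_a STRING,
  col_b DOUBLE
);
```

Note that identity values are guaranteed unique and increasing, but not necessarily consecutive, so gaps in the sequence are expected.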

BenceCzako
by New Contributor II
  • 2625 Views
  • 5 replies
  • 0 kudos

Databricks mount bug

Hello, I have a weird problem in Databricks for which I hope you can suggest some solutions. I have an Azure ML blob storage mounted to Databricks with some folder structure that can be accessed from a notebook as /dbfs/mnt/azuremount/foo/bar/something.t...

Latest Reply
BenceCzako
New Contributor II
  • 0 kudos

Hello, can you figure out the issue?

4 More Replies
Costas96
by New Contributor III
  • 3125 Views
  • 7 replies
  • 0 kudos

Resolved! Delta Live Tables: Creating table with spark.sql and everything gets ingested at the first column

Hello everyone. I am new to DLT and I am trying to practice with it by doing some basic ingestions. I have a query like the following where I am getting data from two tables using UNION. I have noticed that everything gets ingested at the first colum...

Latest Reply
Costas96
New Contributor III
  • 0 kudos

Actually I found the solution: I used spark.readStream to read the external tables a and b into two dataframes, and then combined_df = df_a.union(df_b) to create my DLT table. Thank you!

6 More Replies
udara_zure
by New Contributor II
  • 1392 Views
  • 3 replies
  • 0 kudos

Resolved! What is the best way to deploy workflows with different notebooks to execute in different workspaces

I have a workflow in the QA workspace with one notebook attached. I need to deploy the same workflow to the PRD workspace, with all the notebooks in the Azure DevOps repo, and attach and run a different notebook in the PRD workflow.

Latest Reply
ashraf1395
Honored Contributor
  • 0 kudos

Databricks asset bundles can be a great solution for this. Clear and straightforward. https://docs.databricks.com/en/dev-tools/bundles/index.html
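A minimal databricks.yml along these lines (a sketch; the bundle name, hosts, task keys, and notebook paths are all placeholders) defines both workspaces as targets and overrides which notebook the job runs in PRD:

```yaml
bundle:
  name: my_workflow

resources:
  jobs:
    my_job:
      tasks:
        - task_key: main
          notebook_task:
            notebook_path: ../notebooks/qa_notebook.py

targets:
  qa:
    workspace:
      host: https://adb-qa-placeholder.azuredatabricks.net
  prd:
    workspace:
      host: https://adb-prd-placeholder.azuredatabricks.net
    # Per-target override: PRD runs a different notebook
    resources:
      jobs:
        my_job:
          tasks:
            - task_key: main
              notebook_task:
                notebook_path: ../notebooks/prd_notebook.py
```

Deploying with `databricks bundle deploy -t qa` or `-t prd` then creates the matching variant of the workflow in each workspace.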

2 More Replies
ashraf1395
by Honored Contributor
  • 807 Views
  • 1 reply
  • 2 kudos

Migrating data from hive metastore to unity catalog. data workflow is handled in fivetran

So in a UC migration project, we have a Fivetran connection which handles most of the ETL processes and writes data into the Hive metastore. We have migrated the schemas related to Fivetran to UC. The workspace where Fivetran was running had a default catal...

Latest Reply
saurabh18cs
Honored Contributor III
  • 2 kudos

Hi @ashraf1395, I can think of the following: Fivetran needs to be aware of the new catalog structure. This typically involves updating the destination settings in Fivetran to point to Unity Catalog. Navigate to the destination settings for your Datab...

jb1z
by Contributor
  • 1612 Views
  • 5 replies
  • 0 kudos

Resolved! Query separate data loads from python spark.readStream

I am using python spark.readStream in a Delta Live Tables pipeline to read JSON data files from an S3 folder path. Each load is a daily snapshot of a very similar set of products showing changes in price and inventory. How do I distinguish and query e...

Latest Reply
jb1z
Contributor
  • 0 kudos

The problem was fixed by this import: from pyspark.sql import functions as F, then using F.lit() instead of F.col: .withColumn('ingestion_date', F.lit(folder_date)). Sorry, code formatting is not working at the moment.

4 More Replies
minhhung0507
by Valued Contributor
  • 4068 Views
  • 6 replies
  • 5 kudos

Resolved! Issue with DeltaFileNotFoundException After Vacuum and Missing Data Changes in Delta Log

Dear Databricks experts, I encountered the following error in Databricks: `com.databricks.sql.transaction.tahoe.DeltaFileNotFoundException: [DELTA_EMPTY_DIRECTORY] No file found in the directory: gs://cimb-prod-lakehouse/bronze-layer/losdb/pl_message/_...

Latest Reply
hari-prasad
Valued Contributor II
  • 5 kudos

Hi @minhhung0507,The VACUUM command on a Delta table does not delete the _delta_log folder, as this folder contains all the metadata related to the Delta table. The _delta_log folder acts as a pointer where all changes are tracked. In the event that ...
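For reference, the retention window is enforced by the VACUUM statement itself; a sketch with a placeholder table name (the default retention is 7 days, i.e. 168 hours):

```sql
-- Preview which data files would be deleted, without removing anything
VACUUM my_schema.my_table RETAIN 168 HOURS DRY RUN;

-- Remove data files older than the retention window; the _delta_log
-- directory itself is not deleted by VACUUM
VACUUM my_schema.my_table RETAIN 168 HOURS;
```

Running with a shorter retention than 168 hours requires explicitly disabling the safety check and risks breaking time travel and concurrent readers.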

5 More Replies
ErikJ
by New Contributor III
  • 6438 Views
  • 7 replies
  • 3 kudos

Errors calling databricks rest api /api/2.1/jobs/run-now with job_parameters

Hello! I have been using the Databricks REST API for running workflows using this endpoint: /api/2.1/jobs/run-now. But now I wanted to also include job_parameters in my API call. I have put job parameters inside my workflow: param1, param2, and in my...
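For reference, the POST body for /api/2.1/jobs/run-now with job-level parameters generally takes this shape (a sketch; the job_id and values are placeholders, param1/param2 are the parameter names from the post):

```json
{
  "job_id": 123456,
  "job_parameters": {
    "param1": "value1",
    "param2": "value2"
  }
}
```

The keys under job_parameters must match parameter names defined on the job itself; notebook-level parameters are passed differently (via notebook_params).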

Latest Reply
slkdfuba
New Contributor II
  • 3 kudos

I encountered a null job_id in my POST when a notebook parameter was set in the job GUI, but it runs just fine (I get a valid job_id with an active run) if I delete the notebook parameter in the job GUI. Is this documented behavior, or a bug? If it's ...

6 More Replies
diegohMoodys
by New Contributor
  • 720 Views
  • 1 reply
  • 0 kudos

JDBC RDBMS Table Overwrite Transaction Incomplete

Spark version: spark-3.4.1-bin-hadoop3. JDBC driver: mysql-connector-j-8.4.0.jar. Assumptions: have all the proper read/write permissions; dataset isn't large (~2 million records); reading flat files, writing to a database. Does not read from the database at al...

Latest Reply
Alberto_Umana
Databricks Employee
  • 0 kudos

Hi @diegohMoodys, Can you try in debug mode? spark.sparkContext.setLogLevel("DEBUG")
