Dense rank possible bug

ลukasz
New Contributor III

I have a case of deduplicating a data source over a specific business key using the dense_rank function. Currently the data source does not have any duplicates, so the function should return 1 in all cases. The issue is that dense_rank does not behave like a proper integer, even though its data type is integer:

  • When filtering on the rank function equal to 1, I get a seemingly random number of records. Most rows whose dense_rank displays as 1 get dropped
  • When filtering rank < 1.1, I get the same results as above
  • When filtering rank > 0.9, I get the expected number of rows
  • When casting the rank function to double and then filtering it equal to 1, I get the expected number of rows

 It happens on Databricks Runtime 13.1, so I am assuming Spark 3.4 has this issue. It works with no problem on runtime 12.2.
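
A stripped-down sketch of what I am doing and seeing (table and column names here are placeholders, not the real source):

  WITH ranked AS (
    SELECT t.*,
           dense_rank() OVER (PARTITION BY t.business_key
                              ORDER BY t.updated_at DESC) AS rk
    FROM some_table t
  )
  -- the source currently has no duplicate business keys, so rk should be 1 for every row
  SELECT * FROM ranked WHERE rk = 1                  -- drops most rows on 13.1
  -- WHERE rk < 1.1                                  -- same wrong result
  -- WHERE rk > 0.9                                  -- expected row count
  -- WHERE cast(rk AS double) = 1                    -- expected row count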

6 REPLIES

Lakshay
Databricks Employee

Could you share a code snippet of how you are applying the rank function?

ลukasz
New Contributor III
  SELECT * EXCEPT (d.AssessmentNo, d.UnitClassSup, d.UnitTypeSup, d.UnitCodeSup, d.ProdUnitNo, d.QuestionAnswerId, d.hash_value, d.load_date),
         dense_rank() OVER (PARTITION BY m.UnitClassSup, m.UnitTypeSup, m.UnitCodeSup, m.AssessmentYear, m.ProdUnitNo
                            ORDER BY m.UpdDtime DESC, m.AnswerUpdDate DESC, m.QuestionAnswerId DESC) AS Rk
    FROM delta.`/mnt/silver/path_main` m
          JOIN delta.`/mnt/silver/path_detail` d
            ON (m.AssessmentNo = d.AssessmentNo
            AND m.UnitClassSup = d.UnitClassSup
            AND m.UnitTypeSup = d.UnitTypeSup
            AND m.UnitCodeSup = d.UnitCodeSup
            AND m.ProdUnitNo = d.ProdUnitNo
            AND m.QuestionAnswerId = d.QuestionAnswerId)
 
This is saved as a CTE and then queried with the filter rk = 1.
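
Roughly, the consuming query has this shape (simplified here to just the main table; dedup_cte is only a placeholder name for the CTE):

  WITH dedup_cte AS (
    SELECT m.*,
           dense_rank() OVER (PARTITION BY m.UnitClassSup, m.UnitTypeSup, m.UnitCodeSup, m.AssessmentYear, m.ProdUnitNo
                              ORDER BY m.UpdDtime DESC, m.AnswerUpdDate DESC, m.QuestionAnswerId DESC) AS Rk
    FROM delta.`/mnt/silver/path_main` m
  )
  SELECT * FROM dedup_cte WHERE Rk = 1
  -- casting works around it on 13.1:
  -- SELECT * FROM dedup_cte WHERE cast(Rk AS double) = 1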

Lakshay
Databricks Employee

I tried running a dense_rank query on DBR 13.1, but I do not see this issue. Could you try a simple dense_rank query on a table, for example something like the sketch below?
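
Just a sketch; range() is used here only to generate test rows with unique ids:

  SELECT count(*)
  FROM (
    SELECT id,
           dense_rank() OVER (PARTITION BY id ORDER BY id) AS rk
    FROM range(1000)
  ) t
  WHERE rk = 1
  -- every id is unique, so every row should get rk = 1 and the count should be 1000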

saipujari_spark
Databricks Employee

Hey @ลukasz 

Thanks for reporting.

As I see it, Spark 3.4.0 introduced an improvement that looks to be the cause of this issue.

Improvement: https://issues.apache.org/jira/browse/SPARK-37099

Similar Bug: https://issues.apache.org/jira/browse/SPARK-44448

This improvement [SPARK-37099] is included as part of DBR 13.1: https://docs.databricks.com/release-notes/runtime/13.1.html

That is the reason you are seeing this in DBR 13.1.

I have verified internally that this seems to be fixed in DBR 13.1. I would request you to test it again and let us know.

 

Thanks,
Saikrishna Pujari
Sr. Spark Technical Solutions Engineer, Databricks

Hello @Saniam 

Thanks for the answer. I have just tested it, and it seems to be working fine on both 13.1 and 13.2.

On another note, can you help me understand how releases are done for Spark? The fix you mention is said to be released in 3.5, which should come in a new Databricks Runtime release.

Kind regards,

ลukasz

Hey @ลukasz it's because any fixes which are important are backported to older spark versions in DBR, that's the reason you see this fixed in DBR 13.1

Thanks,
Saikrishna Pujari
Sr. Spark Technical Solutions Engineer, Databricks
