Dense rank possible bug

Łukasz
New Contributor III

I have a case of deduplicating a data source over a specific business key using the dense_rank function. Currently the data source does not contain any duplicates, so the function should return 1 in all cases. The issue is that dense_rank does not return a proper integer, even though the data type is integer:

  • When filtering on the rank function equal to 1, I get a seemingly random number of records; most rows whose dense_rank displays as 1 are dropped
  • When filtering rank < 1.1, I get the same results as above
  • When filtering rank > 0.9, I get the expected number of rows
  • When casting the rank function to double and then filtering it as equal to 1, I get the expected number of rows

It happens on Databricks Runtime 13.1, so I am assuming Spark 3.4 has this issue. It works with no problem on Runtime 12.2.
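A minimal sketch of the pattern that misbehaves (source_table, business_key, and load_date are simplified names for illustration):

  WITH ranked AS (
    SELECT *,
           dense_rank() OVER (PARTITION BY business_key ORDER BY load_date DESC) AS rk
    FROM source_table
  )
  SELECT * FROM ranked WHERE rk = 1  -- unexpectedly drops rows on DBR 13.1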


6 REPLIES

Lakshay
Esteemed Contributor

Could you share a code snippet of how you are applying the rank function?

Łukasz
New Contributor III
  SELECT * EXCEPT (d.AssessmentNo, d.UnitClassSup, d.UnitTypeSup, d.UnitCodeSup, d.ProdUnitNo, d.QuestionAnswerId, d.hash_value, d.load_date),
         dense_rank() OVER (PARTITION BY m.UnitClassSup, m.UnitTypeSup, m.UnitCodeSup, m.AssessmentYear, m.ProdUnitNo
                            ORDER BY m.UpdDtime DESC, m.AnswerUpdDate DESC, m.QuestionAnswerId DESC) AS Rk
    FROM delta.`/mnt/silver/path_main` m
         JOIN delta.`/mnt/silver/path_detail` d
           ON (m.AssessmentNo = d.AssessmentNo
           AND m.UnitClassSup = d.UnitClassSup
           AND m.UnitTypeSup = d.UnitTypeSup
           AND m.UnitCodeSup = d.UnitCodeSup
           AND m.ProdUnitNo = d.ProdUnitNo
           AND m.QuestionAnswerId = d.QuestionAnswerId)

This is saved as a CTE and then queried with the filter Rk = 1, roughly as in the sketch below.
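A sketch of the consuming query (the name cte is illustrative, and the join is dropped for brevity):

  WITH cte AS (
    SELECT m.*,  -- the real query projects SELECT * EXCEPT (...) over the join
           dense_rank() OVER (PARTITION BY m.UnitClassSup, m.UnitTypeSup, m.UnitCodeSup,
                                           m.AssessmentYear, m.ProdUnitNo
                              ORDER BY m.UpdDtime DESC, m.AnswerUpdDate DESC, m.QuestionAnswerId DESC) AS Rk
    FROM delta.`/mnt/silver/path_main` m
  )
  SELECT * FROM cte WHERE Rk = 1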

Lakshay
Esteemed Contributor

I tried running a dense_rank query on DBR 13.1, but I do not see this issue. Could you try a simple dense_rank query on a table?
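For example, something along these lines (my_table, grp, and ts are hypothetical names):

  -- minimal check: any table with a grouping column and an ordering column
  WITH ranked AS (
    SELECT *,
           dense_rank() OVER (PARTITION BY grp ORDER BY ts DESC) AS rk
    FROM my_table
  )
  SELECT count(*) FROM ranked WHERE rk = 1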

saipujari_spark
Valued Contributor
ACCEPTED SOLUTION

Hey @Łukasz 

Thanks for reporting.

As I see it, Spark 3.4.0 introduced an improvement that looks to be the cause of this issue.

Improvement: https://issues.apache.org/jira/browse/SPARK-37099

Similar Bug: https://issues.apache.org/jira/browse/SPARK-44448

This improvement [SPARK-37099] is included as part of DBR 13.1: https://docs.databricks.com/release-notes/runtime/13.1.html

That is the reason you are seeing this in DBR 13.1.

That said, the fix for this bug appears to have been backported: as I have verified internally, it seems to be fixed in DBR 13.1 now. I would request you to test it again and let us know.
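For context, a sketch of the filter shape that the SPARK-37099 group-limit optimization targets, and the cast workaround you observed (src, grp, and ts are hypothetical names):

  -- rank-based filter shape rewritten by the SPARK-37099 optimization,
  -- i.e. the shape affected by SPARK-44448:
  WITH t AS (
    SELECT *,
           dense_rank() OVER (PARTITION BY grp ORDER BY ts DESC) AS rk
    FROM src
  )
  SELECT * FROM t WHERE rk = 1
  -- workaround you reported: filtering on CAST(rk AS DOUBLE) = 1 instead,
  -- which presumably keeps the optimizer from matching the rank-filter pattern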

 

Thanks,
Saikrishna Pujari
Sr. Spark Technical Solutions Engineer, Databricks

Łukasz
New Contributor III

Hello @saipujari_spark

Thanks for the answer. I have just tested it, and it seems to work fine on both 13.1 and 13.2.

On another note, can you help me understand how releases are done for Spark? The fix you mention is said to be released in Spark 3.5, which should come in a new Databricks Runtime release.

Kind regards,

Łukasz

saipujari_spark
Valued Contributor

Hey @Łukasz, it's because important fixes are backported to the older Spark versions shipped in DBR. That is the reason you see this fixed in DBR 13.1.

Thanks,
Saikrishna Pujari
Sr. Spark Technical Solutions Engineer, Databricks