Decimal(32,6) datatype in Databricks - precision roundoff
02-21-2025 01:07 AM
Hello All,
I need your assistance. I recently started a migration project from Synapse Analytics to Databricks. While validating the datatypes, I came across a situation where a value is 0.033882 in the Dedicated SQL Pool but 0.033883 in Databricks. I am not applying any condition, and both columns use Decimal(32,6) as the datatype. This is happening for roughly 50% of the existing records. My assumption is that the 6th decimal digit is being rounded based on the value of the 7th decimal digit. I would like to know whether Databricks does this by itself or if my assumption is wrong. Your suggestions are really appreciated. Thank you.
02-24-2025 11:51 AM
Hi @Boyeenas,
I believe your assumption is correct. Databricks is built on Apache Spark, and Spark rounds automatically when a value carries more fractional digits than the declared scale. In your case, if the original value had a 7th decimal digit of 5 or higher, Spark’s default half-up rounding rounds the 6th decimal place up, resulting in 0.033883 instead of 0.033882.
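You can reproduce this directly with a quick PySpark sketch. The input value 0.0338825 below is a hypothetical example, since the original 7-digit source value isn't shown in your post:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("decimal-rounding-demo").getOrCreate()

# Casting to DECIMAL(32,6) reduces the scale to 6 digits using Spark's
# default half-up rounding: a 7th decimal digit of 5 or higher rounds
# the 6th digit up; a digit of 4 or lower rounds it down.
spark.sql("""
    SELECT CAST(0.0338825 AS DECIMAL(32,6)) AS rounded_up,
           CAST(0.0338824 AS DECIMAL(32,6)) AS rounded_down
""").show()
# +----------+------------+
# |rounded_up|rounded_down|
# +----------+------------+
# |  0.033883|    0.033882|
# +----------+------------+
```

In a Databricks notebook you can run the same SELECT without creating a SparkSession, since `spark` is already provided.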
Per the Spark SQL data types documentation (https://spark.apache.org/docs/3.5.3/sql-ref-datatypes.html), Spark’s DecimalType is backed by Java’s BigDecimal.
Additional References:
https://spark.apache.org/docs/3.5.1/api/java/org/apache/spark/sql/types/DecimalType.html
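If you want to see the same half-up rule outside Spark, Python's decimal module exposes an equivalent rounding mode. This is just an analogy to BigDecimal's RoundingMode.HALF_UP, not what Spark executes internally, and the input value is again hypothetical:

```python
from decimal import Decimal, ROUND_HALF_UP

# Hypothetical source value with 7 fractional digits.
value = Decimal("0.0338825")

# Quantize to 6 decimal places with half-up rounding, mirroring the
# behavior described above for Spark's DecimalType.
print(value.quantize(Decimal("0.000001"), rounding=ROUND_HALF_UP))  # 0.033883
```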
Hope this helps!

