Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

[NUMERIC_VALUE_OUT_OF_RANGE.WITHOUT_SUGGESTION] The -12874815911431.6200000000 rounded half up from -12874815911431.6200000000 cannot be represented as Decimal(23, 10)

flashmav1
New Contributor

I am using Databricks Runtime 15.4 and getting the below error while reading from JDBC and writing to an AWS S3 location:


[NUMERIC_VALUE_OUT_OF_RANGE.WITHOUT_SUGGESTION] The -12874815911431.6200000000 rounded half up from -12874815911431.6200000000 cannot be represented as Decimal(23, 10). SQLSTATE: 22003

I have tried changing the schema to string, decimal(38,10), and DoubleType, but the error is always the same.


6 REPLIES

Hubert-Dudek
Esteemed Contributor III

I think it is related to the type of the number before the conversion to Decimal. Please check what type it is.

szymon_dybczak
Esteemed Contributor III

Hi @flashmav1,

In the first case it didn't work because your number has 14 digits before the decimal point, while DECIMAL(23,10) only allows 13. In general, when you work with Decimal you deal with two things (see the sketch after this list):

- Precision (23): the total number of digits in the number - in your case 23

- Scale (10): the number of digits after the decimal point - in your case 10

- Digits available for the integer part: 23 - 10 = 13
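
A minimal sketch of the limit (assumes a running Spark session; with spark.sql.ansi.enabled=true the first cast raises NUMERIC_VALUE_OUT_OF_RANGE, otherwise Spark silently returns NULL):

from pyspark.sql import SparkSession
from pyspark.sql.functions import col
from pyspark.sql.types import DecimalType

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("-12874815911431.62",)], ["v"])

# 14 integer digits + 10 fractional digits = 24 total, so Decimal(23, 10) overflows:
df.select(col("v").cast(DecimalType(23, 10)).alias("d23")).show(truncate=False)

# Decimal(24, 10) leaves 14 digits for the integer part and succeeds:
df.select(col("v").cast(DecimalType(24, 10)).alias("d24")).show(truncate=False)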

Regarding your second attempt: according to the docs, one reason this error can appear is when a value in the source column has an actual precision greater than 38. So you can try querying your source database - maybe you have some weird outliers in the data. See the probe sketched below.
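
A hypothetical probe (jdbcUrl, your_column, and your_table are placeholders, and the literal syntax may vary by source database); any value with 14 or more integer digits, i.e. an absolute value of at least 10^13, cannot fit into Decimal(23, 10):

outliers = (
    spark.read.format("jdbc")
    .option("url", jdbcUrl)
    .option("query", "SELECT your_column FROM your_table "
                     "WHERE your_column <= -1e13 OR your_column >= 1e13")
    .load()
)
outliers.show(truncate=False)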

You can also try pushing the cast down to your source and see if that works. Start by casting to a string:

spark.read.format("jdbc")
.option("url", jdbcUrl)
.option("query", "select CAST(your_column AS STRING) from your_table")
.load()
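
If the read succeeds, a follow-up cast on the Spark side can restore a numeric type - a sketch, assuming a wider Decimal(24, 10) fits your data:

from pyspark.sql.functions import col
from pyspark.sql.types import DecimalType

# Decimal(24, 10) leaves 14 digits for the integer part, enough for -12874815911431.62
df = df.withColumn("your_column", col("your_column").cast(DecimalType(24, 10)))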


I was working on the Databricks PySpark side because the error looked like a type issue in PySpark, but the actual issue was reading 14-integer-digit values from the source. So casting to a string at the source (in my case nvarchar) worked.

szymon_dybczak
Esteemed Contributor III

Cool, glad that it worked 😉

Hubert-Dudek
Esteemed Contributor III

(screenshot: HubertDudek_0-1761508257951.png)

I am able to read it like this but unable to write to the S3 location.
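
A minimal sketch of the write step, assuming Delta format; the bucket and path are placeholders:

# Hypothetical write of the JDBC result to S3
(
    df.write.format("delta")
    .mode("overwrite")
    .save("s3://your-bucket/your/path")
)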