Warehousing & Analytics

Issue while extracting values from a decimal key in JSON

data_guy
New Contributor

Hi Guys,

I have a JSON document with the structure below, where the top-level keys are decimals:
{ "5.0": { "a": "15.92", "b": 0.0, "c": "15.92", "d": "637.14" }, "0.0": { "a": "15.92", "b": 0.0, "c": "15.92", "d": "637.14" } }
schema_of_json returns the following:

STRUCT<`0.0`: STRUCT<a: STRING, b: DOUBLE, c: STRING, d: STRING>, `5.0`: STRUCT<a: STRING, b: DOUBLE, c: STRING, d: STRING>>

I am unable to extract the values of (a, b, c, d) under the 5.0 and 0.0 keys in Databricks SQL. Below are the methods I have tried; both return NULL.

SELECT
    data,
    data:`5.0` AS method1,
    get_json_object(data, '$.`5.0`') AS method2
FROM staging.consumables.test;

 

Can you guys please help?

 

1 ACCEPTED SOLUTION


raphaelblg
Databricks Employee

Hello @data_guy ,

I've reproduced your scenario and was able to select all the data successfully. Please check the screenshot below:

[Screenshot: raphaelblg_1-1716847137901.png — query output showing the selected values]
This is the source code:

import json

# Sample payload with the same decimal keys as in the question
data_values = {
    "5.0": {"a": "15.92", "b": 0.0, "c": "15.92", "d": "637.14"},
    "0.0": {"a": "15.92", "b": 0.0, "c": "15.92", "d": "637.14"}
}

# Write the JSON document to DBFS so it can be read back with spark.read.json
json_str = json.dumps(data_values)
dbutils.fs.put("dbfs:/tmp/data_temp.txt", json_str, True)

# Each top-level key becomes a struct column named `5.0` / `0.0`
df = spark.read.json("dbfs:/tmp/data_temp.txt")
df.createOrReplaceTempView("JSON_TEST")

# Field names that contain dots must be quoted with backticks
spark.sql("""
SELECT
    `5.0`.a,
    `5.0`.b,
    `5.0`.c,
    `0.0`.a,
    `0.0`.b,
    `0.0`.c
FROM JSON_TEST
"""
).display()
Best regards,

Raphael Balogo
Sr. Technical Solutions Engineer
Databricks
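
For the original staging.consumables.test table, where data is a JSON string column, the same extraction can also be sketched directly in Databricks SQL with from_json. This is a minimal sketch under those assumptions, reusing the schema that schema_of_json reported; the dotted keys stay backtick-quoted both in the DDL schema string and in the field access:

SELECT
    parsed.`5.0`.a AS a_5,
    parsed.`5.0`.b AS b_5,
    parsed.`5.0`.c AS c_5,
    parsed.`0.0`.d AS d_0
FROM (
    -- Parse the JSON string column with the schema returned by schema_of_json;
    -- field names containing dots are quoted with backticks
    SELECT from_json(
        data,
        'STRUCT<`0.0`: STRUCT<a: STRING, b: DOUBLE, c: STRING, d: STRING>, `5.0`: STRUCT<a: STRING, b: DOUBLE, c: STRING, d: STRING>>'
    ) AS parsed
    FROM staging.consumables.test
) AS src;

Once the string is parsed into a struct, the backticked field access behaves the same way as in the temp-view example above.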

