Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.
Reading JSON from Databricks Workspace

Saf4Databricks
New Contributor III

I am using the second example from Databricks' official documentation here: Work with workspace files. But I'm getting the following error:

Question: What could be a cause of the error, and how can we fix it?

ERROR: Since Spark 2.3, the queries from raw JSON/CSV files are disallowed when the
referenced columns only include the internal corrupt record column
(named _corrupt_record by default)

Code:

%sql
SELECT * FROM json.`file:/Workspace/Users/myusername@outlook.com/myJsonFile_in_Workspace.json`;

JSON file in my Databricks Workspace:

{
  "header": {
    "platform": "atm",
    "version": "2.0"
  },
  "details": [
    {
      "abc": "3",
      "def": "4"
    },
    {
      "abc": "5",
      "def": "6"
    }
  ]
}

Remarks: Minifying the JSON to a single line is one possibility. The JSON used in my post is for explaining the question only. The actual JSON that I am using is quite large and complex, and in such cases Apache Spark's official documentation recommends: `For a regular multi-line JSON file, set the multiline parameter to True`, as shown in this example. But I'm not sure how to use this option when reading JSON from the Databricks Workspace, which is what my code above is doing.
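As background for why the error above mentions _corrupt_record: by default Spark's JSON source expects one complete JSON document per line (JSON Lines), so each line of a pretty-printed file fails to parse on its own and every row lands in the internal corrupt-record column. A minimal plain-Python sketch of that behavior (the `parse_line_delimited` helper is illustrative only, not a Spark API):

```python
import json

# A pretty-printed (multi-line) JSON document, like the workspace file above.
pretty = """{
  "header": {"platform": "atm", "version": "2.0"},
  "details": [{"abc": "3", "def": "4"}]
}"""

def parse_line_delimited(text):
    """Mimic a line-delimited JSON reader: parse each line independently."""
    records = []
    for line in text.splitlines():
        try:
            records.append(json.loads(line))
        except json.JSONDecodeError:
            # In Spark, a line that fails to parse goes to _corrupt_record.
            records.append(None)
    return records

# Every line of the pretty-printed file is invalid JSON on its own.
print(parse_line_delimited(pretty))  # [None, None, None, None]

# With multiline parsing, the whole file is one JSON value and parses fine.
print(json.loads(pretty)["header"]["platform"])  # atm
```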

4 Replies

szymon_dybczak
Contributor III

Hi @Saf4Databricks ,

As you said, you probably need to add the multiline option to make it work. You can use this option when creating a temporary view or via the PySpark API. Below is an example of creating a temporary view:

CREATE TEMPORARY VIEW multilineJson
USING json
OPTIONS (path="file:/Workspace/Users/myusername@outlook.com/myJsonFile_in_Workspace.json", multiline=true)

Hi @szymon_dybczak Thank you for sharing your thoughts. I probably should have elaborated a bit more on my question. Yes, we can use Apache Spark to read a JSON file as shown in the first example of the link provided in my post above. For example, in the following code I modified their first Apache Spark example by adding the multiline option, and it worked fine:

spark.read.option("multiline","true").format("json").load("file:/Workspace/Users/myusername@outlook.com/myJsonFile_in_Workspace.json").show()

But their second example from that same link states that you can directly read (query) a file from your Databricks workspace using Spark SQL and Databricks SQL, as shown below. So my question is: if the file is a multiline JSON file, how would you use the multiline option in that example?

SELECT * FROM json.`file:/Workspace/Users/<user-folder>/file.json`;

Hi @Saf4Databricks ,

You can try the following:

SELECT *
FROM json.`file:/Workspace/Users/<user-folder>/file.json`
OPTIONS(multiline,true);

Hi @Saf4Databricks ,

I tried the above and it didn't work. I think there is no way to pass this option using that syntax; at least, there is no entry in the documentation on how to do it. But the approach I proposed in my previous post aligns with what the Databricks documentation says. So the direct equivalent of spark.read.option("multiline", "true").format("json") in the SQL API is to create a view and use the OPTIONS clause to pass the required parameters.
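Putting it together, a SQL-only sketch of the workaround, using the placeholder path from the thread (the view name is just illustrative):

```sql
-- Pass reader options through the OPTIONS clause of a temporary view,
-- since SELECT * FROM json.`...` does not accept options directly.
CREATE TEMPORARY VIEW multilineJson
USING json
OPTIONS (path="file:/Workspace/Users/<user-folder>/file.json", multiline=true);

-- Then query the view like any table.
SELECT * FROM multilineJson;
```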

JSON file - Azure Databricks | Microsoft Learn
