Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

JSON validation fails after writing a PySpark dataframe to JSON format

SailajaB
Valued Contributor III

Hi

We have to convert our transformed dataframe to JSON format, so we used write with the JSON format on the final dataframe. But when we validate the output JSON, it's not in proper JSON format.

Could you please suggest how we can achieve this in Databricks PySpark?

Thank you


12 REPLIES

AmanSehgal
Honored Contributor III

Could you please share:

  1. a sample of the dataframe and the improper JSON received
  2. your code to convert the data to JSON format

SailajaB
Valued Contributor III

Hi Melbourne,

Thank you for the reply

We are using the below code to convert to JSON:

df.coalesce(1).write.format("json").save(dataLocation)

We are receiving the output as below:

{"col1":"A","col2":"B"}

{"col1":"C","col2":"D"}

We are expecting JSON format as below:

[{"col1":"A","col2":"B"},

{"col1":"C","col2":"D"}]

Thank you

AmanSehgal
Honored Contributor III

What you're seeing in the file is JSON Lines. The difference between that and JSON is the absence of enclosing square brackets and of commas after every record.

You shouldn't face any problem reading the JSON Lines data back using Spark.
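For example, a minimal sketch of reading the JSON Lines output back with Spark, reusing the dataLocation path from your snippet above:

# Spark treats each line as one JSON record, so the files read back directly
df_back = spark.read.format("json").load(dataLocation)
df_back.show()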

SailajaB
Valued Contributor III

But if we use the output in another Azure resource, it will fail, right?

AmanSehgal
Honored Contributor III

Could you provide an example where this could be an issue? There are libraries available that can read JSON Lines from a file. You can use them, or you can add transformation logic to convert the JSON Lines files before the resource consumes them.
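For example, a minimal sketch of that kind of transformation logic in plain Python (the file paths below are just assumed examples, adjust them to your mount):

import json

# Assumed example paths on a DBFS mount
src_path = "/dbfs/mnt/output/part-00000.json"          # JSON Lines written by Spark
dst_path = "/dbfs/mnt/output/part-00000-array.json"    # proper JSON array

# Read each line as one JSON record, then dump the whole list as a JSON array
with open(src_path) as src:
    records = [json.loads(line) for line in src if line.strip()]

with open(dst_path, "w") as dst:
    json.dump(records, dst)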

SailajaB
Valued Contributor III

Hi,

Is there any way to produce proper JSON in Databricks itself?

Thank you

AmanSehgal
Honored Contributor III

Convert your dataframe into pandas and write to your storage using `.to_json(<path>, orient='records')`.

To get the desired output, set orient to 'records'.

Here is the AWS S3 equivalent code:

import io
import boto3
from pyspark.sql.functions import lit

# Create a session using boto3
session = boto3.Session(
    aws_access_key_id='<key ID>',
    aws_secret_access_key='<secret_key>'
)

# Create an S3 resource from the session
s3 = session.resource('s3')

# In-memory buffer to hold the JSON output
json_buffer = io.StringIO()

# Create a sample dataframe and convert it to pandas
df = spark.range(4).withColumn("organisation", lit("Databricks"))
df_p = df.toPandas()

# Write the pandas dataframe to the buffer as a JSON array
df_p.to_json(json_buffer, orient='records')

# Create the S3 object and upload the buffer contents
s3_object = s3.Object('<bucket-name>', '<JSON file name>')
result = s3_object.put(Body=json_buffer.getvalue())
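On Azure, if the blob container is mounted on DBFS (as it is later in this thread), a similar sketch can skip boto3 and write the pandas output straight to the mount path (the path below is just an assumed example):

from pyspark.sql.functions import lit

# Sample dataframe converted to pandas
df = spark.range(4).withColumn("organisation", lit("Databricks"))
df_p = df.toPandas()

# Assumed example path on a mounted container; adjust to your mount point
df_p.to_json("/dbfs/mnt/output/databricks_sample.json", orient='records')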

Hope this helps.

SailajaB
Valued Contributor III

Thank you for the reply.

We tried to convert the PySpark df to a pandas df to achieve the expected JSON format, but due to the below issues we stopped the conversion process:

  1. Our PySpark dataframe is very huge, around 400 million+ rows, so our output should be in multiple files. As the PySpark df is distributed, we don't need to worry about the multiple-file logic, whereas a pandas df runs on a single node and would generate one huge output file.
  2. When we tried to convert the PySpark df to a pandas df, it failed because our dataframe contains deeply nested attributes.

AmanSehgal
Honored Contributor III

But by using coalesce(1) on your PySpark df, you're doing the same thing. It'll be processed on one node.

SailajaB
Valued Contributor III

Yes, sorry, my bad. We removed that part.

AmanSehgal
Honored Contributor III

Converting 400mn+ rows into JSON is, in my opinion, not a good solution, as it'll take a lot of space for no reason.

Anyway, so you have JSON Lines in the file but you want a single JSON array in the file. There's a simpler way to do this.

Let Spark write your data with 400mn+ records into 'x' number of JSON files.
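For instance, a minimal sketch of letting Spark spread the write across multiple files (the partition count and output path are just assumed examples):

# Assumed example: write the data as 100 JSON Lines part files instead of one
df.repartition(100).write.format("json").mode("overwrite").save("/mnt/<output directory>")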

Since Databricks cells support shell commands, you can run the following script to convert the JSON Lines files to JSON files. Run it recursively or however you would like.

Let's say your blob store location is mounted on DBFS in the mnt directory.

%sh
cat /dbfs/mnt/<path to JSONlines input file> | sed -e ':a' -e 'N' -e '$!ba' -e 's/\n/,/g' | sed 's/^/[/' | sed 's/$/]/' > /dbfs/mnt/<path to JSON output file>

The above command should convert your files in seconds.
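To run it across all the part files Spark produced, a small loop like this could work (the directory names are assumed examples; adjust them to your mount):

%sh
# Assumed example directories on the DBFS mount
mkdir -p /dbfs/mnt/<converted output directory>
for f in /dbfs/mnt/<JSONlines output directory>/part-*.json; do
  cat "$f" | sed -e ':a' -e 'N' -e '$!ba' -e 's/\n/,/g' | sed 's/^/[/' | sed 's/$/]/' > "/dbfs/mnt/<converted output directory>/$(basename "$f")"
done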

Do share how it goes with this approach.

Credit: Medium post.

Anonymous
Not applicable

@Sailaja B - Does @Aman Sehgal's most recent answer help solve the problem? If it does, would you be happy to mark their answer as best?
