02-09-2022 10:39 PM
Hi,
We have to convert a transformed dataframe to JSON format, so we used write with the JSON format on the final dataframe. But when we validate the output JSON, it is not in proper JSON format.
Could you please suggest how we can achieve this in Databricks PySpark?
Thank you
02-10-2022 12:33 AM
Could you please share the code you're using and the output you're getting?
02-10-2022 01:37 AM
Hi Melbourne,
Thank you for the reply.
We are using the below code to convert to JSON:
df.coalesce(1).write.format("json").save(dataLocation)
We are receiving the output as below:
{"col1":"A","col2":"B"}
{"col1":"C","col2":"D"}
We are expecting JSON format as below:
[{"col1":"A","col2":"B"},
{"col1":"C","col2":"D"}]
Thank you
02-10-2022 04:49 AM
What you're seeing in the file is JSON Lines. The difference between that and JSON is the absence of square brackets and of commas after every record.
You shouldn't face any problem reading the JSON data back using Spark.
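For example, a minimal sketch (the path is a placeholder) that reads it straight back:
# Spark reads JSON Lines out of the box; the path below is a placeholder
df_back = spark.read.json("/mnt/<path to Spark JSON output dir>")
df_back.show()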
02-10-2022 05:24 AM
But if we use the output in another Azure resource, it will fail, right?
02-10-2022 06:06 AM
Could you provide an example where this could be an issue? There are libraries available that read JSON Lines. You can use them, or you can add transformation logic to process the JSON files before the resource consumes them.
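For instance, a minimal sketch using pandas (paths are placeholders; assumes the file fits in driver memory):
import pandas as pd

# pandas can read JSON Lines directly
pdf = pd.read_json("/dbfs/mnt/<path to JSONlines file>", lines=True)
# Re-write it as a JSON array if the downstream resource insists on one
pdf.to_json("/dbfs/mnt/<path to JSON array output file>", orient="records")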
02-10-2022 06:37 AM
Hi,
Is there any way to produce proper JSON in Databricks itself?
Thank you
02-10-2022 07:20 AM
Convert your dataframe to pandas and write it to your storage using `.to_json(<path>, orient='records')`.
To get the desired output, set orient to 'records'.
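A minimal sketch for a DBFS-mounted location (the mount path is a placeholder; assumes the data fits in driver memory):
# Convert the Spark dataframe to pandas and write a JSON array to the mount
df_p = df.toPandas()
df_p.to_json("/dbfs/mnt/<path to JSON output file>", orient='records')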
Here is the AWS S3 equivalent code:
import io
import boto3
from pyspark.sql.functions import lit

# Create a session using boto3
session = boto3.Session(
    aws_access_key_id='<key ID>',
    aws_secret_access_key='<secret_key>'
)
# Create an S3 resource from the session
s3 = session.resource('s3')
json_buffer = io.StringIO()

# Create a dataframe and convert it to pandas
df = spark.range(4).withColumn("organisation", lit("Databricks"))
df_p = df.toPandas()
df_p.to_json(json_buffer, orient='records')

# Create the S3 object and upload the JSON buffer
s3_object = s3.Object('<bucket-name>', '<JSON file name>')
result = s3_object.put(Body=json_buffer.getvalue())
Hope this helps.
02-10-2022 09:06 PM
Thank you for the reply.
We tried converting the PySpark df to a pandas df to achieve the expected JSON format, but due to the below issues we stopped the conversion process.
02-10-2022 09:08 PM
But by using coalesce(1) on your PySpark df, you're doing the same thing. It'll be processed on one node.
02-10-2022 09:09 PM
Yes, sorry, my bad. We removed that part.
02-10-2022 10:25 PM
Converting 400mn+ rows into JSON is, in my opinion, not a good solution, as it'll take a lot of space for no reason.
Anyway, so you have JSON Lines in the file but you want the file to contain a single JSON array. There's a simpler way to do this.
Let Spark write your data with 400mn+ records into 'x' number of JSON files.
Since Databricks cells support shell commands, you can run the following script to convert the JSONL files to JSON files. Run it recursively over the part files or however you would like it (see the sketch after the command).
Let's say your blob store location is mounted on DBFS under the /mnt directory.
%sh
# Join the JSON Lines with commas, then wrap the result in square brackets
cat /dbfs/mnt/<path to JSONlines input file> | sed -e ':a' -e 'N' -e '$!ba' -e 's/\n/,/g' | sed 's/^/[/' | sed 's/$/]/' > /dbfs/mnt/<path to JSON output file>
The above command should convert your files in seconds.
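If Spark wrote multiple part files, here's a minimal sketch in Python to stitch them into one JSON array file (paths are placeholders; assumes the combined output fits in driver memory):
import glob
import json

records = []
# Collect every record from the JSONL part files Spark produced
for part in sorted(glob.glob("/dbfs/mnt/<path to Spark JSON output dir>/part-*.json")):
    with open(part) as f:
        records.extend(json.loads(line) for line in f if line.strip())

# Write a single JSON array file
with open("/dbfs/mnt/<path to JSON output file>", "w") as out:
    json.dump(records, out)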
Do share how it goes with this approach.
Credit: Medium post.
03-02-2022 09:01 AM
@Sailaja B - Does @Aman Sehgal's most recent answer help solve the problem? If it does, would you be happy to mark their answer as best?