10-28-2021 07:39 PM
Hi, I have a DataFrame containing a JSON string, so the value looks like {"key": Value, "anotherKey": anotherValue}. When I write this DataFrame to CSV, Spark adds a NUL character at the front of the line and at the end, so the final line looks like
NUL{"key": Value, "anotherKey": anotherValue}NUL
I really don't want this to happen. How can I prevent it?
The code I am using is
df.coalesce(1).write.format("csv").option("header", false).option("quote", "").save(path)
10-28-2021 08:01 PM
Hello, @Vasu Sethia! My name is Piper and I'm one of the moderators for Databricks. Welcome and thank you for your question. Let's give it a bit longer to see what the community has to say. Otherwise, we'll circle back around soon.
10-29-2021 01:14 AM
Are you writing the actual JSON string into the CSV, or do you flatten the JSON into a table structure and write that to CSV?
10-29-2021 02:09 AM
I have a value in my DataFrame column in the form of a JSON string, and I am trying to write the DataFrame to CSV:
--------------------------
Value
--------------------------
{"Name": ABC, "age": 12}
--------------------------
10-29-2021 04:30 AM
Hard to tell without seeing the code, but it might be the separator for the CSV. You do have commas in the string, and the comma is the default separator for CSV.
10-29-2021 05:10 AM
df.coalesce(1).write.format("csv").option("header", false).option("quote", "").save(path)
This is the code, and yes, I do have commas in the string.
10-29-2021 06:41 AM
I meant the code for 'df'.
Can you try writing with option("sep", ";")?
10-29-2021 08:11 AM
Thank you so much, this worked for me
10-29-2021 04:04 PM
hi @Vasu Sethia ,
If Werners' response fully answered your question, would you be happy to mark it as the best answer so that others can quickly find the solution?