Spark adding NUL

vasu_sethia
New Contributor II

Hi, I have a DataFrame containing a JSON string, so the value looks like {"key": Value, "anotherKey": anotherValue}. When I try to write the DataFrame containing this string to CSV, Spark is adding a NUL character at the front of the line and at the end, so the final line looks like

NUL{"key": Value, "anotherKey": anotherValue}NUL

I really don't want this to happen. How can I prevent it?

The code I am using is:

df.coalesce(1).write.format("csv").option("header", false).option("quote", "").save(path)

8 REPLIES

Piper_Wilson
New Contributor III

Hello, @Vasu Sethia! My name is Piper and I'm one of the moderators for Databricks. Welcome and thank you for your question. Let's give it a bit longer to see what the community has to say. Otherwise, we'll circle back around soon.

-werners-
Esteemed Contributor III

Are you writing the actual JSON string into the CSV, or do you flatten the JSON into a table structure and write that to CSV?

I have a value in my DataFrame column in the format of a JSON string, and I am trying to write the DataFrame to CSV:

__________
Value
__________
{"Name": ABC, "age": 12}
__________

-werners-
Esteemed Contributor III

Hard to tell without seeing the code, but it might be the separator for the CSV? You do have commas in the string, and comma is the default separator for CSV.
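The clash can be sketched with Python's standard csv module rather than Spark (the value below is just the thread's example, quoted so it's valid JSON): a field that contains the delimiter has to be wrapped in the quote character by any conforming CSV writer. With quoting set to an empty string via option("quote", ""), Spark's CSV writer is known to fall back to \u0000 as the quote character, which may explain the NUL bytes wrapping the line — though the thread itself doesn't confirm that mechanism.

```python
import csv
import io

# The kind of value from the thread: a JSON string full of commas.
value = '{"Name": "ABC", "age": 12}'

buf = io.StringIO()
csv.writer(buf).writerow([value])

# Because the field contains the default delimiter (","), the writer
# must wrap it in the quote character and double the embedded quotes.
print(buf.getvalue())
# → "{""Name"": ""ABC"", ""age"": 12}"
```

Whatever character is configured as the quote, a field containing the separator gets wrapped in it, so an unusual quote character shows up at both ends of the line, exactly as reported.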

df.coalesce(1).write.format("csv").option("header", false).option("quote", "").save(path)

This is the code, and yes, I do have commas in the string.

-werners-
Esteemed Contributor III

I mean the code for 'df'.

Can you try to write with option("sep", ";")?
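For intuition on why changing the separator helps, here is the same toy csv-module sketch (illustrative only, not Spark itself): once the separator is a character the data doesn't contain, nothing triggers quoting and the field passes through verbatim.

```python
import csv
import io

value = '{"Name": "ABC", "age": 12}'

buf = io.StringIO()
# Semicolon separator with quoting disabled: the field contains no ";",
# so no quoting is needed and the JSON string is written unchanged.
writer = csv.writer(buf, delimiter=';', quoting=csv.QUOTE_NONE, quotechar=None)
writer.writerow([value])
print(buf.getvalue())
# → {"Name": "ABC", "age": 12}
```

The same reasoning applies on the Spark side: with sep=";" the JSON value no longer contains the separator, so the writer never needs to wrap it in a quote character.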

Thank you so much, this worked for me.

Hi @Vasu Sethia,

If Werners' response fully answered your question, would you be happy to mark the answer as best so that others can quickly find the solution?