DataFrame to CSV write has issues due to multiple commas inside a row value

sai_sathya
New Contributor III

Hi all,

I'm converting data containing JSON fields with embedded commas into CSV format, and I'm facing challenges because the commas within the JSON are misinterpreted as column delimiters during the conversion.

I tried several methods to modify the data and to escape the commas inside the row values, but none of them worked. At the same time, I have to keep the JSON syntactically valid, because it is consumed elsewhere in the project.

Here is the sample PySpark code and data I worked with:

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

# On Databricks, `spark` is predefined; this line also makes the
# snippet runnable standalone.
spark = SparkSession.builder.getOrCreate()

schema = StructType([
    StructField("IDcol1", IntegerType(), True),
    StructField("IDcol2", IntegerType(), True),
    StructField("AdditionalRequestParameters", StringType(), True),
    StructField("RequestURL", StringType(), True)
])

data = [
    (1, 2, "{'Locale':'en','KnowledgeType':[{'Name':'IndustryKnowledge'},{'Name':'KnowledgeIndustry'}],'SegmentCountry':[{'Country':'US','IndustrySegment':'IP'}],'Setversion':'VersionValue','Flags':null,'ScalingID':0,'VersionInfo':{'PracticeSubType':'SubPracticeValue','Version':'VersionValue'}}", 'https://abc/something/API/2021v3/nothing/data'),
]

df = spark.createDataFrame(data, schema=schema)
df.display()

When displayed as a DataFrame, there is no doubt it works fine:

sai_sathya_0-1712850570456.png

That is how the expected data should look. But while writing it into a CSV file in my ADLS, it misbehaves and creates a new column for every comma that appears inside the JSON column.
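The write itself is essentially a plain CSV save, roughly like the following sketch (the ADLS path is shortened to a placeholder):

df.write.format("csv").save("abfss://<container>@<account>.dfs.core.windows.net/<dir>")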

Moreover, I'm unable to read the CSV data back; I tried display() and show(), and when I look into the CSV file generated in the container, this is what I find:

sai_sathya_1-1712850991923.png

Please help me with how to handle these commas. Thanks

 

6 REPLIES

ThomazRossito
Contributor

Hello,

I managed to parse it using the json_tuple function.

There are other functions that can help:
schema_of_json
get_json_object
from_json
 

from pyspark.sql.functions import col, json_tuple, regexp_replace

dd = (
    df.select(json_tuple(col("AdditionalRequestParameters"),
                         "Locale", "Setversion", "Flags", "ScalingID",
                         "VersionInfo", "SegmentCountry", "KnowledgeType"))
    .toDF("Locale", "Setversion", "Flags", "ScalingID",
          "VersionInfo", "SegmentCountry", "KnowledgeType")
    # Unpack the nested VersionInfo object
    .select("*", json_tuple(col("VersionInfo"), "PracticeSubType", "Version"))
    .drop("VersionInfo")
    .toDF("Locale", "Setversion", "Flags", "ScalingID",
          "SegmentCountry", "KnowledgeType", "PracticeSubType", "Version")
    # Strip the array brackets so json_tuple can read the object inside
    .withColumn("SegmentCountry", regexp_replace(col("SegmentCountry"), r"[\[\]]", ""))
    .select("*", json_tuple(col("SegmentCountry"), "Country", "IndustrySegment"))
    .drop("SegmentCountry")
    .toDF("Locale", "Setversion", "Flags", "ScalingID",
          "KnowledgeType", "PracticeSubType", "Version", "Country", "IndustrySegment")
    .withColumn("KnowledgeType", regexp_replace(col("KnowledgeType"), r"[\[\]]", ""))
    .select("*", json_tuple(col("KnowledgeType"), "Name", "Name"))
    .drop("KnowledgeType")
    .toDF("Locale", "Setversion", "Flags", "ScalingID",
          "PracticeSubType", "Version", "Country", "IndustrySegment", "Name", "Name1")
)

dd.display()
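As an aside, here is a minimal sketch of the from_json route mentioned above. The schema is written by hand from the sample payload, so adjust it to your real data; the single-quoted sample parses because allowSingleQuotes is true by default.

from pyspark.sql.functions import col, from_json
from pyspark.sql.types import (ArrayType, IntegerType, StringType,
                               StructField, StructType)

# Hand-written schema for the sample AdditionalRequestParameters payload
params_schema = StructType([
    StructField("Locale", StringType()),
    StructField("KnowledgeType", ArrayType(
        StructType([StructField("Name", StringType())]))),
    StructField("SegmentCountry", ArrayType(StructType([
        StructField("Country", StringType()),
        StructField("IndustrySegment", StringType()),
    ]))),
    StructField("Setversion", StringType()),
    StructField("ScalingID", IntegerType()),
    StructField("VersionInfo", StructType([
        StructField("PracticeSubType", StringType()),
        StructField("Version", StringType()),
    ])),
])

parsed = df.withColumn("params",
                       from_json(col("AdditionalRequestParameters"), params_schema))
parsed.select("IDcol1", "params.Locale", "params.VersionInfo.Version").display()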
Regards,
Thomaz Antonio Rossito Neto
Data Technical Lead / Data Engineer-Architect

Oh, thank you for your time. That works well, but it extracts the JSON data from the JSON column, which is fine; our real issue is that when we try to write the DataFrame to a CSV, the values from the AdditionalRequestParameters column get split into many columns because of the commas inside the data. So instead of getting 4 columns when reading the CSV file back, we get many more. Please let me know if there is any technique to handle this issue. Thanks

Hello,

You will need to set some CSV configuration parameters, such as quote and escape.

See the documentation below:
https://spark.apache.org/docs/latest/sql-data-sources-csv.html
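For example, a minimal sketch (the path is a placeholder): the quote and escape options keep embedded commas inside a single quoted field, and the same options must be set again when reading the file back.

(df.write
    .format("csv")
    .option("header", True)
    .option("quote", '"')   # wrap fields that contain the delimiter
    .option("escape", '"')  # escape quote characters inside quoted fields
    .save("abfss://<container>@<account>.dfs.core.windows.net/<dir>"))

df2 = (spark.read
    .option("header", True)
    .option("quote", '"')
    .option("escape", '"')
    .csv("abfss://<container>@<account>.dfs.core.windows.net/<dir>"))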

Regards,
Thomaz Antonio Rossito Neto
Data Technical Lead / Data Engineer-Architect

artsheiko
Databricks Employee

Hi Sai,

I assume the problem comes not from PySpark but from Excel.

I tried to reproduce the error and couldn't, which is a good thing, right? Please try the following:

 

df.write.format("csv").save("/Volumes/<my_catalog_name>/<my_schema_name>/<my_volume_name>/<file_name>")

 

(See how to create a Volume; Volumes add a lot of value, but if you prefer to use a path in your object storage, that is also fine.)

Once the file is downloaded locally, open it in Excel. It will not recognize the columns. Click the Data tab in the ribbon and find the "Text to Columns" button. In the dialog that opens: Delimited -> Comma -> Finish.
 
Hope it helps,
 
Best,
Artem

sai_sathya
New Contributor III

Sounds like a cool option, but here we are leveraging Azure Data Lake Storage as the storage medium and we write the data directly to the preferred location within ADLS, so that's where things get complicated. Any ideas?

artsheiko
Databricks Employee

Regardless of how you create the file (do take a look at Volumes, though; I'm 100% sure you'll see the value), please try to proceed with the Excel approach I described above.
