B1123451020-502,"","{""m"": {""difference"": 60}}","","","",2022-02-12T15:40:00.783Z
B1456741975-266,"","{""m"": {""difference"": 60}}","","","",2022-02-04T17:03:59.566Z
B1789753479-460,"","",",","","",2022-02-18T14:46:57.332Z
B1456741977-123,"","{""m"": {""difference"": 60}}","","","",2022-02-04T17:03:59.566Z
df_inputfile = (spark.read.format("com.databricks.spark.csv")
                .option("inferSchema", "true")
                .option("header", "false")
                .option("quote", '"')      # quote character (this is also the default)
                .option("escape", '"')     # escape character set to the quote character
                .option("multiLine", "true")
                .option("delimiter", ",")
                .load('<path to csv>'))
print(df_inputfile.count()) # Prints 3
print(df_inputfile.distinct().count()) # Prints 4
I'm trying to read the data above from a CSV file, but I end up with a wrong count even though the DataFrame contains all the expected records: df_inputfile.count() prints 3 although it should print 4 (while, oddly, df_inputfile.distinct().count() prints 4).
It looks like this happens because of the lone comma in the 4th column of the 3rd row. Can someone explain why?
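Spark hands CSV parsing to the univocity parser, which resolves the ambiguity differently than the sketch below (hence Spark's count of 3), but Python's standard csv module can illustrate the underlying mechanism: once the escape character is set to the quote character itself, every `"` encountered inside a quoted field escapes the character after it instead of closing the field, so record boundaries (including newlines) get swallowed. A minimal sketch, assuming the four rows above:

```python
import csv
import io

# The four sample rows, exactly as they appear in the file.
data = (
    'B1123451020-502,"","{""m"": {""difference"": 60}}","","","",2022-02-12T15:40:00.783Z\n'
    'B1456741975-266,"","{""m"": {""difference"": 60}}","","","",2022-02-04T17:03:59.566Z\n'
    'B1789753479-460,"","",",","","",2022-02-18T14:46:57.332Z\n'
    'B1456741977-123,"","{""m"": {""difference"": 60}}","","","",2022-02-04T17:03:59.566Z\n'
)

# Default dialect: a doubled quote ("") inside a quoted field is a literal quote,
# so every physical line parses cleanly as one 7-field record.
default_rows = list(csv.reader(io.StringIO(data)))
print(len(default_rows))  # 4

# Quote doubling disabled and '"' used as the escape character instead -- roughly
# what escape='"' asks of a CSV parser. Now each '"' inside a quoted field escapes
# the next character, the very first quoted field never closes, and the newlines
# become field content.
escaped_rows = list(
    csv.reader(io.StringIO(data), doublequote=False, escapechar='"')
)
print(len(escaped_rows))  # 1
```

Python's csv module collapses everything into a single record here, while univocity re-synchronizes and yields 3; the counts differ, but both show the same effect: with `escape` set to `"`, fields like the lone `","` on the third row change where the parser believes records end.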