
How to import data and apply multiline and charset UTF8 at the same time?

HafidzZulkifli
New Contributor II

I'm running Spark 2.2.0 at the moment. I'm facing an issue when importing data of Mexican origin, where certain columns contain special characters and multiline values.

Ideally, this is the command I'd like to run:

T_new_exp = spark.read \
    .option("charset", "ISO-8859-1") \
    .option("parserLib", "univocity") \
    .option("multiLine", "true") \
    .schema(schema) \
    .csv(file)

However, the above gives me properly lined rows but not the correct charset. Instead of displaying e acute, for example, I get the replacement character (U+FFFD). Only when I remove the multiLine option do I get the right charset (but then the multiline issue is back).

The only workaround I have for now is to preprocess the data separately before it is loaded into Databricks; that is, fix the multiline records first in Unix and let Databricks handle the encoding afterwards. A sketch of the same separation of concerns in reverse order is shown below.
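For reference, here is a minimal sketch of that kind of preprocessing, done the other way around: transcode the file to UTF-8 up front so the multiLine read no longer needs a charset override. The file paths are hypothetical, and this assumes the raw file can be read from the driver node:

# Sketch with hypothetical paths: transcode the raw file from ISO-8859-1
# to UTF-8 before handing it to Spark, so the multiLine parser sees
# clean UTF-8 bytes and no charset option is needed.
import codecs

src = "/dbfs/raw/mexico_data.csv"         # hypothetical input path
dst = "/dbfs/clean/mexico_data_utf8.csv"  # hypothetical output path

with codecs.open(src, "r", encoding="iso-8859-1") as fin, \
     codecs.open(dst, "w", encoding="utf-8") as fout:
    for line in fin:
        fout.write(line)

T_new_exp = spark.read \
    .option("multiLine", "true") \
    .schema(schema) \
    .csv("dbfs:/clean/mexico_data_utf8.csv")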

Is there a simpler way than this?

8 REPLIES

kali_tummala
New Contributor II

Did you try the encoding option?

.option("encoding", "UTF-8").csv(inputPath)


kali_tummala
New Contributor II

@Hafidz Zulkifli check my answer

HafidzZulkifli
New Contributor II

@kali.tummala@gmail.com Tried it just now. It didn't work. There are two parts to the problem: one is handling multiline records; the other is handling a differing charset.

sean_owen
Honored Contributor II

Are you sure it's the parsing that's the issue, and not simply the display?
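One quick way to check this, as a sketch: inspect the parsed value directly instead of relying on the rendered output. The column name here is hypothetical:

# Sketch: repr() exposes the actual code points in the parsed value.
# A real U+FFFD in the data means parsing (not display) is at fault.
row = T_new_exp.select("some_text_column").first()
print(repr(row[0]))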

Smruti
New Contributor II

Hi,

Did anyone find a solution for this?

nsuguru310
New Contributor II

Please make sure you are using or enforcing Python 3. Python 2 is the default, and it has known issues with encoding.
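A quick sketch to confirm which interpreter the notebook is running on:

# Sketch: verify the notebook runs Python 3; on Python 2, str is a
# byte string and non-ASCII data is easy to mangle.
import sys

print(sys.version)
assert sys.version_info[0] >= 3, "Running Python 2; switch the cluster to Python 3"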

MikeDuwee
New Contributor II

.option("charset", "iso-8859-1")

.option("multiLine", True)

.option("lineSep ",'\n\r')

DianGermishuize
New Contributor II

You could also potentially use the .withColumn() function on the DataFrame, together with the pyspark.sql.functions.encode function, to convert the character set of a string field to the one you need.

Convert the Character Set/Encoding of a String field in a PySpark DataFrame on Databricks - diangerm...
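A minimal sketch of that idea, assuming the string column was decoded under the wrong charset and no characters were already lost to U+FFFD during the original read; the column name is hypothetical:

# Sketch: re-interpret a mis-decoded string column. encode() turns the
# string back into bytes under one charset; decode() re-reads those
# bytes under another. This cannot recover characters that were already
# replaced with U+FFFD at read time. Column name is hypothetical.
from pyspark.sql import functions as F

fixed = T_new_exp.withColumn(
    "description",
    F.decode(F.encode(F.col("description"), "ISO-8859-1"), "UTF-8"),
)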
