Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

How to import data and apply multiline and charset UTF8 at the same time?

HafidzZulkifli
New Contributor II

I'm running Spark 2.2.0 at the moment. I'm currently facing an issue when importing data of Mexican origin, where certain columns contain special characters and span multiple lines.

Ideally, this is the command I'd like to run:

T_new_exp = spark.read \
    .option("charset", "ISO-8859-1") \
    .option("parserLib", "univocity") \
    .option("multiLine", "true") \
    .schema(schema) \
    .csv(file)

However, the above gives me properly lined rows but without the correct charset. Instead of displaying e-acute, for example, I get the replacement character (U+FFFD). It's only when I remove the multiLine option that I get the right charset (but then the multiline problem isn't fixed).

The only workaround I have for now is to preprocess the data separately before it is loaded into Databricks; that is, fix the multiline issue first in Unix and let Databricks handle the Unicode issues later.
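One variant of that idea, sketched below, is to transcode the file to UTF-8 first and then let Spark's multiLine reader (which behaves as if the input were UTF-8) do the rest. The paths here are placeholders:

with open("/dbfs/tmp/mexico_raw.csv", "r", encoding="iso-8859-1") as src, \
     open("/dbfs/tmp/mexico_utf8.csv", "w", encoding="utf-8") as dst:
    dst.write(src.read())  # re-encode the whole file as UTF-8

T_new_exp = spark.read \
    .option("multiLine", "true") \
    .schema(schema) \
    .csv("/dbfs/tmp/mexico_utf8.csv")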

Is there a simpler way than this?

8 REPLIES

kali_tummala
New Contributor II

Did you try the encoding option?

.option("encoding", "UTF-8").csv(inputPath)

kali_tummala
New Contributor II

@Hafidz Zulkifli, check my answer.

HafidzZulkifli
New Contributor II

@kali.tummala@gmail.com Tried it just now. It didn't work. There are two parts to the problem: one is handling multiline records; the other is handling the differing charset.

sean_owen
Databricks Employee

Are you sure it's the parsing that's the issue, and not simply the display?
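One way to check is to look at the raw bytes of an affected column rather than its rendered form (a sketch; some_column is a placeholder). If parsing succeeded, an e-acute shows up as the UTF-8 byte sequence C3A9; if the bytes were mangled at read time, you'll see EFBFBD, the encoding of U+FFFD:

from pyspark.sql import functions as F
T_new_exp.select(F.hex(F.encode(F.col("some_column"), "UTF-8"))).show(5, False)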

Smruti
New Contributor II

Hi,

Did anyone find a solution for this?

nsuguru310
New Contributor II

Please make sure you are using or enforcing Python 3. Python 2 is the default and it has issues with encoding.
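A quick sanity check from a notebook cell (a sketch):

import sys
print(sys.version_info)  # should report major version 3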

MikeDuwee
New Contributor II

.option("charset", "iso-8859-1")

.option("multiLine", True)

.option("lineSep ",'\n\r')

DianGermishuize
New Contributor II

You could also potentially use the .withColumn() function on the DataFrame, together with the pyspark.sql.functions.encode and decode functions, to convert the character set to the one you need.

Convert the Character Set/Encoding of a String field in a PySpark DataFrame on Databricks - diangerm...
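A sketch of that idea (the column name city is a placeholder): encode the string back to the charset it was wrongly decoded with, then decode the resulting bytes as UTF-8. This repairs classic mojibake (e.g. Ã© instead of é), though it cannot recover characters already collapsed to U+FFFD:

from pyspark.sql import functions as F
df_fixed = df.withColumn("city", F.decode(F.encode(F.col("city"), "ISO-8859-1"), "UTF-8"))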
