09-23-2019 12:16 AM
I am running Spark 2.4.4 with Python 2.7; the IDE is PyCharm.
The input file (.csv) contains encoded values in some columns, like the sample below.
File data looks
COL1,COL2,COL3,COL4
CM, 503004, (d$όνυ$F|'.h*Λ!ψμ=(.ξ; ,.ʽ|!3-2-704
The output I am trying to get is
CM,503004,,3-2-704 ---- all encoded and non-ASCII values removed.
Code I tried:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("Python Spark").getOrCreate()
df = spark.read.csv("filepath\Customers_v01.csv", header=True, sep=",")
myres = df.rdd.map(lambda x: x[1].encode().decode('utf-8'))
print(myres.collect())
but this prints only
503004 -- only the COL2 value, since the lambda selects x[1] (the second column).
Please share your suggestions; is it possible to fix this issue in PySpark?
Thanks a lot
09-23-2019 12:57 AM
Hi @Rohini Mathur, use the code below on the columns containing non-ASCII and special characters.
df['column_name'].str.encode('ascii', 'ignore').str.decode('ascii')
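Note that the `.str.encode(...).str.decode(...)` accessor chain is pandas syntax rather than a PySpark `Column` method. The underlying ASCII round trip can be sketched in plain Python as below (a minimal sketch; the function name and sample string are illustrative, not from the thread):

```python
def strip_non_ascii(value):
    # Encode to ASCII, silently dropping any character that cannot be
    # represented, then decode back to a regular string.
    if value is None:
        return None
    return value.encode('ascii', 'ignore').decode('ascii')

print(strip_non_ascii(u"3-2-704\u03a9"))  # non-ASCII omega is dropped
```

In PySpark such a function could be wrapped with `pyspark.sql.functions.udf` and applied per column via `withColumn`, though a built-in like `regexp_replace` would avoid the UDF overhead.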
09-23-2019 02:15 AM
@Shyamprasad Miryala: Thanks a lot. Can we specify multiple columns in column_name, separated by commas ','?
09-23-2019 02:21 AM
@Shyamprasad Miryala: I tried myres = df['COLC'].str.encode('ascii', 'ignore').str.decode('ascii') but I am getting this error: pyspark.sql.utils.AnalysisException: u'Cannot resolve column name "" among (colA, (colB, (colC);'. Please help.
09-23-2019 07:54 AM
This was caused by the incorrect structure of the CSV file. Remove the whitespace from the CSV header; some of the column names likely contain whitespace before the name itself, which is why the lookup cannot resolve them.