<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Removing non-ascii and special character in pyspark in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/removing-non-ascii-and-special-character-in-pyspark/m-p/27784#M19632</link>
    <description>&lt;P&gt;@Shyamprasad Miryala​&amp;nbsp;: Thanks a lot. Can we pass multiple columns to column_name, separated by commas ','?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;</description>
    <pubDate>Mon, 23 Sep 2019 09:15:25 GMT</pubDate>
    <dc:creator>RohiniMathur</dc:creator>
    <dc:date>2019-09-23T09:15:25Z</dc:date>
    <item>
      <title>Removing non-ascii and special character in pyspark</title>
      <link>https://community.databricks.com/t5/data-engineering/removing-non-ascii-and-special-character-in-pyspark/m-p/27782#M19630</link>
      <description>&lt;P&gt;&lt;/P&gt;
&lt;P&gt;I am running Spark 2.4.4 with Python 2.7; the IDE is PyCharm.&lt;/P&gt;
&lt;P&gt;The input file (.csv) contains encoded values in some columns, as shown below.&lt;/P&gt;
&lt;P&gt;The file data looks like:&lt;/P&gt;
&lt;P&gt;COL1,COL2,COL3,COL4&lt;/P&gt;
&lt;P&gt;CM, 503004, (d$όνυ$F|'.h*Λ!ψμ=(.ξ; ,.ʽ|!3-2-704&lt;/P&gt;
&lt;P&gt;The output I am trying to get is:&lt;/P&gt;
&lt;P&gt;CM,503004,,3-2-704 -- all encoded and non-ASCII values removed.&lt;/P&gt;
&lt;P&gt;Code I tried:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("Python Spark").getOrCreate()
df = spark.read.csv("filepath\Customers_v01.csv", header=True, sep=",")
myres = df.rdd.map(lambda x: x[1].encode().decode('utf-8'))
print(myres.collect())&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;But this prints only:&lt;/P&gt;
&lt;P&gt;503004 -- i.e. only the COL2 value, since the lambda maps each row to x[1] alone.&lt;/P&gt;
&lt;P&gt;Please share your suggestions; is it possible to fix this in PySpark?&lt;/P&gt;
&lt;P&gt;Thanks a lot&lt;/P&gt;
&lt;P&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 23 Sep 2019 07:16:16 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/removing-non-ascii-and-special-character-in-pyspark/m-p/27782#M19630</guid>
      <dc:creator>RohiniMathur</dc:creator>
      <dc:date>2019-09-23T07:16:16Z</dc:date>
    </item>
    <item>
      <title>Re: Removing non-ascii and special character in pyspark</title>
      <link>https://community.databricks.com/t5/data-engineering/removing-non-ascii-and-special-character-in-pyspark/m-p/27783#M19631</link>
      <description>&lt;P&gt;&lt;/P&gt;&lt;P&gt;Hi @Rohini Mathur, use the code below on the columns containing non-ASCII and special characters.&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;df['column_name'].str.encode('ascii', 'ignore').str.decode('ascii')&lt;/CODE&gt;&lt;/PRE&gt;</description>
      <pubDate>Mon, 23 Sep 2019 07:57:02 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/removing-non-ascii-and-special-character-in-pyspark/m-p/27783#M19631</guid>
      <dc:creator>shyam_9</dc:creator>
      <dc:date>2019-09-23T07:57:02Z</dc:date>
    </item>
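A note on the reply above: the `.str.encode(...).str.decode(...)` accessors are pandas-style syntax, and on a Spark DataFrame the same idea is usually wrapped in a UDF instead. The core encode/ignore/decode trick it relies on can be sketched in plain Python (outside Spark; `strip_non_ascii` is a hypothetical helper name, not from the thread):

```python
def strip_non_ascii(value):
    # "ignore" drops every character that cannot be encoded as ASCII;
    # decoding back yields a plain str containing only ASCII characters.
    return value.encode("ascii", "ignore").decode("ascii")

# The garbled COL3 value from the question, cleaned:
print(strip_non_ascii("(d$όνυ$F|'.h*Λ!ψμ=(.ξ;"))  # -> (d$$F|'.h*!=(.;
print(strip_non_ascii("503004"))                   # plain ASCII passes through unchanged
```

In PySpark this function could be registered with `pyspark.sql.functions.udf` and applied per column with `withColumn`.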
    <item>
      <title>Re: Removing non-ascii and special character in pyspark</title>
      <link>https://community.databricks.com/t5/data-engineering/removing-non-ascii-and-special-character-in-pyspark/m-p/27784#M19632</link>
      <description>&lt;P&gt;@Shyamprasad Miryala​&amp;nbsp;: Thanks a lot. Can we pass multiple columns to column_name, separated by commas ','?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 23 Sep 2019 09:15:25 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/removing-non-ascii-and-special-character-in-pyspark/m-p/27784#M19632</guid>
      <dc:creator>RohiniMathur</dc:creator>
      <dc:date>2019-09-23T09:15:25Z</dc:date>
    </item>
    <item>
      <title>Re: Removing non-ascii and special character in pyspark</title>
      <link>https://community.databricks.com/t5/data-engineering/removing-non-ascii-and-special-character-in-pyspark/m-p/27785#M19633</link>
      <description>&lt;P&gt;@Shyamprasad Miryala​&amp;nbsp;: I tried this: myres = df['COLC'].str.encode('ascii', 'ignore').str.decode('ascii') but I am getting an error like pyspark.sql.utils.AnalysisException: u'Cannot resolve column name "" among (colA, (colB, (colC);'. Please help.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 23 Sep 2019 09:21:03 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/removing-non-ascii-and-special-character-in-pyspark/m-p/27785#M19633</guid>
      <dc:creator>RohiniMathur</dc:creator>
      <dc:date>2019-09-23T09:21:03Z</dc:date>
    </item>
    <item>
      <title>Re: Removing non-ascii and special character in pyspark</title>
      <link>https://community.databricks.com/t5/data-engineering/removing-non-ascii-and-special-character-in-pyspark/m-p/27786#M19634</link>
      <description>&lt;P&gt;&lt;/P&gt;&lt;P&gt;This was caused by the incorrect structure of the CSV file: some of the column names likely contain white space before the name itself. Remove the white spaces from the header row of the CSV file.&lt;/P&gt;</description>
      <pubDate>Mon, 23 Sep 2019 14:54:33 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/removing-non-ascii-and-special-character-in-pyspark/m-p/27786#M19634</guid>
      <dc:creator>shyam_9</dc:creator>
      <dc:date>2019-09-23T14:54:33Z</dc:date>
    </item>
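Pulling the thread's fixes together — stripping non-ASCII characters, applying the cleanup to several columns at once (the follow-up question), and trimming stray whitespace from column names (the suspected cause of the AnalysisException) — here is a plain-Python sketch; the row data and helper name are illustrative, not from the thread:

```python
def strip_non_ascii(value):
    # Drop anything outside the ASCII range, then decode back to str.
    return value.encode("ascii", "ignore").decode("ascii")

# Illustrative rows standing in for the CSV; note the stray space in " COL3",
# mimicking the header problem discussed in the thread.
rows = [
    {"COL1": "CM", "COL2": "503004",
     " COL3": "(d$όνυ$F|'.h*Λ!ψμ=(.ξ;", "COL4": ",.ʽ|!3-2-704"},
]

# Trim whitespace from every column name and clean every value in one pass.
cleaned = [
    {name.strip(): strip_non_ascii(value) for name, value in row.items()}
    for row in rows
]
print(cleaned[0]["COL4"])  # -> ,.|!3-2-704
```

The same per-column loop translates to Spark as repeated `withColumn` calls over `df.columns`, with the helper wrapped in a UDF.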
  </channel>
</rss>

