<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic How to deal with a column name containing a .(dot) in a PySpark DataFrame? in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/how-to-deal-with-column-name-with-dot-in-pyspark-dataframe/m-p/27375#M19249</link>
    <description>&lt;UL&gt;&lt;LI&gt;We are streaming JSON data from a Kafka source, but some of the column names contain a .(dot).&lt;/LI&gt;&lt;LI&gt;Streaming JSON payload:&lt;/LI&gt;&lt;/UL&gt; 
&lt;PRE&gt;&lt;CODE&gt;df1 = df.selectExpr("CAST(value AS STRING)")&lt;/CODE&gt;&lt;/PRE&gt; 
&lt;P&gt;&lt;I&gt;{"pNum":"A14","from":"telecom","payload":{"TARGET":"1","COUNTRY":"India","EMAIL.1":"test@test.com","PHONE.1":"1122334455"}}&lt;/I&gt;&lt;/P&gt; 
&lt;UL&gt;&lt;LI&gt;In the JSON above, the keys EMAIL.1 and PHONE.1 contain a .(dot).&lt;/LI&gt;&lt;LI&gt;We extract the fields with get_json_object as shown below, but the EMAIL and PHONE values come back null.&lt;/LI&gt;&lt;/UL&gt; 
&lt;PRE&gt;&lt;CODE&gt;df2 = df1.select(get_json_object(df1["value"], '$.pNum').alias('pNum'),
                 get_json_object(df1["value"], '$.from').alias('from'),
                 get_json_object(df1["value"], '$.payload.TARGET').alias('TARGET'),
                 get_json_object(df1["value"], '$.payload.COUNTRY').alias('COUNTRY'),
                 get_json_object(df1["value"], '$.payload.EMAIL.1').alias('EMAIL'),
                 get_json_object(df1["value"], '$.payload.PHONE.1').alias('PHONE'))&lt;/CODE&gt;&lt;/PRE&gt; 
&lt;P&gt;How should we handle column names like this?&lt;/P&gt;
</description>
    <pubDate>Tue, 24 Dec 2019 12:14:09 GMT</pubDate>
    <dc:creator>MithuWagh</dc:creator>
    <dc:date>2019-12-24T12:14:09Z</dc:date>
    <item>
      <title>How to deal with a column name containing a .(dot) in a PySpark DataFrame?</title>
      <link>https://community.databricks.com/t5/data-engineering/how-to-deal-with-column-name-with-dot-in-pyspark-dataframe/m-p/27375#M19249</link>
      <description>&lt;UL&gt;&lt;LI&gt;We are streaming JSON data from a Kafka source, but some of the column names contain a .(dot).&lt;/LI&gt;&lt;LI&gt;Streaming JSON payload:&lt;/LI&gt;&lt;/UL&gt; 
&lt;PRE&gt;&lt;CODE&gt;df1 = df.selectExpr("CAST(value AS STRING)")&lt;/CODE&gt;&lt;/PRE&gt; 
&lt;P&gt;&lt;I&gt;{"pNum":"A14","from":"telecom","payload":{"TARGET":"1","COUNTRY":"India","EMAIL.1":"test@test.com","PHONE.1":"1122334455"}}&lt;/I&gt;&lt;/P&gt; 
&lt;UL&gt;&lt;LI&gt;In the JSON above, the keys EMAIL.1 and PHONE.1 contain a .(dot).&lt;/LI&gt;&lt;LI&gt;We extract the fields with get_json_object as shown below, but the EMAIL and PHONE values come back null.&lt;/LI&gt;&lt;/UL&gt; 
&lt;PRE&gt;&lt;CODE&gt;df2 = df1.select(get_json_object(df1["value"], '$.pNum').alias('pNum'),
                 get_json_object(df1["value"], '$.from').alias('from'),
                 get_json_object(df1["value"], '$.payload.TARGET').alias('TARGET'),
                 get_json_object(df1["value"], '$.payload.COUNTRY').alias('COUNTRY'),
                 get_json_object(df1["value"], '$.payload.EMAIL.1').alias('EMAIL'),
                 get_json_object(df1["value"], '$.payload.PHONE.1').alias('PHONE'))&lt;/CODE&gt;&lt;/PRE&gt; 
&lt;P&gt;How should we handle column names like this?&lt;/P&gt;
</description>
      <pubDate>Tue, 24 Dec 2019 12:14:09 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/how-to-deal-with-column-name-with-dot-in-pyspark-dataframe/m-p/27375#M19249</guid>
      <dc:creator>MithuWagh</dc:creator>
      <dc:date>2019-12-24T12:14:09Z</dc:date>
    </item>
    <item>
      <title>Re: How to deal with a column name containing a .(dot) in a PySpark DataFrame?</title>
      <link>https://community.databricks.com/t5/data-engineering/how-to-deal-with-column-name-with-dot-in-pyspark-dataframe/m-p/27376#M19250</link>
      <description>&lt;P&gt;Hi @Mithu Wagh, you can enclose the column name in backticks:&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;df.select("`col0.1`")&lt;/CODE&gt;&lt;/PRE&gt;</description>
      <pubDate>Mon, 30 Dec 2019 11:27:03 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/how-to-deal-with-column-name-with-dot-in-pyspark-dataframe/m-p/27376#M19250</guid>
      <dc:creator>shyam_9</dc:creator>
      <dc:date>2019-12-30T11:27:03Z</dc:date>
    </item>
  </channel>
</rss>

