<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Load an explicit schema from an external metadata.csv file or a json file for reading csv's into dataframe in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/load-an-explicit-schema-from-an-external-metadata-csv-file-or-a/m-p/17676#M11644</link>
    <description>&lt;P&gt;&lt;/P&gt;
&lt;P&gt;Hi,&lt;/P&gt;
&lt;P&gt;I have a metadata csv file which contains column name, and datatype such as&lt;/P&gt;
&lt;P&gt;Colm1: INT&lt;/P&gt;
&lt;P&gt;Colm2: String.&lt;/P&gt;
&lt;P&gt;I can also get the same in a json format as shown:&lt;/P&gt;
&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;
&lt;P&gt;I can store this on ADLS. How can I convert this into a schema like: "Myschema" that I can then pass during spark.read.format("csv") method while reading the datafile for the same metadata? When I infer schema for the datafile csv for multiple incremental files , I get clashes while writing into delta such as &lt;/P&gt;
&lt;P&gt;"Failed to merge fields 'Colm1' and 'Colm1'. Failed to merge incompatible data types IntegerType and StringType &lt;/P&gt;
&lt;P&gt;Any pointers/notes would be appreciated.&lt;/P&gt;
&lt;P&gt;Thanks!&lt;/P&gt; 
&lt;P&gt;&lt;/P&gt;</description>
    <pubDate>Thu, 15 Jul 2021 11:45:44 GMT</pubDate>
    <dc:creator>AnandNair</dc:creator>
    <dc:date>2021-07-15T11:45:44Z</dc:date>
    <item>
      <title>Load an explicit schema from an external metadata.csv file or a json file for reading csv's into dataframe</title>
      <link>https://community.databricks.com/t5/data-engineering/load-an-explicit-schema-from-an-external-metadata-csv-file-or-a/m-p/17676#M11644</link>
      <description>&lt;P&gt;&lt;/P&gt;
&lt;P&gt;Hi,&lt;/P&gt;
&lt;P&gt;I have a metadata csv file which contains column name, and datatype such as&lt;/P&gt;
&lt;P&gt;Colm1: INT&lt;/P&gt;
&lt;P&gt;Colm2: String.&lt;/P&gt;
&lt;P&gt;I can also get the same in a json format as shown:&lt;/P&gt;
&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;
&lt;P&gt;I can store this on ADLS. How can I convert this into a schema like: "Myschema" that I can then pass during spark.read.format("csv") method while reading the datafile for the same metadata? When I infer schema for the datafile csv for multiple incremental files , I get clashes while writing into delta such as &lt;/P&gt;
&lt;P&gt;"Failed to merge fields 'Colm1' and 'Colm1'. Failed to merge incompatible data types IntegerType and StringType &lt;/P&gt;
&lt;P&gt;Any pointers/notes would be appreciated.&lt;/P&gt;
&lt;P&gt;Thanks!&lt;/P&gt; 
&lt;P&gt;&lt;/P&gt;</description>
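A minimal sketch of one way to approach the question above: parse the metadata CSV into a Spark DDL schema string ("col TYPE, ..."), which `spark.read.schema(...)` accepts directly instead of inferring types. The metadata layout (two columns: name, type), the `TYPE_MAP` entries, and the ADLS path are assumptions for illustration, not taken from the original post.

```python
import csv
import io

# Assumed mapping from the metadata file's type names to Spark SQL DDL
# types; extend as needed for the types your metadata actually uses.
TYPE_MAP = {
    "int": "INT",
    "string": "STRING",
    "double": "DOUBLE",
    "date": "DATE",
    "timestamp": "TIMESTAMP",
}

def ddl_schema_from_metadata(csv_text):
    """Build a DDL schema string, e.g. "Colm1 INT, Colm2 STRING",
    from metadata rows of the form "name,type"."""
    reader = csv.reader(io.StringIO(csv_text))
    fields = []
    for name, dtype in reader:
        fields.append(f"{name.strip()} {TYPE_MAP[dtype.strip().lower()]}")
    return ", ".join(fields)

metadata = "Colm1,INT\nColm2,String"
schema = ddl_schema_from_metadata(metadata)
print(schema)  # Colm1 INT, Colm2 STRING

# In Databricks, the metadata file could first be read from ADLS
# (e.g. via dbutils.fs.head on an abfss:// path), and the resulting
# string passed straight to the reader, avoiding schema inference:
#   df = spark.read.format("csv").schema(schema).load("abfss://.../data.csv")
```

Because every incremental file is read with the same explicit schema, the "Failed to merge incompatible data types" clash during the Delta write should not arise from inference differences between files.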
      <pubDate>Thu, 15 Jul 2021 11:45:44 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/load-an-explicit-schema-from-an-external-metadata-csv-file-or-a/m-p/17676#M11644</guid>
      <dc:creator>AnandNair</dc:creator>
      <dc:date>2021-07-15T11:45:44Z</dc:date>
    </item>
  </channel>
</rss>

