<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Databricks All Delta Tables Data Read in Get Started Discussions</title>
    <link>https://community.databricks.com/t5/get-started-discussions/databricks-all-delta-tables-data-read/m-p/79886#M7860</link>
    <description>&lt;P&gt;Hi &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/105213"&gt;@Krishna2110&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;&lt;P&gt;It's a bit unclear to me what your problem is. If you don't apply any filter, then all of the data will be read into the DataFrame, as below.&lt;/P&gt;&lt;P&gt;df = spark.read.table('my_table')&lt;/P&gt;&lt;P&gt;There is, however, a limit on the number of rows displayed in the UI, so perhaps it only looks like not all of the data was read?&lt;/P&gt;&lt;P&gt;Or are you asking about a situation where you have a set of different tables with the same schema and you would like to query them all? In that case you can iterate over the tables, read each one, and union the results.&lt;/P&gt;</description>
    <pubDate>Mon, 22 Jul 2024 15:02:34 GMT</pubDate>
    <dc:creator>szymon_dybczak</dc:creator>
    <dc:date>2024-07-22T15:02:34Z</dc:date>
    <item>
      <title>Databricks All Delta Tables Data Read</title>
      <link>https://community.databricks.com/t5/get-started-discussions/databricks-all-delta-tables-data-read/m-p/79884#M7859</link>
      <description>&lt;P&gt;If we want to read the data of all the Databricks tables at a single time, how can we do it?&lt;/P&gt;</description>
      <pubDate>Mon, 22 Jul 2024 14:44:40 GMT</pubDate>
      <guid>https://community.databricks.com/t5/get-started-discussions/databricks-all-delta-tables-data-read/m-p/79884#M7859</guid>
      <dc:creator>Krishna2110</dc:creator>
      <dc:date>2024-07-22T14:44:40Z</dc:date>
    </item>
    <item>
      <title>Re: Databricks All Delta Tables Data Read</title>
      <link>https://community.databricks.com/t5/get-started-discussions/databricks-all-delta-tables-data-read/m-p/79886#M7860</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/105213"&gt;@Krishna2110&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;&lt;P&gt;It's a bit unclear to me what your problem is. If you don't apply any filter, then all of the data will be read into the DataFrame, as below.&lt;/P&gt;&lt;P&gt;df = spark.read.table('my_table')&lt;/P&gt;&lt;P&gt;There is, however, a limit on the number of rows displayed in the UI, so perhaps it only looks like not all of the data was read?&lt;/P&gt;&lt;P&gt;Or are you asking about a situation where you have a set of different tables with the same schema and you would like to query them all? In that case you can iterate over the tables, read each one, and union the results.&lt;/P&gt;</description>
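The iterate-and-union suggestion in the reply above can be sketched in PySpark. This is a minimal sketch assuming the tables share a schema; the catalog and schema names are the ones from this thread, and the table list would really come from `SHOW TABLES` as the commented usage shows.

```python
from functools import reduce

def union_all(dfs):
    # unionByName matches columns by name, so the tables' column
    # order may differ as long as their schemas agree
    return reduce(lambda a, b: a.unionByName(b), dfs)

# On Databricks (illustrative; names taken from this thread):
# rows = spark.sql("SHOW TABLES IN ewt_edp_prod.crm_raw").collect()
# dfs = [spark.table(f"ewt_edp_prod.crm_raw.{r.tableName}") for r in rows]
# combined = union_all(dfs)
```

`unionByName` is preferred over `union` here because it aligns columns by name rather than by position.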
      <pubDate>Mon, 22 Jul 2024 15:02:34 GMT</pubDate>
      <guid>https://community.databricks.com/t5/get-started-discussions/databricks-all-delta-tables-data-read/m-p/79886#M7860</guid>
      <dc:creator>szymon_dybczak</dc:creator>
      <dc:date>2024-07-22T15:02:34Z</dc:date>
    </item>
    <item>
      <title>Re: Databricks All Delta Tables Data Read</title>
      <link>https://community.databricks.com/t5/get-started-discussions/databricks-all-delta-tables-data-read/m-p/79888#M7861</link>
      <description>&lt;P&gt;Thank you for your input.&lt;/P&gt;&lt;P&gt;If there are 40 tables in the same catalog, I want to read the data (or the schema) of all of them in a single command cell using PySpark.&lt;/P&gt;&lt;P&gt;I have written this code, but it throws an error:&lt;/P&gt;&lt;LI-CODE lang="python"&gt;tables = spark.sql("SHOW TABLES IN ewt_edp_prod.crm_raw")
for table in tables:
    table_name = f"{table.database}.{table.name}"
    try:
        df = spark.table(table_name)
        count = df.count()
        print(f"Table {table_name} is accessible and has {count} rows.")
    except Exception as e:
        print(f"Error accessing table {table_name}: {e}")&lt;/LI-CODE&gt;&lt;P&gt;Can you help me find what I have written wrong in this code?&lt;/P&gt;</description>
      <pubDate>Mon, 22 Jul 2024 15:10:44 GMT</pubDate>
      <guid>https://community.databricks.com/t5/get-started-discussions/databricks-all-delta-tables-data-read/m-p/79888#M7861</guid>
      <dc:creator>Krishna2110</dc:creator>
      <dc:date>2024-07-22T15:10:44Z</dc:date>
    </item>
    <item>
      <title>Re: Databricks All Delta Tables Data Read</title>
      <link>https://community.databricks.com/t5/get-started-discussions/databricks-all-delta-tables-data-read/m-p/79897#M7862</link>
      <description>&lt;P&gt;Yeah, sure. I'll send you the code once I'm home.&lt;/P&gt;</description>
      <pubDate>Mon, 22 Jul 2024 15:30:17 GMT</pubDate>
      <guid>https://community.databricks.com/t5/get-started-discussions/databricks-all-delta-tables-data-read/m-p/79897#M7862</guid>
      <dc:creator>szymon_dybczak</dc:creator>
      <dc:date>2024-07-22T15:30:17Z</dc:date>
    </item>
    <item>
      <title>Re: Databricks All Delta Tables Data Read</title>
      <link>https://community.databricks.com/t5/get-started-discussions/databricks-all-delta-tables-data-read/m-p/79901#M7863</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/105213"&gt;@Krishna2110&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;&lt;P&gt;Here it is; it should work now. The main fixes were calling .collect() so you iterate over rows instead of the DataFrame itself, and building the fully qualified three-part table name.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="python"&gt;tables = spark.sql("SHOW TABLES IN ewt_edp_prod.crm_raw").collect()
for row in tables:
    table_name = f"ewt_edp_prod.{row[0]}.{row[1]}"
    try:
        df = spark.table(table_name)
        count = df.count()
        print(f"Table {table_name} is accessible and has {count} rows.")
    except Exception as e:
        print(f"Error accessing table {table_name}: {e}")&lt;/LI-CODE&gt;</description>
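Since the original question asked for the data *or the schema* of each table, here is a small hedged variant of the same loop. The helper builds the three-part name from a `SHOW TABLES` row by position (the column-order assumption is noted in the comment), and the commented lines sketch how it could print each table's schema on Databricks.

```python
def qualified_name(catalog, row):
    # SHOW TABLES rows are typically (database/namespace, tableName, isTemporary),
    # so row[0] is the schema and row[1] is the table name
    return f"{catalog}.{row[0]}.{row[1]}"

# On Databricks, to print every table's schema as well as its row count:
# for row in spark.sql("SHOW TABLES IN ewt_edp_prod.crm_raw").collect():
#     df = spark.table(qualified_name("ewt_edp_prod", row))
#     print(f"{qualified_name('ewt_edp_prod', row)}: {df.count()} rows")
#     df.printSchema()
```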
      <pubDate>Mon, 22 Jul 2024 16:00:45 GMT</pubDate>
      <guid>https://community.databricks.com/t5/get-started-discussions/databricks-all-delta-tables-data-read/m-p/79901#M7863</guid>
      <dc:creator>szymon_dybczak</dc:creator>
      <dc:date>2024-07-22T16:00:45Z</dc:date>
    </item>
  </channel>
</rss>

