<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Databricks cell-level code parallel execution through the Python threading library in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/databricks-cell-level-code-parallel-execution-through-the-python/m-p/67674#M33411</link>
    <description>&lt;P&gt;Hi Team,&lt;/P&gt;&lt;P&gt;We are planning to implement parallel execution of Databricks cell-level code using the Python threading library, and we would like to understand how cluster resources are consumed and allocated under this approach. Are there any potential resource implications or challenges if we proceed with this method?&lt;/P&gt;&lt;P&gt;Below is the code snippet for your reference.&lt;/P&gt;&lt;PRE&gt;import threading

# Helper that runs a SQL statement on the cluster
def table_creation(sql_statement):
    spark.sql(sql_statement)

s1 = """CREATE TABLE a1(time timestamp)"""
s2 = """CREATE TABLE b1(time timestamp)"""

try:
    # Run both CREATE TABLE statements concurrently
    notebook_a_thread = threading.Thread(target=table_creation, args=(s1,))
    notebook_b_thread = threading.Thread(target=table_creation, args=(s2,))

    notebook_a_thread.start()
    notebook_b_thread.start()

    # Wait for both statements to finish
    notebook_a_thread.join()
    notebook_b_thread.join()
except Exception as e:
    print(e)&lt;/PRE&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Janga&lt;/P&gt;</description>
    <pubDate>Tue, 30 Apr 2024 13:06:45 GMT</pubDate>
    <dc:creator>Phani1</dc:creator>
    <dc:date>2024-04-30T13:06:45Z</dc:date>
    <item>
      <title>Databricks cell-level code parallel execution through the Python threading library</title>
      <link>https://community.databricks.com/t5/data-engineering/databricks-cell-level-code-parallel-execution-through-the-python/m-p/67674#M33411</link>
      <description>&lt;P&gt;Hi Team,&lt;/P&gt;&lt;P&gt;We are planning to implement parallel execution of Databricks cell-level code using the Python threading library, and we would like to understand how cluster resources are consumed and allocated under this approach. Are there any potential resource implications or challenges if we proceed with this method?&lt;/P&gt;&lt;P&gt;Below is the code snippet for your reference.&lt;/P&gt;&lt;PRE&gt;import threading

# Helper that runs a SQL statement on the cluster
def table_creation(sql_statement):
    spark.sql(sql_statement)

s1 = """CREATE TABLE a1(time timestamp)"""
s2 = """CREATE TABLE b1(time timestamp)"""

try:
    # Run both CREATE TABLE statements concurrently
    notebook_a_thread = threading.Thread(target=table_creation, args=(s1,))
    notebook_b_thread = threading.Thread(target=table_creation, args=(s2,))

    notebook_a_thread.start()
    notebook_b_thread.start()

    # Wait for both statements to finish
    notebook_a_thread.join()
    notebook_b_thread.join()
except Exception as e:
    print(e)&lt;/PRE&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Janga&lt;/P&gt;</description>
      <pubDate>Tue, 30 Apr 2024 13:06:45 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/databricks-cell-level-code-parallel-execution-through-the-python/m-p/67674#M33411</guid>
      <dc:creator>Phani1</dc:creator>
      <dc:date>2024-04-30T13:06:45Z</dc:date>
    </item>
  </channel>
</rss>