<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Understanding Partitions in Spark Local Mode in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/understanding-partitions-in-spark-local-mode/m-p/13504#M8177</link>
    <description>&lt;P&gt;That is a lot of questions in one topic.&lt;/P&gt;&lt;P&gt;Let's give it a try:&lt;/P&gt;&lt;P&gt;[1] Partly. local[*] does use all available cores, but the number of partitions in a dataframe depends on the input source and the operations you run (think joins, unions, repartition, etc.), not only on the core count.&lt;/P&gt;&lt;P&gt;[2] In local mode, spark.default.parallelism defaults to the number of worker threads: local[*] uses all logical cores (a 6-core Mac with hyper-threading exposes 12), while local[N] uses exactly N.&lt;/P&gt;&lt;P&gt;[3] Adaptive Query Execution (AQE) is an optimization technique in Spark SQL that uses runtime statistics to choose the most efficient query execution plan. AQE re-optimizes shuffle stages at runtime; it does not decide the initial number of partitions of a file read, which is why you still see 12 vs. 7 with AQE on.&lt;/P&gt;&lt;P&gt;&lt;A href="https://spark.apache.org/docs/latest/sql-performance-tuning.html" target="_blank"&gt;https://spark.apache.org/docs/latest/sql-performance-tuning.html&lt;/A&gt;&lt;/P&gt;&lt;P&gt;[4] spark.version and sc.master are plain attributes cached on the driver-side objects, so reading them does not go through the stopped SparkContext; only operations that need a live context fail after stop().&lt;/P&gt;</description>
    <pubDate>Fri, 15 Oct 2021 08:24:59 GMT</pubDate>
    <dc:creator>-werners-</dc:creator>
    <dc:date>2021-10-15T08:24:59Z</dc:date>
    <item>
      <title>Understanding Partitions in Spark Local Mode</title>
      <link>https://community.databricks.com/t5/data-engineering/understanding-partitions-in-spark-local-mode/m-p/13502#M8175</link>
      <description>&lt;P&gt;I have a few fundamental questions about Spark 3 while running a simple Spark app on my local Mac machine (6 cores in total). Please help.&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;local[*] runs my Spark application in local mode with all the cores present on my Mac, correct? Does that also mean the dataframe will have as many partitions as the number of cores available to the master?&lt;/LI&gt;&lt;LI&gt;When I run with local[*], I get 12 as the defaultParallelism for SparkContext. When I run with any number, like local[2] or local[4], I get that same number as the defaultParallelism. Why is that? Does Spark calculate defaultParallelism as the total number of cores * 2 when it sees master as local[*], but otherwise keep it equal to the number I gave?&lt;/LI&gt;&lt;LI&gt;With or without Adaptive Query Execution enabled, I see 12 partitions in my dataframe when master is local[*], but 7 when master is local[4]. I thought AQE decides the correct number of partitions in Spark 3. Is that not right? And why 7 partitions when master is local[4]?&lt;/LI&gt;&lt;LI&gt;I am able to print the Spark version and sc.master even after stopping spark or sc. Why? Shouldn't I get an error that the connection is no longer active?&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;Below is the code:&lt;/P&gt;&lt;PRE&gt;from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").appName("test").getOrCreate()
print("Spark Version:", spark.version)

sc = spark.sparkContext
print('Master :', sc.master)
print('Default Parallelism :', sc.defaultParallelism)

print('AQE Enabled :', spark.conf.get('spark.sql.adaptive.enabled'))
spark.conf.set('spark.sql.adaptive.enabled', 'true')
print('AQE Enabled :', spark.conf.get('spark.sql.adaptive.enabled'))

df = spark.read.load("/Users/user/xyz.csv",
                     format="csv", sep=",", inferSchema="true", header="true")
print('No of partitions in the dataframe :', df.rdd.getNumPartitions())

print(sc.uiWebUrl)
spark.stop()

# These still work after stop() (question 4):
print("Spark Version:", spark.version)
print('Master :', sc.master)&lt;/PRE&gt;</description>
      <pubDate>Wed, 13 Oct 2021 23:02:55 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/understanding-partitions-in-spark-local-mode/m-p/13502#M8175</guid>
      <dc:creator>Personal1</dc:creator>
      <dc:date>2021-10-13T23:02:55Z</dc:date>
    </item>
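<!-- Editor's addition (an illustrative aside, not part of the original thread): question 3
above asks why the same read yields 12 partitions under local[*] but 7 under local[4]. A
plausible cause is the split-size arithmetic Spark applies when planning a file scan
(FilePartition.maxSplitBytes in the Spark source), which depends on defaultParallelism.
The sketch below models that arithmetic in plain Python; the ~900 MB file size is a
hypothetical value chosen to reproduce the 12-vs-7 counts.

```python
import math

# Approximate Spark's planned partition count for a file scan. Defaults mirror
# spark.sql.files.maxPartitionBytes (128 MB) and spark.sql.files.openCostInBytes (4 MB).
def planned_scan_partitions(total_bytes, num_files, default_parallelism,
                            max_partition_bytes=128 * 1024 * 1024,
                            open_cost_bytes=4 * 1024 * 1024):
    # Spread the data (plus a per-file "open cost") across the available threads.
    bytes_per_core = (total_bytes + num_files * open_cost_bytes) // default_parallelism
    # A split never exceeds maxPartitionBytes and never drops below the open cost.
    max_split_bytes = min(max_partition_bytes, max(open_cost_bytes, bytes_per_core))
    return math.ceil(total_bytes / max_split_bytes)

size = 896 * 1024 * 1024  # hypothetical ~900 MB CSV
print(planned_scan_partitions(size, 1, default_parallelism=12))  # 12, as with local[*]
print(planned_scan_partitions(size, 1, default_parallelism=4))   # 7, as with local[4]
```

Under this model a much smaller file would hit the 4 MB open-cost floor instead, so the
exact counts depend on the file size and the spark.sql.files.* settings.
-->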
    <item>
      <title>Re: Understanding Partitions in Spark Local Mode</title>
      <link>https://community.databricks.com/t5/data-engineering/understanding-partitions-in-spark-local-mode/m-p/13503#M8176</link>
      <description>&lt;P&gt;Hello there,&lt;/P&gt;&lt;P&gt;My name is Piper and I'm one of the community moderators. Thank you for your questions. They look like good ones! Let's see how the community responds first, and then we'll see if we need the team to follow up.&lt;/P&gt;</description>
      <pubDate>Thu, 14 Oct 2021 17:27:42 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/understanding-partitions-in-spark-local-mode/m-p/13503#M8176</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2021-10-14T17:27:42Z</dc:date>
    </item>
    <item>
      <title>Re: Understanding Partitions in Spark Local Mode</title>
      <link>https://community.databricks.com/t5/data-engineering/understanding-partitions-in-spark-local-mode/m-p/13504#M8177</link>
      <description>&lt;P&gt;That is a lot of questions in one topic.&lt;/P&gt;&lt;P&gt;Let's give it a try:&lt;/P&gt;&lt;P&gt;[1] Partly. local[*] does use all available cores, but the number of partitions in a dataframe depends on the input source and the operations you run (think joins, unions, repartition, etc.), not only on the core count.&lt;/P&gt;&lt;P&gt;[2] In local mode, spark.default.parallelism defaults to the number of worker threads: local[*] uses all logical cores (a 6-core Mac with hyper-threading exposes 12), while local[N] uses exactly N.&lt;/P&gt;&lt;P&gt;[3] Adaptive Query Execution (AQE) is an optimization technique in Spark SQL that uses runtime statistics to choose the most efficient query execution plan. AQE re-optimizes shuffle stages at runtime; it does not decide the initial number of partitions of a file read, which is why you still see 12 vs. 7 with AQE on.&lt;/P&gt;&lt;P&gt;&lt;A href="https://spark.apache.org/docs/latest/sql-performance-tuning.html" target="_blank"&gt;https://spark.apache.org/docs/latest/sql-performance-tuning.html&lt;/A&gt;&lt;/P&gt;&lt;P&gt;[4] spark.version and sc.master are plain attributes cached on the driver-side objects, so reading them does not go through the stopped SparkContext; only operations that need a live context fail after stop().&lt;/P&gt;</description>
      <pubDate>Fri, 15 Oct 2021 08:24:59 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/understanding-partitions-in-spark-local-mode/m-p/13504#M8177</guid>
      <dc:creator>-werners-</dc:creator>
      <dc:date>2021-10-15T08:24:59Z</dc:date>
    </item>
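<!-- Editor's addition (an illustrative aside, not part of the original thread): as the
reply notes, AQE works from runtime statistics rather than setting the initial read
partitioning. One concrete thing it does is coalesce small adjacent shuffle partitions
after a stage finishes, based on the sizes it observed. The plain-Python sketch below
models that coalescing step; the sizes and the 64 MB target (cf.
spark.sql.adaptive.advisoryPartitionSizeInBytes) are hypothetical.

```python
def coalesce_shuffle_partitions(sizes_bytes, target_bytes=64 * 1024 * 1024):
    """Greedily merge adjacent shuffle partitions up to roughly target_bytes,
    in the spirit of AQE's partition coalescing (simplified)."""
    merged = []
    current = 0
    for size in sizes_bytes:
        if current > 0 and current + size > target_bytes:
            merged.append(current)  # close the current coalesced partition
            current = 0
        current += size
    if current > 0:
        merged.append(current)  # last, possibly undersized, partition
    return merged

MB = 1024 * 1024
# 200 tiny 1 MB shuffle partitions collapse into a handful of ~64 MB ones.
print(len(coalesce_shuffle_partitions([1 * MB] * 200)))  # 4
```

This only kicks in after a shuffle, which is why a straight file read shows the same
partition count whether AQE is on or off.
-->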
  </channel>
</rss>

