Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Photon-enabled UC cluster has less executor memory (about 1/4th) compared to a normal cluster.

Einsatz
New Contributor

I have a Unity Catalog-enabled cluster with node type Standard_DS4_v2 (28 GB Memory, 8 Cores). When the "Use Photon Acceleration" option is disabled, spark.executor.memory is 18409m, but if I enable Photon Acceleration it shows spark.executor.memory as 4602m. Because of this, most of the code I have written fails with the following error:

org.apache.spark.memory.SparkOutOfMemoryError: Photon ran out of memory while executing this query.

Photon Enabled Cluster:

  • Spark Version: 13.3.x-photon-scala2.12
  • Executor Memory: 4602m

Photon Disabled Cluster:

  • Spark Version: 13.3.x-scala2.12
  • Executor Memory: 18409m

My questions (a quick way to check these values is sketched below):

  1. Why does enabling Photon reduce the executor memory?
  2. Is there a way to keep spark.executor.memory at 18409m with the Photon feature enabled?
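For reference, a minimal way to check this value from a notebook attached to each cluster (spark is the session object Databricks provides automatically):

    # Effective on-heap executor memory for this cluster.
    # Reported above: 18409m with Photon disabled, 4602m with Photon enabled.
    print(spark.conf.get("spark.executor.memory"))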
4 REPLIES

Walter_C
Databricks Employee

Enabling Photon Acceleration on your Databricks cluster reduces the available on-heap executor memory because Photon uses a different memory management strategy than standard Spark: it reserves a large share of the node's memory outside the JVM heap for its native engine. Photon is designed to use the underlying hardware more efficiently, but this comes at the cost of a smaller spark.executor.memory allocation for Spark executors.

To address the issue of reduced executor memory when Photon is enabled, you can try the following approaches:

  1. Increase the Node Size: Upgrade your cluster to use larger node types with more memory. For example, you can switch from Standard_DS4_v2 to Standard_DS5_v2, which provides more memory and CPU resources.

  2. Adjust Spark Configuration: You can fine-tune Spark configurations to reduce memory pressure. For instance, increasing the number of shuffle partitions distributes the workload across more, smaller tasks, which reduces memory pressure on individual executors. You can set this configuration at the cluster level:

    spark.sql.shuffle.partitions 1000
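For a quick experiment before changing the cluster config, the session-level equivalent can be set from a notebook; this sketch affects only queries run in the current session, while the cluster-level setting above applies to every workload on the cluster:

    # Raise shuffle parallelism for this session only.
    spark.conf.set("spark.sql.shuffle.partitions", 1000)
    print(spark.conf.get("spark.sql.shuffle.partitions"))  # 1000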

VZLA
Databricks Employee

@Einsatz thanks for your question!

1) Why does enabling Photon reduce the executor memory?
Photon allocates a significant portion of memory off-heap for its C++ engine. As a result, the on-heap memory (shown by spark.executor.memory) appears lower once Photon is enabled.

2) Is there a way to keep spark.executor.memory at 18409m when Photon is enabled?
Not directly. You must either increase your node’s total memory (e.g., choose a larger instance type) or adjust off-heap allocations to accommodate Photon’s requirements.

Photon is a separate C++ engine embedded within Spark to accelerate certain SQL workloads, so it requires its own memory space. You can either provision extra memory for Photon or run those queries on the regular Spark engine with full on-heap capacity; either way there is a cost you need to balance and account for.
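To see how a given cluster is configured, a minimal sketch is below. The keys are standard Apache Spark configuration names; whether the off-heap keys are set on your Databricks runtime (and which Photon-specific keys exist) varies by version, so a "<not set>" result is normal:

    # Inspect memory-related settings on the running cluster.
    for key in [
        "spark.executor.memory",         # on-heap (JVM) executor memory
        "spark.memory.offHeap.enabled",  # OSS Spark off-heap toggle
        "spark.memory.offHeap.size",     # OSS Spark off-heap size
    ]:
        print(key, "=", spark.conf.get(key, "<not set>"))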

Einsatz
New Contributor

@VZLA @Walter_C Thanks for the quick answers! I understand that the Photon engine requires memory for its optimization tasks and that this memory usage reduces the executor memory.

I’ve got a few more questions, and I’d really appreciate it if you could help me out.

  1. Is the memory allocated to the Photon engine fixed, or is it based on a percentage of the node’s total memory?
  2. How can I calculate the value of spark.executor.memory based on a specific node type? I’ve gone through some articles to understand Spark's memory allocation, but the results don’t match the spark.executor.memory value set by Databricks.
  3. I need clarification on how memory is allocated and the memory values displayed on different tabs of the Databricks Spark UI. Below are the configuration values for my node type, Standard_DS4_v2 (28GB RAM, 8 cores).
    1. What does the 'Storage Memory' column in the Spark UI -> Executors represent? In my case it shows 9.4GB. I assume this is half of 18409m, so does it indicate only the storage memory portion of the executor, i.e. 50% of the total executor memory? If so, can I conclude that the remaining 9.4GB is used for execution memory?
    2. What is spark.executor.memory (18409m = 17.97GB)? How can I calculate this value based on a specific node type X (similar to question 2 asked above)?
    3. What does the 'Memory' column in the Spark compute UI - Master -> Workers represent? It's showing 22.5GiB (18.0GiB used). I assume 18.0GiB corresponds to 18409m, but what does the 22.5GiB indicate, considering the node memory is 28GB?
    4. What does the 'Memory per Executor' column in the Spark compute UI - Master -> Running Applications refer to? It shows 18409m. Is this the same as the value in question 2 above?

Walter_C
Databricks Employee

The memory allocated to the Photon engine is not fixed; it is based on a percentage of the node’s total memory.

To calculate the value of spark.executor.memory based on a specific node type, you can use the following formula:

container_size = (vm_size * 0.97 - 4800MB)
spark.executor.memory = (0.8 * container_size)

For your node type, Standard_DS4_v2 (28GB RAM, 8 cores), the calculation would be:

container_size = (28GB * 0.97 - 4800MB)
spark.executor.memory = (0.8 * container_size)

This results in approximately 17.97GB (18409m).
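As a sanity check, here is the arithmetic, assuming vm_size is expressed in MiB-based megabytes (28GB = 28 * 1024 MB):

    # Worked example of the formula for Standard_DS4_v2 (28 GB RAM).
    vm_size_mb = 28 * 1024                     # 28672 MB
    container_size = vm_size_mb * 0.97 - 4800  # ~23012 MB (~22.5 GiB)
    executor_memory = 0.8 * container_size     # ~18409 MB (~17.97 GB)
    print(round(container_size), round(executor_memory))  # 23012 18409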

Regarding the 'Storage Memory' column in the Spark UI -> Executors tab: it represents the memory available for storage (caching) within the executor. In your case it shows 9.4GB, roughly half of the total executor memory (18409m). Note, though, that in Spark's unified memory model storage and execution share a single region: the 50% split (spark.memory.storageFraction) is a soft boundary, so execution can borrow unused storage memory and vice versa, rather than each half being hard-capped.
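For comparison, open-source Spark derives the Executors-tab total from its unified memory region as (heap - 300MB) * spark.memory.fraction; Databricks runtimes tune these fractions, so the constants in the sketch below are OSS defaults and assumptions, not necessarily the values your cluster uses:

    # Unified memory region under OSS Spark defaults (assumed values).
    heap_mb = 18409
    reserved_mb = 300       # fixed reserved memory in OSS Spark
    memory_fraction = 0.6   # OSS default for spark.memory.fraction
    storage_fraction = 0.5  # soft boundary, spark.memory.storageFraction
    unified_mb = (heap_mb - reserved_mb) * memory_fraction
    print(round(unified_mb))                     # ~10865 MB shared by storage and execution
    print(round(unified_mb * storage_fraction))  # ~5433 MB storage share before borrowing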

The 'Memory' column in the Spark compute UI - Master -> Workers represents the total memory the worker daemon can hand out to executors. The 22.5GiB matches the container size from the formula above (28GB * 0.97 - 4800MB ≈ 23012MB ≈ 22.5GiB); the 18.0GiB used corresponds to the spark.executor.memory value (18409m), and the rest of the node's 28GB is consumed by the OS, Databricks services, and other overhead.

The 'Memory per Executor' column in the Spark compute UI - Master -> Running Applications refers to the memory allocated per executor, which in your case is 18409m. This is the same value as the spark.executor.memory calculated above.
