Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

How to resolve out of memory error?

Bujji
New Contributor II

Hi, I am working as an Azure support engineer.

While checking a pipeline failure, I found the error below:

"org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 72403.0 failed 4 times, most recent failure: Lost task 0.3 in stage 72403.0 (TID 801507, 10.139.64.5, executor 169): org.apache.spark.memory.SparkOutOfMemoryError: Unable to acquire 65536 bytes of memory, got 0"

Py4JJavaError                             Traceback (most recent call last)
<command-2313153849666105> in create_destination(location)
    154     try:
--> 155         sql_df = spark.sql(sql_query)
    156         break

/databricks/spark/python/pyspark/sql/session.py in sql(self, sqlQuery)
    708         """
--> 709         return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
    710

/databricks/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1304         return_value = get_return_value(
-> 1305             answer, self.gateway_client, self.target_id, self.name)
   1306

at org.apache.spark.memory.TaskMemoryManager.allocatePage(TaskMemoryManager.java:289)
at org.apache.spark.memory.MemoryConsumer.allocatePage(MemoryConsumer.java:116)
at org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.acquireNewPageIfNecessary(UnsafeExternalSorter.java:419)
at org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.insertRecord(UnsafeExternalSorter.java:443)
at org.apache.spark.sql.execution.UnsafeExternalRowSorter.insertRow(UnsafeExternalRowSorter.java:138)
at org.apache.spark.sql.execution.UnsafeExternalRowSorter.sort(UnsafeExternalRowSorter.java:241)
at org.apache.spark.sql.execution.SortExec$$anon$2.sortedIterator(SortExec.scala:133)
at org.apache.spark.sql.execution.SortExec$$anon$2.hasNext(SortExec.scala:147)
at org.apache.spark.sql.execution.window.WindowExec$$anon$1.fetchNextRow(WindowExec.scala:185)
at org.apache.spark.sql.execution.window.WindowExec$$anon$1.<init>(WindowExec.scala:194)
at org.apache.spark.sql.execution.window.WindowExec.$anonfun$doExecute$3(WindowExec.scala:168)
at org.apache.spark.sql.execution.window.WindowExec.$anonfun$doExecute$3$adapted(WindowExec.scala:167)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsWithIndexInternal$2(RDD.scala:866)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsWithIndexInternal$2$adapted(RDD.scala:866)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:60)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:356)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:320)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:60)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:356)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:320)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:60)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:356)
at org.apache.spark.rdd.RDD.$anonfun$getOrCompute$1(RDD.scala:369)
at org.apache.spark.storage.BlockManager.$anonfun$doPutIterator$6(BlockManager.scala:1414)
at org.apache.spark.storage.BlockManager.$anonfun$doPutIterator$6$adapted(BlockManager.scala:1412)
at org.apache.spark.storage.DiskStore.put(DiskStore.scala:70)
at org.apache.spark.storage.BlockManager.$anonfun$doPutIterator$1(BlockManager.scala:1412)

1 REPLY

Pat
Honored Contributor III

Hi @mahesh bmk,

It would be nice to see the sql_query.

Is there a window function used in it? The stack trace goes through SortExec and WindowExec, so the sort feeding a window function is the likely place where task memory runs out. You might also try running this on a bigger cluster.
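To illustrate the idea, here is a minimal PySpark sketch of the kind of change that often helps with this error, assuming the failing sql_query contains an unpartitioned window function. The table and column names (events, customer_id, event_ts) are made up, since the actual query was not posted:

# A minimal sketch, assuming the failing sql_query uses a window function.
# Table and column names below are hypothetical; the real query is unknown.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# An unpartitioned window such as OVER (ORDER BY event_ts) forces Spark to
# sort the whole dataset inside a single task, which can exhaust task memory.
# Adding PARTITION BY lets each task sort only one partition's rows.
sql_query = """
    SELECT
        customer_id,
        event_ts,
        ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY event_ts) AS rn
    FROM events
"""
sql_df = spark.sql(sql_query)

# If individual partitions are still large, increasing the number of shuffle
# partitions spreads the sort across more, smaller tasks.
spark.conf.set("spark.sql.shuffle.partitions", "800")

If the window genuinely has to span the whole dataset, then worker nodes with more memory per executor remain the main option, as suggested above.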
