Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Failed job with "A fatal error has been detected by the Java Runtime Environment"

Volker
New Contributor III

Hi community,

I have a question regarding an error that I get sometimes when running a job.

#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00007fc941e74996, pid=940, tid=0x00007fc892dff640
#
# JRE version: OpenJDK Runtime Environment (Zulu 8.72.0.17-CA-linux64) (8.0_382-b05) (build 1.8.0_382-b05)
# Java VM: OpenJDK 64-Bit Server VM (25.382-b05 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# J 33375 C2 org.apache.spark.unsafe.types.UTF8String.hashCode()I (18 bytes) @ 0x00007fc941e74996 [0x00007fc941e748a0+0xf6]
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /databricks/driver/hs_err_pid940.log
#
# If you would like to submit a bug report, please visit:
#   http://www.azul.com/support/
#

Why do I get this error, and why does the job succeed when I restart it? Is there a way to remediate this issue?
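The crash report above notes that core dumps are disabled and suggests `ulimit -c unlimited`. One way to capture a core file for a future crash is a cluster-scoped init script; the sketch below is a hypothetical script (the mechanism is an assumption, not something confirmed in this thread):

```shell
#!/bin/bash
# Hypothetical cluster init script: raise the core-dump size limit,
# as suggested by the "ulimit -c unlimited" hint in the hs_err report,
# so the JVM can write a core file if it crashes again.
ulimit -c unlimited

# Print the effective limit so the cluster logs show it took effect.
echo "core dump limit: $(ulimit -c)"
```

After the next crash, the resulting core file and the /databricks/driver/hs_err_pid*.log report could then be inspected together.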

4 REPLIES

Walter_C
Databricks Employee

Is this happening regularly? Is this using a job cluster or an all-purpose cluster?

Volker
New Contributor III

It has happened twice so far, but I only noticed it yesterday, so I cannot tell whether it happens regularly. This is using a job cluster.

Volker
New Contributor III

It has happened multiple times over the last week now. Do you have an idea what could be causing this problem?

Volker
New Contributor III

In the last run there was additional information in the error message:

#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00007f168e094210, pid=1002, tid=0x00007f15dd1ff640
#
# JRE version: OpenJDK Runtime Environment (Zulu 8.72.0.17-CA-linux64) (8.0_382-b05) (build 1.8.0_382-b05)
# Java VM: OpenJDK 64-Bit Server VM (25.382-b05 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# J 33949 C2 org.apache.spark.unsafe.types.UTF8String.hashCode()I (18 bytes) @ 0x00007f168e094210 [0x00007f168e0940e0+0x130]
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /databricks/driver/hs_err_pid1002.log
Compiled method (c2)  335210 34051       4       org.apache.spark.sql.catalyst.util.ArrayData::foreach (64 bytes)
 total in heap  [0x00007f168d6b4610,0x00007f168d6b52b0] = 3232
 relocation     [0x00007f168d6b4738,0x00007f168d6b47b8] = 128
 main code      [0x00007f168d6b47c0,0x00007f168d6b4ca0] = 1248
 stub code      [0x00007f168d6b4ca0,0x00007f168d6b4ce8] = 72
 oops           [0x00007f168d6b4ce8,0x00007f168d6b4d00] = 24
 metadata       [0x00007f168d6b4d00,0x00007f168d6b4dc0] = 192
 scopes data    [0x00007f168d6b4dc0,0x00007f168d6b50f8] = 824
 scopes pcs     [0x00007f168d6b50f8,0x00007f168d6b51d8] = 224
 dependencies   [0x00007f168d6b51d8,0x00007f168d6b51e8] = 16
 handler table  [0x00007f168d6b51e8,0x00007f168d6b5290] = 168
 nul chk table  [0x00007f168d6b5290,0x00007f168d6b52b0] = 32
#
# If you would like to submit a bug report, please visit:
#   http://www.azul.com/support/
#
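Both crash reports point at C2-compiled frames (`UTF8String.hashCode` in the problematic frame, `ArrayData::foreach` in the compiled-method dump), which is a typical signature of a JIT compiler miscompilation rather than an application bug. A common JVM-level workaround is to exclude the offending method from JIT compilation; the fragment below is a sketch of how that might look as Spark configuration (whether this exclusion is appropriate for this particular job is an assumption):

```
# Hypothetical Spark config (cluster > Advanced options > Spark):
# tell HotSpot not to JIT-compile the method seen in the crash report.
spark.driver.extraJavaOptions   -XX:CompileCommand=exclude,org/apache/spark/unsafe/types/UTF8String.hashCode
spark.executor.extraJavaOptions -XX:CompileCommand=exclude,org/apache/spark/unsafe/types/UTF8String.hashCode
```

Running the excluded method interpreted is slower but avoids the miscompiled code path; upgrading to a newer Databricks Runtime, and with it a newer JDK, may be the longer-term fix.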
