Executors Getting FORCE_KILL After Migration to GCE – Resource Scaling Not Helping

minhhung0507
Valued Contributor

Hi everyone,

We're facing a persistent issue with our production streaming pipelines where executors are being forcefully killed with the following error:

Executor got terminated abnormally due to FORCE_KILL

📌 Screenshots attached for reference.

Context:

  • Our pipelines create streaming tables using Delta Live Tables.

  • This issue only started happening after Databricks migrated from GKE to GCE.

  • We initially ran the job on 2 workers with 16 cores each, but due to failures, we tried scaling up gradually:

    • 3×16-core

    • 2×32-core (equivalent to 4×16-core)

    • and even 5×32-core workers.

  • Despite the aggressive scaling, executors still get force-killed.

  • When we monitor resource usage, we notice executors are only using ~70% CPU, and the job is killed before even completing the first batch.

Questions:

  1. Has anyone experienced a similar behavior after the move to GCE?

  2. What could be causing FORCE_KILL on relatively idle executors (only ~70% utilization)?

  3. Are there known configurations or cluster policies in GCE that could trigger such early termination?

  4. Could this be related to DLT’s retry policy or hidden limits at the infrastructure level?

Any insights or recommendations are greatly appreciated!

Thanks in advance.

Regards,
Hung Nguyen
2 REPLIES

thomas-totter
New Contributor III

We have been seeing the exact same issue very recently, but we are on Azure...

thomas-totter
New Contributor III

@minhhung0507 wrote:

We're facing a persistent issue with our production streaming pipelines where executors are being forcefully killed with the following error:

Executor got terminated abnormally due to FORCE_KILL

I solved the issue in our case and I think I now know why it happened in the first place. In our DLT workload the amount of state information is unusually high (at least I think so) compared to the total volume of data we process. I have since learned that RocksDB (where the state information is stored) operates outside the JVM, meaning it uses the worker's non-heap memory. If non-heap memory consumption grows too high, the worker process simply gets killed by its OS.
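To make that concrete, here is a rough sketch of the executor memory budget. The numbers are purely illustrative assumptions, not the actual values from our cluster:

    # Illustrative sketch only - adjust to your worker size.
    # Memory available to an executor ≈ JVM heap + memory overhead.
    spark.executor.memory            24g   # JVM heap
    spark.executor.memoryOverhead    4g    # non-heap; default is roughly max(384m, 10% of heap)
    # RocksDB state lives off-heap, i.e. inside that overhead (alongside what the OS needs).
    # If the state outgrows it, the OS / container runtime force-kills the executor,
    # even though JVM heap usage and CPU (e.g. ~70%) look far from exhausted.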

I changed several settings in spark.conf, so I can't tell you exactly which one solved our issue, or whether it was all of them in combination, but here is what I changed:

  1. Allocated more non-heap memory to the workers (see the Spark documentation).
  2. Limited RocksDB memory usage and tuned some other RocksDB-related settings.

The settings I'm talking about in (2) can be found here:
rocksdb-101-optimizing-stateful-streaming-in-apache-spark-with-amazon-emr-and-aws-glue 
I found this resource extremely helpful, both for the explanations it provides and for the suggested "defaults".
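For completeness, this is a minimal sketch of the kind of spark.conf entries involved. The property names are the standard Spark RocksDB state store settings (available in recent Spark/Databricks runtimes); the values are examples, not the exact ones we ended up with:

    # State store provider that keeps state in RocksDB (off-heap).
    spark.sql.streaming.stateStore.providerClass org.apache.spark.sql.execution.streaming.state.RocksDBStateStoreProvider
    # (1) Give the executor process more non-heap headroom.
    spark.executor.memoryOverhead 6g
    # (2) Cap RocksDB's total off-heap memory per executor.
    spark.sql.streaming.stateStore.rocksdb.boundedMemoryUsage true
    spark.sql.streaming.stateStore.rocksdb.maxMemoryUsageMB 2048
    # Optional write-buffer tuning from the same settings family.
    spark.sql.streaming.stateStore.rocksdb.writeBufferSizeMB 64
    spark.sql.streaming.stateStore.rocksdb.maxWriteBufferNumber 3

In DLT, if I remember correctly, these go into the pipeline's Configuration section (the key-value map in the pipeline settings) rather than a notebook-level spark.conf.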
