11-18-2023 07:28 AM
The traditional advice seems to be to run the driver on on-demand and, optionally, the workers on spot. And this is indeed what happens if one chooses to run with spot instances in Databricks. But I am interested in what happens if we run with a driver which gets evicted: can we end up with corrupt data?
We have some batch jobs which run as Structured Streaming jobs every night. They seem like prime candidates to run on 100% spot with retries, but I want to understand why this is not a more common pattern first.
11-19-2023 11:33 PM
Hi @Erik, Certainly! Let’s delve into the behaviour of driver and worker nodes in Databricks, especially when it comes to spot instances:

Driver Node Failure: The driver hosts the SparkContext and coordinates the whole application. If the driver node is lost (for example, reclaimed by the cloud provider), the cluster terminates and the running job fails; it must be restarted rather than recovered in place.

Worker Node Failure: Worker (executor) loss is routine for Spark. Lost tasks are re-scheduled on surviving executors and lost partitions are recomputed from lineage, so the job continues.

Spot Instances for Workers: Because worker loss is recoverable, spot instances are a good fit for workers: you get significant cost savings, and a reclaimed worker only costs some recomputation time.

Structured Streaming Jobs: Structured Streaming persists its progress (offsets and state) to a checkpoint location. After a failure, a restarted query resumes from the last committed batch, which, with replayable sources and idempotent or transactional sinks, is designed to give exactly-once results.

Why Not More Common?: Running the driver on spot means any reclamation kills the whole job, not just part of it. For long-running or latency-sensitive workloads, the cost of full restarts usually outweighs the spot discount on a single node, so the driver is normally kept on demand.
In summary, consider using spot instances for workers while ensuring the driver runs on demand. This approach strikes a balance between cost efficiency and reliability.
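As a concrete illustration of the "driver on demand, workers on spot" pattern, here is a sketch of an AWS cluster configuration using the `aws_attributes` fields from the Databricks Clusters API (the node type and worker count are placeholder values):

```json
{
  "num_workers": 8,
  "node_type_id": "i3.xlarge",
  "aws_attributes": {
    "first_on_demand": 1,
    "availability": "SPOT_WITH_FALLBACK",
    "spot_bid_price_percent": 100
  }
}
```

Setting `first_on_demand` to 1 pins the first node of the cluster, which is the driver, to on-demand capacity, while `SPOT_WITH_FALLBACK` lets the workers use spot and fall back to on-demand if spot capacity is unavailable.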
Happy streaming! 🌟
11-24-2023 01:08 AM
Thanks for your answer @Kaniz_Fatma! Good overview, and I understand that "driver on-demand and the rest on spot" is good general advice. But I am still considering using spot instances for both, and I am left with two concrete questions:
1: Can we end up in a corrupt state if the driver is reclaimed? There are many other scenarios in which a driver can crash, be turned off, etc., so I assume Spark is written to handle this without eating our data, is this correct? (I understand that software can have bugs; my question is whether Spark is **intended** to handle a driver failure without corrupting data, not whether you can guarantee that it will actually work in all cases.)
2: If we use Databricks Workflows with retries on the job, and a driver gets reclaimed, will the job get retried? And does it count towards the max retries?
11-26-2023 11:49 PM
Hi @Erik,
Certainly! Let’s delve into your questions about Spark and Databricks workflows:
Driver Reclamation and Data Integrity: Yes, this is a designed-for failure mode. Structured Streaming records its progress in a write-ahead commit log inside the checkpoint location, and a batch only counts as committed once it has completed. If the driver dies mid-batch (crash, reclamation, or otherwise), a restarted query replays the uncommitted batch from the checkpoint. With replayable sources (e.g. Kafka, files) and idempotent or transactional sinks (e.g. Delta tables), this is intended to give exactly-once results rather than corruption, although, as you note, no implementation can guarantee the absence of bugs.

Databricks Workflows and Retries: A reclaimed driver terminates the cluster, so the run fails like any other job failure. If the task has a retry policy configured, the failed run is retried, and yes, each such retry counts towards the configured maximum number of retries.
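For reference, a task's retry behaviour is configured with these fields from the Databricks Jobs API (the task key is a placeholder):

```json
{
  "task_key": "nightly_stream",
  "max_retries": 3,
  "min_retry_interval_millis": 60000,
  "retry_on_timeout": false
}
```

With this sketch, a run that fails because the driver was reclaimed would be retried up to three times, waiting at least a minute between attempts.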
Remember that while Spark and Databricks provide robust mechanisms for handling failures, it’s essential to design your workflows carefully and consider factors like checkpointing, data durability, and fault tolerance to ensure data integrity and reliability. 🚀
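The checkpoint-and-replay behaviour described above can be sketched with a toy model (plain Python, not Spark's actual implementation): a "query" processes a stream in batches, commits the offset to a checkpoint only after the sink write succeeds, and an idempotent sink (keyed by offset) makes replaying a half-finished batch after a driver crash harmless.

```python
import json
import os
import tempfile

def run_query(source, checkpoint_path, sink, crash_after_batch=None):
    """Process `source` in batches of 2, committing the offset to the
    checkpoint only after the sink write succeeds (a toy model of
    Structured Streaming's write-ahead commit log)."""
    # Recover the last committed offset, if any.
    offset = 0
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as f:
            offset = json.load(f)["offset"]
    batches_done = 0
    while offset < len(source):
        batch = source[offset:offset + 2]
        # Idempotent sink: keyed by offset, so a replayed batch
        # overwrites rather than duplicates.
        for i, record in enumerate(batch):
            sink[offset + i] = record
        if crash_after_batch is not None and batches_done + 1 == crash_after_batch:
            # Simulate the driver dying *before* the commit is written.
            raise RuntimeError("driver reclaimed")
        offset += len(batch)
        with open(checkpoint_path, "w") as f:
            json.dump({"offset": offset}, f)
        batches_done += 1

source = ["a", "b", "c", "d", "e", "f"]
sink = {}
ckpt = os.path.join(tempfile.mkdtemp(), "commits.json")

try:
    run_query(source, ckpt, sink, crash_after_batch=2)  # dies mid-run
except RuntimeError:
    pass
run_query(source, ckpt, sink)  # the "retry" resumes from the checkpoint

# Despite the crash and the replayed batch, each record lands exactly once.
print(sorted(sink.values()))  # → ['a', 'b', 'c', 'd', 'e', 'f']
```

The essential design point is the commit order: write to the sink first, then advance the checkpoint, so a crash can only ever cause a replay, never a skip, and an idempotent sink turns the replay into a no-op.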