Hey everyone, I'm experimenting with running containerized PySpark jobs on Databricks, orchestrated with Airflow. However, I'm running into an issue: when I trigger an Airflow DAG and look at the logs, I see Airflow spinning up multiple jobs on its very first try.
It's really strange. I've attached a screenshot: in this instance the job should run exactly once, but three jobs are being launched.
Has anyone run into this before, and is there a known fix? I can share cluster and pool config details on request.
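For context, here's a stripped-down sketch of roughly how the DAG is set up. All the names, paths, and cluster specs below are placeholders, not my actual config; I've included the settings (`catchup`, `max_active_runs`, `retries`) that I understand can cause duplicate runs when left at their defaults:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import (
    DatabricksSubmitRunOperator,
)

# Placeholder DAG -- simplified version of my setup, not the real config.
with DAG(
    dag_id="pyspark_job_dag",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,      # avoid backfilled runs for past schedule intervals
    max_active_runs=1,  # only one DAG run active at a time
) as dag:
    run_job = DatabricksSubmitRunOperator(
        task_id="run_pyspark_job",
        databricks_conn_id="databricks_default",
        json={
            "new_cluster": {
                "spark_version": "11.3.x-scala2.12",
                "node_type_id": "i3.xlarge",  # placeholder node type
                "num_workers": 2,
            },
            "spark_python_task": {
                "python_file": "dbfs:/path/to/job.py",  # placeholder path
            },
        },
        retries=0,  # no task retries, so one attempt should mean one job
    )
```

Even with settings like these, I'm still seeing multiple jobs per trigger, which is why I suspect something on the cluster/pool side.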