Hi @iskidet_glenny
Yes, running multiple instances of a Databricks job with different parameters is a common and solid approach, especially for backfilling data.
The usual pattern is to define a single job and pass different parameters on each run, rather than creating a separate job for every backfill window. You then trigger the runs through the Databricks Jobs API, which lets you kick off many runs at once, each with its own parameter values.
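Here's a minimal sketch of that pattern against the Jobs REST API (`/api/2.1/jobs/run-now`). The workspace URL, token, job ID, and the `backfill_date` parameter name are placeholders for your own setup; `job_parameters` assumes your job defines job-level parameters (if it uses notebook task parameters instead, pass `notebook_params`):

```python
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"  # your workspace URL
TOKEN = "<personal-access-token>"                        # or a service principal token
JOB_ID = 123                                             # the single job you defined once

# One backfill date per run; each run gets its own parameter value.
backfill_dates = ["2024-01-01", "2024-01-02", "2024-01-03"]

run_ids = []
for d in backfill_dates:
    resp = requests.post(
        f"{HOST}/api/2.1/jobs/run-now",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"job_id": JOB_ID, "job_parameters": {"backfill_date": d}},
    )
    resp.raise_for_status()
    run_ids.append(resp.json()["run_id"])

print(run_ids)  # keep these so you can check each run's status later
```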
The runs execute in parallel, independently of one another (make sure the job's max_concurrent_runs setting is raised above the default of 1, otherwise the extra runs won't actually run concurrently). If they all write to the same destination, design the writes so the runs don't clobber or overwrite each other, for example by having each run touch only its own partition.
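One way to keep runs from stepping on each other is to scope each write to the run's own date slice. This is a sketch only, assuming a Delta table partitioned by an `event_date` column that matches the backfill parameter, and hypothetical table names:

```python
# Inside the job's task: each run rewrites only its own date slice.
backfill_date = dbutils.widgets.get("backfill_date")  # assumes the parameter is exposed as a widget

df = spark.read.table("raw.events").where(f"event_date = '{backfill_date}'")

(df.write.format("delta")
   .mode("overwrite")
   .option("replaceWhere", f"event_date = '{backfill_date}'")  # overwrite only this slice
   .saveAsTable("curated.events"))
```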
A few things to watch out for: your clusters should be able to handle the load if you're launching many runs at the same time, and it's always a good idea to check each run's logs and status so you catch issues early.
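You can check the runs programmatically with the same API (`/api/2.1/jobs/runs/get`). A sketch, reusing the `HOST`, `TOKEN`, and `run_ids` from the trigger snippet above:

```python
import time
import requests

def wait_for_runs(run_ids):
    """Poll each run until it reaches a terminal state and print the result."""
    for run_id in run_ids:
        while True:
            resp = requests.get(
                f"{HOST}/api/2.1/jobs/runs/get",
                headers={"Authorization": f"Bearer {TOKEN}"},
                params={"run_id": run_id},
            )
            resp.raise_for_status()
            state = resp.json()["state"]
            if state["life_cycle_state"] in ("TERMINATED", "SKIPPED", "INTERNAL_ERROR"):
                print(run_id, state.get("result_state", state["life_cycle_state"]))
                break
            time.sleep(30)  # poll every 30 seconds until the run finishes
```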