Another approach you might consider is creating a template notebook that queries a known date range, parameterized with two widgets: a start date and an end date. From there you could use Databricks Jobs to pass different parameter values for each run; each run spins up its own job cluster, so runs covering different date ranges can execute in parallel.
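As a minimal sketch of the driver side, the helper below splits a known date range into per-run chunks; each pair would become the `notebook_params` for one triggered run (the widget names `start_date`/`end_date` and the chunk size are assumptions for illustration):

```python
from datetime import date, timedelta

def date_chunks(start, end, days_per_run):
    """Split [start, end] into consecutive (chunk_start, chunk_end) pairs,
    one per job run, each covering at most days_per_run days."""
    chunks = []
    cur = start
    while cur <= end:
        chunk_end = min(cur + timedelta(days=days_per_run - 1), end)
        chunks.append((cur.isoformat(), chunk_end.isoformat()))
        cur = chunk_end + timedelta(days=1)
    return chunks

# Each pair maps to one job run's notebook parameters, e.g.
# {"start_date": s, "end_date": e} passed when triggering the job.
runs = date_chunks(date(2023, 1, 1), date(2023, 1, 31), 7)

# Inside the template notebook, the widgets would be read like:
#   dbutils.widgets.text("start_date", "")
#   dbutils.widgets.text("end_date", "")
#   start = dbutils.widgets.get("start_date")
#   end = dbutils.widgets.get("end_date")
```

Triggering one run per chunk (for example via the Jobs "run now" API or separate tasks in a job) is what lets the date ranges process concurrently on independent clusters.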