I would go with @Kaniz Fatma's approach: download the data with Data Factory and, on success, trigger a Databricks Spark notebook. Spark can also read compressed data directly, so you may not even need a separate unzip step.
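For illustration, here is a minimal PySpark sketch of reading compressed files directly in the triggered notebook. It assumes the files land as gzip-compressed CSV (Spark handles codecs like gzip and bzip2 transparently; ZIP archives would still need extra handling), and the path and options are placeholders, not from the original post:

```python
from pyspark.sql import SparkSession

# In a Databricks notebook `spark` already exists; builder shown for a self-contained sketch.
spark = SparkSession.builder.appName("read-compressed").getOrCreate()

# Read gzip-compressed CSVs written by Data Factory without a separate unzip step.
# "/mnt/landing/ingest/" is a hypothetical landing path.
df = (
    spark.read
    .option("header", "true")       # assuming the files carry a header row
    .option("inferSchema", "true")
    .csv("/mnt/landing/ingest/*.csv.gz")
)

df.show(5)
```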
My blog: https://databrickster.medium.com/