Hi, I work with both, so it depends on the use case.
- ADF is easy to set up and good for data integration, e.g. a "Copy data" activity to transfer files from storage account 1 to storage account 2
- ADF data flows (data transformations) work up to a point, but when the transformations get more complex, I recommend using Databricks notebooks with PySpark code (see the sketch after this list)
- I am not sure how much effort Microsoft will put into ADF data flows, as Fabric has Dataflow Gen2, which is completely different from the data flows in ADF
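To give you an idea of what I mean by "more complex transformations", here is a minimal PySpark sketch of the kind of logic (window-based deduplication plus an aggregation) that gets painful to click together in ADF data flows but is straightforward in a notebook. All table and column names are made up for illustration:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

# In a Databricks notebook, `spark` is already provided; this line just
# makes the sketch self-contained.
spark = SparkSession.builder.getOrCreate()

orders = spark.read.table("raw.orders")  # hypothetical source table

# Keep only the latest version of each order, then aggregate per customer.
latest_first = Window.partitionBy("order_id").orderBy(F.col("updated_at").desc())
deduped = (
    orders
    .withColumn("rn", F.row_number().over(latest_first))
    .filter(F.col("rn") == 1)
    .drop("rn")
)

revenue = deduped.groupBy("customer_id").agg(F.sum("amount").alias("total_revenue"))
revenue.write.mode("overwrite").saveAsTable("curated.customer_revenue")
```

On top of the window functions and custom Python logic, you can also unit test this code, which you can't really do with data flow expressions.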
So, for easy low-code data ingestion and moderate data transformations I recommend ADF, and for more extensive use cases I recommend Databricks workflows.
You can also combine both (an ADF pipeline that runs a Databricks notebook), but then you have multiple Azure services to take care of in terms of version control and change management.
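If you go the combined route, the ADF side can at least be kept in code instead of being clicked together in the portal, which helps with the version control point. Here is a rough sketch using the azure-mgmt-datafactory Python SDK to define a pipeline with a Databricks notebook activity; the subscription, resource names, linked service, and notebook path are all placeholders, and the exact model signatures depend on your SDK version:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    DatabricksNotebookActivity,
    LinkedServiceReference,
    PipelineResource,
)

# Placeholder subscription ID -- replace with your own.
adf_client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

notebook_activity = DatabricksNotebookActivity(
    name="RunTransformNotebook",
    notebook_path="/Shared/transform_orders",  # hypothetical notebook
    linked_service_name=LinkedServiceReference(
        type="LinkedServiceReference",
        reference_name="AzureDatabricksLinkedService",  # placeholder linked service
    ),
)

pipeline = PipelineResource(activities=[notebook_activity])
adf_client.pipelines.create_or_update(
    "my-resource-group", "my-data-factory", "IngestAndTransform", pipeline
)
```

That way the pipeline definition can live in the same repo as your notebooks, which keeps the change management in one place.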