I am currently managing nearly 300 tables from a production database and am considering moving the entire ETL process from Azure Data Factory to Databricks.
This process, which involves extraction, transformation, testing, and loading, is executed daily.
Given this context, I am unsure whether it's more efficient to:
- Create 300 individual notebooks or Python scripts, one for each table, providing great isolation and easier debugging if something breaks.
- Implement a single script with a loop that processes all tables (rough sketch below), potentially simplifying management but making debugging more complex.
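
To make the second option concrete, here's a rough sketch of what I have in mind. The table list, JDBC connection details, and storage paths are just placeholders, not our real setup:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical config: one entry per table (in practice ~300 entries,
# probably loaded from a config file or a control table)
tables = [
    {"source": "dbo.customers", "target": "abfss://lake@mystorage.dfs.core.windows.net/bronze/customers"},
    {"source": "dbo.orders",    "target": "abfss://lake@mystorage.dfs.core.windows.net/bronze/orders"},
]

jdbc_url = "jdbc:sqlserver://myserver.database.windows.net:1433;database=proddb"  # placeholder

for t in tables:
    try:
        # Extract from the production database over JDBC
        df = (spark.read
              .format("jdbc")
              .option("url", jdbc_url)
              .option("dbtable", t["source"])
              .option("user", "<user>")          # would come from a secret scope in practice
              .option("password", "<password>")
              .load())

        # Per-table transformations and tests would go here

        # Load into ADLS Gen2 as Delta
        df.write.mode("overwrite").format("delta").save(t["target"])
    except Exception as e:
        # Log and continue so one bad table doesn't kill the whole daily run
        print(f"Failed to process {t['source']}: {e}")
```
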
My questions are:
- Which approach would you recommend in this situation?
- Are there any better alternatives that I might be overlooking?
- Is there a real benefit to .py scripts over notebooks? I'm considering sticking with notebooks, as I find them easier to debug (you can run things cell by cell), which would help any newbies we might be onboarding in the future.
- Is it efficient to run very long loops like this in Spark/Databricks?
Additional context:
- Data is around 50GB.
- We're using a Standard Spark instance on Azure.
- We're writing to ADLS Gen2.
Thank you for your insights!