Community Platform Discussions
Connect with fellow community members to discuss general topics related to the Databricks platform, industry trends, and best practices. Share experiences, ask questions, and foster collaboration within the community.

Best Approach for Handling ETL Processes in Databricks

itsmejoeyong
New Contributor II

I am currently managing nearly 300 tables from a production database and am considering moving the entire ETL process from Azure Data Factory to Databricks.

This process, which involves extraction, transformation, testing, and loading, is executed daily.

Given this context, I am unsure whether it's more efficient to:

  1. Create 300 individual notebooks or Python scripts, one for each table, which gives strong isolation and easier debugging when something breaks.
  2. Implement a single script with a loop that processes all tables, which simplifies management but makes debugging harder.

My questions are:

  1. Which approach would you recommend in this situation?
  2. Are there any better alternatives that I might be overlooking?
  3. Is there a real benefit to .py scripts vs. notebooks? I'm leaning toward notebooks, as I find them easier to debug (you can run things cell by cell) for any newbies we might onboard in the future.
  4. Is it optimal to create very long loops in Spark/Databricks?

Additional context:

  • Data is around 50GB.
  • We're using a Standard spark instance on Azure.
  • We're writing to ADLS Gen2.

Thank you for your insights!

1 ACCEPTED SOLUTION


Brahmareddy
Honored Contributor

Hi,

Instead of 300 individual files or one massive script, try grouping similar tables together. For example, you could have 10 scripts, each handling 30 tables. This way you get the best of both approaches: easy debugging without too many files to manage.
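A minimal sketch of the grouping idea, using hypothetical placeholder table names; each group of 30 would map to one script, notebook, or job:

```python
def chunk(items, size):
    """Yield successive fixed-size groups from a list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# Placeholder names standing in for the real 300 production tables.
tables = [f"table_{n:03d}" for n in range(300)]
groups = list(chunk(tables, 30))  # 10 groups of 30 tables each
```

Each group can then be handed to its own notebook or job, for example as a job parameter, so a failure in one group doesn't touch the others.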

Start with notebooks, and once everything's running smoothly, consider converting them into .py scripts.

One more tip: look into using Delta Lake in Databricks. It makes managing your data easier and more reliable.

Give it a try.
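Within each grouped script, wrapping every table in its own try/except keeps one failing table from aborting the other 29. A sketch, with a hypothetical `process_table` standing in for the real per-table extract/transform/load logic:

```python
def process_table(name):
    # Placeholder for the real per-table ETL logic; raises to simulate a failure.
    if name == "table_bad":
        raise ValueError("simulated failure")
    return f"{name}: loaded"

def run_group(tables):
    """Process each table independently, collecting failures instead of aborting."""
    succeeded, failed = [], []
    for name in tables:
        try:
            succeeded.append(process_table(name))
        except Exception as exc:
            failed.append((name, str(exc)))
    return succeeded, failed

succeeded, failed = run_group(["table_001", "table_bad", "table_002"])
```

At the end of the run you can report `failed` and re-run only those tables, which keeps debugging close to the per-file approach without 300 files.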


REPLIES


Thank you Brahmareddy!

Not too sure why I never thought of that 🙄!

You are welcome, Joeyong! Good day.
