Get Started Discussions
Start your journey with Databricks by joining discussions on getting started guides, tutorials, and introductory topics. Connect with beginners and experts alike to kickstart your Databricks experience.

Best practices for optimizing Spark jobs

chris0991
New Contributor III

What are some best practices for optimizing Spark jobs in Databricks, especially when dealing with large datasets? Any tips or resources would be greatly appreciated! I'm analyzing data on restaurant menu prices, so insights there would be especially helpful!

2 REPLIES 2

-werners-
Esteemed Contributor III

There are so many.
Here are a few:
- look for data skew
- shuffle as little as possible
- avoid many small files
- use Spark APIs rather than pure Python
- if using an autoscaling cluster: check that you don't lose a lot of time scaling up and down

Jhon
New Contributor II

Great question! For optimizing Spark jobs in Databricks, try these tips:

  1. Efficient partitioning to reduce shuffle times.
  2. Caching for repeated datasets.
  3. Broadcast joins for small tables.
  4. Tune Spark configurations like spark.sql.shuffle.partitions.

If you're analyzing restaurant menu data, Ambersmenu.com.ph has some useful insights on organizing and optimizing such datasets. Worth checking out!

Hope this helps!
