
When working with large data sets in Databricks, what are best practices to avoid out-of-memory errors?

Suheb
New Contributor II

How can I optimize Databricks to handle large datasets without running into memory or performance problems?

4 REPLIES

ShaneCorn
New Contributor III

When working with large datasets in Databricks, it's crucial to follow best practices to avoid memory issues. First, optimize data partitioning to ensure that data is evenly distributed across workers. Use efficient data formats like Parquet for better compression and faster read/write operations. Leverage Sparkโ€™s caching and persisting capabilities selectively to store intermediate results in memory. Additionally, consider using Delta Lake for its ACID transaction support and incremental processing features, which can help manage large-scale data efficiently.
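A minimal PySpark sketch of those points, assuming a hypothetical table "events" with columns event_date and user_id (in a Databricks notebook, spark is the preconfigured SparkSession):

from pyspark.sql import functions as F

# Read from a Delta table and filter early so fewer files are scanned.
df = spark.read.table("events")
recent = df.filter(F.col("event_date") >= "2024-01-01")

# Cache selectively: only intermediate results that several actions reuse.
recent.cache()

daily_counts = recent.groupBy("event_date").count()
daily_counts.write.format("delta").mode("overwrite").saveAsTable("daily_event_counts")

# Write the filtered data back as Delta, partitioned by a frequently filtered column.
recent.write.format("delta").mode("overwrite").partitionBy("event_date").saveAsTable("events_recent")

# Release the memory once the cached DataFrame is no longer needed.
recent.unpersist()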

Coffee77
Contributor III

Here are my points as a summary:

- Try to use managed tables, as now recommended by Databricks, and prefer Delta Lake over plain Parquet/CSV for large datasets to get more efficient update, delete, and merge operations. You also gain benefits such as ACID transactions, schema evolution, time travel, and further built-in optimizations as the platform evolves.

- Try to use liquid clustering on large Delta tables (see the SQL sketch after this list). Liquid clustering is a data layout optimization technique that replaces table partitioning and ZORDER. It simplifies table management and optimizes query performance by automatically organizing data based on clustering keys. More here: https://docs.databricks.com/aws/en/delta/clustering#when-to-use-liquid-clustering

- Combine Spark caching and disk caching (previously known as Delta cache acceleration). More here: https://www.youtube.com/watch?v=_vWnH4kmF60

- Try to optimize computation: cluster autoscaling, photon for SQL/Delta Workloads, broadcast joins, etc.

- Try to optimize queries and workflows: avoid collect() if possible, filter early and often, avoid full scans, vacuum to remove old versions, optimize delta tables regularly, etc.

- Monitor your queries and workflows, and try to understand execution plans to detect gaps and bottlenecks.
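A short sketch of the liquid clustering and maintenance points above, run from a Databricks notebook. The table sales_events and its columns are hypothetical, and liquid clustering assumes a recent Databricks Runtime:

# Create a managed Delta table that uses liquid clustering instead of partitioning.
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales_events (
        event_date  DATE,
        customer_id BIGINT,
        amount      DOUBLE
    )
    CLUSTER BY (event_date, customer_id)
""")

# Periodically compact small files and apply the clustering layout.
spark.sql("OPTIMIZE sales_events")

# Remove data files no longer referenced by retained table versions.
spark.sql("VACUUM sales_events")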

 


Lifelong Learner Cloud & Data Solution Architect | https://www.youtube.com/@CafeConData

JessicaW33
New Contributor III

Great question, Suheb! Working with large datasets in Databricks requires both efficient data handling and optimization of Spark operations to avoid memory issues and maintain performance. Here are some best practices:

1. Optimize Data Storage & Format
Use Columnar Formats: Store data in Parquet or Delta Lake formats instead of CSV or JSON; they are more memory-efficient and support predicate pushdown.

Partitioning: Partition large datasets based on frequently filtered columns (e.g., date, region) to reduce the amount of data read per query.

Z-Ordering (Delta Lake): Helps optimize the storage layout for faster reads in multi-dimensional queries.
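As a rough sketch of the storage tips in section 1 (the path, table, and column names are placeholders, and OPTIMIZE ... ZORDER BY assumes a Delta table on Databricks):

# Convert raw CSV into a partitioned Delta table.
raw = spark.read.option("header", True).csv("/mnt/raw/transactions/")
(raw.write.format("delta")
    .mode("overwrite")
    .partitionBy("region")
    .saveAsTable("transactions"))

# Z-order within partitions on another frequently filtered column.
spark.sql("OPTIMIZE transactions ZORDER BY (transaction_date)")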

2. Efficient Data Processing
Avoid Collecting Large DataFrames: Don't use collect() on massive datasets; instead, use display() or write to storage.

Use Spark Transformations Wisely: Prefer narrow transformations (like map, filter) over wide transformations (like join, groupBy) when possible to reduce shuffles.

Caching: Cache intermediate results only when necessary to avoid memory overload. Use .persist(StorageLevel.MEMORY_AND_DISK) for very large datasets.
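A small illustration of section 2, with hypothetical table and column names:

from pyspark import StorageLevel
from pyspark.sql import functions as F

df = spark.table("transactions")

# Narrow transformations (filter, select) first, to shrink data before wide ones.
filtered = df.filter(F.col("amount") > 0).select("region", "amount")

# Persist to memory with spill-to-disk instead of holding everything in RAM.
filtered.persist(StorageLevel.MEMORY_AND_DISK)

summary = filtered.groupBy("region").agg(F.sum("amount").alias("total_amount"))

# Write (or display) the result rather than collect()-ing it to the driver.
summary.write.format("delta").mode("overwrite").saveAsTable("region_totals")

filtered.unpersist()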

3. Resource Management
Cluster Sizing: Use clusters with adequate memory and cores. For very large datasets, consider autoscaling clusters to adjust resources dynamically.

Adjust Spark Configurations: Parameters like spark.sql.shuffle.partitions, spark.executor.memory, and spark.driver.memory can be tuned based on dataset size.
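For example (the values are illustrative, not recommendations; executor and driver memory are static settings that belong in the cluster's Spark config rather than a running notebook):

# Session-level tuning from a notebook:
spark.conf.set("spark.sql.shuffle.partitions", "400")

# Cluster-level Spark config (set when creating or editing the cluster):
#   spark.executor.memory  16g
#   spark.driver.memory    8g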

4. Incremental & Batch Processing
Process data in batches or partitions rather than loading everything at once. For example, read one day or partition at a time.

Delta Lake Streams: For continuous updates, use structured streaming to handle data incrementally.
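A hedged sketch of both approaches; the table names and checkpoint path are placeholders:

# Batch style: process one partition (e.g. one day) at a time.
for day in ["2024-01-01", "2024-01-02"]:
    batch = spark.table("events").where(f"event_date = '{day}'")
    (batch.groupBy("user_id").count()
          .write.format("delta").mode("append").saveAsTable("daily_user_counts"))

# Streaming style: read the Delta table incrementally with Structured Streaming.
(spark.readStream.table("events")
      .writeStream.format("delta")
      .option("checkpointLocation", "/mnt/checkpoints/events_stream")
      .toTable("events_incremental"))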

5. Use Built-in Databricks Tools
Data Skipping & Delta Optimizations: Leverage Databricks' Delta Lake features for optimized reads and faster queries.

Photon Engine (if available): Improves query performance and reduces memory usage for supported workloads.
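Data skipping happens automatically when queries filter on columns Delta keeps file-level min/max statistics for, so selective filters read far fewer files (table and column names below are placeholders):

# Only files whose statistics could contain this date are read.
spark.table("sales_events").where("event_date = '2024-01-01'").groupBy("customer_id").count().show()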

6. Monitoring & Profiling
Use Databricks Ganglia Metrics or Spark UI to monitor memory usage, task execution times, and identify bottlenecks.

Profile transformations on small subsets before scaling up.
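One way to follow the profiling advice above before scaling up (table and column names are placeholders):

# Develop on a small sample and inspect the physical plan first.
sample = spark.table("transactions").sample(fraction=0.01, seed=42)
result = sample.groupBy("region").count()
result.explain(mode="formatted")   # look for unexpected shuffles or full scans

# The Spark UI (Jobs / Stages / Storage tabs) then shows memory use and
# shuffle spills when the same code runs against the full table.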

tarunnagar
New Contributor III

Hey! Great question. I've run into this issue quite a few times while working with large datasets in Databricks, and out-of-memory errors can be a real headache. One of the biggest things that helps is making sure your cluster configuration matches your workload: use enough memory and consider autoscaling for heavy jobs. Also, take advantage of Spark's lazy evaluation; don't trigger actions like .collect() or .count() too early, and definitely avoid using collect() on large datasets since that can overload the driver instantly. Be smart with caching and persisting: only cache data that you're going to reuse multiple times and remember to unpersist it afterward. For joins, try to use broadcast joins when dealing with smaller lookup tables, and optimize your partitions to reduce shuffling. If you're using Delta tables, that's a big plus since Databricks automatically skips irrelevant data and reduces memory usage. Lastly, keep an eye on the Spark UI; it's super helpful for spotting where memory is getting eaten up or when shuffles are spilling to disk. In short, right-sizing your cluster, optimizing transformations, and managing caching wisely are key to avoiding out-of-memory issues in Databricks.
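To make the broadcast-join and caching advice concrete, here is a brief sketch with hypothetical table and column names:

from pyspark.sql import functions as F

facts = spark.table("events")            # large fact table
lookup = spark.table("country_codes")    # small lookup table

# Broadcast the small side so the large table is not shuffled for the join.
joined = facts.join(F.broadcast(lookup), on="country_code", how="left")

# Cache only because the result is reused below, and release it when done.
joined.cache()
(joined.groupBy("country_name").count()
       .write.format("delta").mode("overwrite").saveAsTable("events_by_country"))
unmatched = joined.filter(F.col("country_name").isNull()).count()   # second use of the cached result
joined.unpersist()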
