
torch.cuda.OutOfMemoryError: CUDA out of memory

sanjay
Valued Contributor II

Hi,

I am using the pynote/whisper large model and trying to process data with a Spark UDF, and I am getting the following error.

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 172.00 MiB (GPU 0; 14.76 GiB total capacity; 6.07 GiB already allocated; 120.75 MiB free; 6.25 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

The job is configured with the 11.3 LTS ML runtime on a cluster of 1-8 g4dn.4xlarge instances.

I would appreciate any help you can provide.

Regards,

Sanjay

1 ACCEPTED SOLUTION


Anonymous
Not applicable

@Sanjay Jain:

The error message indicates that there is not enough free GPU memory for the allocation PyTorch requested. This can happen if the model is too large to fit in the available GPU memory, or if other processes are using GPU memory alongside the PyTorch model.

You can try the following and see what works for you:

  • Try the brute-force approach of moving to an instance type with more GPU memory.
  • Decrease the batch size used for the PyTorch model. A smaller batch size requires less GPU memory and may avoid the out-of-memory error; experiment with different batch sizes to find the best trade-off between throughput and memory usage.
  • Set max_split_size_mb to a smaller value (via the PYTORCH_CUDA_ALLOC_CONF environment variable) to avoid fragmentation, as shown in the sketch after this list.
  • PyTorch's DataParallel module lets you distribute the model across multiple GPUs, so inference can run on several GPUs in parallel.
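
As a rough sketch of the second and third points, assuming the audio files are processed in batches inside the UDF: the helper name run_whisper, the batch size of 4, and the max_split_size_mb value of 128 below are purely illustrative, not from the original post.

import os
import torch

# Set the allocator option before the first CUDA allocation in the worker process.
# 128 MB is only an example value; tune it for your workload.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

def process_in_chunks(audio_paths, batch_size=4):
    # run_whisper is a placeholder for your actual Whisper inference call.
    results = []
    for i in range(0, len(audio_paths), batch_size):
        chunk = audio_paths[i:i + batch_size]
        results.extend(run_whisper(chunk))
        torch.cuda.empty_cache()  # return cached blocks to the device between chunks
    return results

Smaller chunks trade some throughput for a lower peak memory footprint, which is usually the right trade when the job is failing with an out-of-memory error.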

I hope all these suggestions help!


2 REPLIES

JMTech18
New Contributor II

Try running this code:

import torch

# Release cached, unused GPU memory held by the PyTorch caching allocator.
torch.cuda.empty_cache()

And make sure to find the optimal batch size; otherwise the error can occur again.
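
If the model is called from a Spark pandas UDF, as in the original question, one place to put this call is at the end of each batch of rows. A minimal sketch, assuming a hypothetical transcribe() helper that wraps the Whisper model and returns one string per input path:

import pandas as pd
import torch
from pyspark.sql.functions import pandas_udf

@pandas_udf("string")
def transcribe_udf(paths: pd.Series) -> pd.Series:
    # transcribe() is a placeholder for your Whisper inference call.
    out = [transcribe(p) for p in paths]
    torch.cuda.empty_cache()  # free cached GPU memory after each batch of rows
    return pd.Series(out)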
