02-09-2023 07:25 AM
Hi,
I am using the pynote/whisper large model and trying to process data with a Spark UDF, but I am getting the following error:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 172.00 MiB (GPU 0; 14.76 GiB total capacity; 6.07 GiB already allocated; 120.75 MiB free; 6.25 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
The job is configured with the 11.3 LTS ML runtime on a cluster of 1-8 g4dn.4xlarge instances.
I would appreciate any help you can provide.
Regards,
Sanjay
Accepted Solutions

03-08-2023 06:26 PM
@Sanjay Jain:
The error message indicates that there is not enough free GPU memory to satisfy an allocation for the PyTorch model. This can happen when the model is too large to fit into the available GPU memory, or when other processes are using GPU memory alongside the PyTorch model.
You can try the options below and see what works for you:
- The brute-force option: move to an instance type with more GPU memory.
- Decrease the batch size used for the PyTorch model. A smaller batch size requires less GPU memory and may avoid the out-of-memory error. Experiment with different batch sizes to find the best trade-off between throughput and memory usage.
- Set max_split_size_mb to a smaller value via the PYTORCH_CUDA_ALLOC_CONF environment variable to reduce fragmentation (see the sketch after this list).
- PyTorch's DataParallel module distributes the model across multiple GPUs, so the forward pass can run on several GPUs in parallel (also shown in the sketch below).
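A minimal sketch of the allocator setting, a smaller batch size, and the DataParallel wrapping mentioned above. The model here is a placeholder standing in for your loaded Whisper checkpoint, and the 128 MB split size and batch size of 4 are example values to tune, not recommendations:

import os
import torch

# Must be set before the first CUDA allocation; 128 MB is only an example value.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# Placeholder model standing in for the loaded Whisper checkpoint.
model = torch.nn.Linear(1024, 1024)

# If the node exposes more than one GPU, DataParallel replicates the model
# and splits each batch across the devices.
if torch.cuda.device_count() > 1:
    model = torch.nn.DataParallel(model)
model = model.cuda()

# A smaller batch needs less GPU memory for activations; lower it further if the OOM persists.
batch_size = 4
inputs = torch.randn(batch_size, 1024).cuda()
with torch.no_grad():
    outputs = model(inputs)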
I hope all these suggestions help!

09-23-2024 04:34 AM
Try running this code to release cached GPU memory:
import torch
torch.cuda.empty_cache()
And make sure to find the optimal batch size, otherwise the error can occur again.
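For example, a minimal sketch of clearing the cache between batches; the batches and process_batch below are hypothetical placeholders for your own Whisper inference loop:

import torch

def process_batch(batch):
    # Placeholder for your actual Whisper inference over one batch of files.
    pass

# Hypothetical small batches of audio paths; tune the batch size for your GPU.
audio_batches = [["a.wav", "b.wav"], ["c.wav", "d.wav"]]

for batch in audio_batches:
    process_batch(batch)
    # Frees cached, unused blocks held by the allocator; it does not free
    # memory still referenced by live tensors.
    torch.cuda.empty_cache()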

