Machine Learning
Dive into the world of machine learning on the Databricks platform. Explore discussions on algorithms, model training, deployment, and more. Connect with ML enthusiasts and experts.

Forum Posts

by rusty (New Contributor II)
  • 4280 Views
  • 2 replies
  • 2 kudos

Resolved! "Photon ran out of memory" while when trying to get the unique Id from sql query

I am trying to get all unique IDs from a SQL query, and I always run out of memory:
select concat_ws(';', view.MATNR, view.WERKS)
from hive_metastore.dqaas.temp_view as view
join hive_metastore.dqaas.t_dqaas_marc as marc on marc.MATNR = view.MATNR
where view...
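
A minimal PySpark sketch of one way to approach this, assuming a Databricks notebook where spark is already defined and reusing the table names from the excerpt; the output table hive_metastore.dqaas.unique_ids is hypothetical, the truncated WHERE clause still has to be filled in, and this is not the thread's accepted answer. The idea is to deduplicate on the cluster and write the result out instead of collecting every row back to the driver:

from pyspark.sql import functions as F

view = spark.table("hive_metastore.dqaas.temp_view").alias("view")
marc = spark.table("hive_metastore.dqaas.t_dqaas_marc").alias("marc")

unique_ids = (
    view.join(marc, F.col("marc.MATNR") == F.col("view.MATNR"))
        # the WHERE clause from the post is truncated; add those filters here
        .select(F.concat_ws(";", F.col("view.MATNR"), F.col("view.WERKS")).alias("id"))
        .distinct()  # deduplication runs distributed across the executors
)

# Persist to a table (or files) rather than pulling the results back to the driver.
unique_ids.write.mode("overwrite").saveAsTable("hive_metastore.dqaas.unique_ids")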

Latest Reply by Anonymous (Not applicable)
  • 2 kudos

Hi @Anil Kumar Chauhan, we haven't heard from you since the last response from @Werner Stinckens. Kindly share the requested information with us so we can provide you with a solution. Thanks and regards

1 More Replies
by DataBRObin (New Contributor III)
  • 3666 Views
  • 6 replies
  • 1 kudos

FFmpeg frame extraction explodes memory, how to mitigate?

For a computer vision project, my raw data consists of encrypted videos (60fps) stored in Azure Blob Storage. In order to have the data usable for model training, I need to do some preprocessing and for that I need the video split into individual fra...

Latest Reply by DataBRObin (New Contributor III)
  • 1 kudos

In the end, I decided to rework the workflow so it is as efficient as I could make it: extract frames from the video files in a containerized application running ffmpeg, and store the resulting frames in a Parquet file in blob storage (...
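
A rough Python sketch of that ffmpeg-plus-Parquet step, assuming JPEG frames, pandas with pyarrow available, and an ffmpeg binary on the container's PATH; the function name and file paths are made up for illustration, and decryption of the source videos is left out:

import glob
import os
import subprocess
import tempfile

import pandas as pd

def frames_to_parquet(video_path, out_path, fps=5):
    # Decode frames to JPEG files in a temp dir; -vf fps=N controls how many frames per second are kept.
    tmp_dir = tempfile.mkdtemp()
    subprocess.run(
        ["ffmpeg", "-i", video_path, "-vf", f"fps={fps}", os.path.join(tmp_dir, "frame_%06d.jpg")],
        check=True,
    )
    # Pack each frame's JPEG bytes plus its index into a single Parquet file, which lands easily in blob storage.
    rows = []
    for i, path in enumerate(sorted(glob.glob(os.path.join(tmp_dir, "frame_*.jpg")))):
        with open(path, "rb") as fh:
            rows.append({"frame_idx": i, "jpeg_bytes": fh.read()})
        os.remove(path)  # free local disk as we go
    pd.DataFrame(rows).to_parquet(out_path)

frames_to_parquet("input_video.mp4", "frames.parquet", fps=5)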

5 More Replies
by sanjay (Valued Contributor II)
  • 30743 Views
  • 1 reply
  • 1 kudos

Resolved! torch.cuda.OutOfMemoryError: CUDA out of memory

Hi, I am using the pynote/whisper large model and trying to process data using a Spark UDF, and I am getting the following error: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 172.00 MiB (GPU 0; 14.76 GiB total capacity; 6.07 GiB already allocated...

Latest Reply by Anonymous (Not applicable)
  • 1 kudos

@Sanjay Jain: The error message suggests that there is not enough available memory on the GPU to allocate for the PyTorch model. This error can occur if the model is too large to fit into the available GPU memory, or if the GPU memory is bei...
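
A hedged sketch of common mitigations for this kind of CUDA OOM inside a Spark UDF, assuming the openai-whisper package (the poster's exact setup is not shown): load the model once per executor instead of once per record, run inference without gradient tracking, use half precision, and release cached blocks between records:

import torch
import whisper

# Module-level singleton so each executor process loads the large model only once.
_model = None

def get_model():
    global _model
    if _model is None:
        _model = whisper.load_model("large", device="cuda")
    return _model

def transcribe(audio_path):
    model = get_model()
    with torch.inference_mode():                          # no gradients, so less GPU memory is held
        result = model.transcribe(audio_path, fp16=True)  # half precision roughly halves activation memory
    torch.cuda.empty_cache()                              # release cached blocks between records
    return result["text"]

If that is still not enough, reducing the number of concurrent tasks per GPU or splitting the audio into shorter chunks before transcription keeps peak memory lower.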
