New Contributor III
since 10-14-2022

User Stats

  • 4 Posts
  • 0 Solutions
  • 3 Kudos given
  • 3 Kudos received

User Activity

No matter what size of GPU cluster I create, the CUDA total capacity is always ~16 GB. Does anyone know what the issue is? The code I use to get the total capacity: torch.cuda.get_device_properties(0).total_memory
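One likely explanation (an assumption, not confirmed by the post): `get_device_properties(0).total_memory` only reports the memory of GPU index 0 on the local node, so a bigger cluster adds more workers but each worker's GPU still shows the same per-device capacity (~16 GiB on, e.g., a V100). A minimal sketch for enumerating the GPUs visible to the driver, with a hypothetical helper name:

```python
import torch

def per_gpu_memory_gib():
    """Return {device_index: total memory in GiB} for GPUs visible locally.

    Note: this reflects per-device capacity on one node only; it does not
    aggregate memory across cluster workers.
    """
    if not torch.cuda.is_available():
        return {}  # CPU-only environment: no CUDA devices to report
    return {
        i: torch.cuda.get_device_properties(i).total_memory / 1024**3
        for i in range(torch.cuda.device_count())
    }

print(per_gpu_memory_gib())
```

To use more total GPU memory on a multi-GPU or multi-node cluster, the workload itself has to be distributed (e.g., with `torch.nn.parallel` or a distributed training library); checking device 0 will always show a single card's capacity.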
I have a dataset of about 5 million rows with 14 features and a binary target. I decided to train a PySpark random forest classifier on Databricks. The CPU cluster I created contains 2 c4.8xlarge workers (60 GB, 36 cores) and 1 r4.xlarge (31 GB, 4 cores) driv...