Why is PyTorch CUDA total memory not aligned with the memory size of the GPU cluster I created?

zzy
New Contributor III

No matter what size of GPU cluster I create, the CUDA total capacity is always ~16 GB. Does anyone know what the issue is?

The code I use to get the total capacity:

import torch

torch.cuda.get_device_properties(0).total_memory
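
For context, a minimal sketch (assuming a notebook attached to the cluster's GPU driver node) that lists every CUDA device visible to the process and the memory of each; get_device_properties reports memory per device, not a cluster-wide total:

import torch

# total_memory is per device, reported in bytes
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(i, props.name, round(props.total_memory / 1024**3, 1), "GiB")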

3 REPLIES

Debayan
Esteemed Contributor III

Hi, could you please let us know which DBR has been installed? Also, please let us know if you have gone through the supported instance types. Reference: https://docs.databricks.com/clusters/gpu.html.

zzy
New Contributor III

Hi, the DBR version is 11.3 LTS ML. The instance type I created is g4dn.
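
For reference, g4dn instances carry NVIDIA T4 GPUs, and each T4 has 16 GB of memory, so ~16 GB per device is what get_device_properties reports regardless of cluster size. A quick check from the notebook (a sketch, assuming device 0 is the T4 on the driver node):

import torch

# Expect something like "Tesla T4" and roughly 15-16 GiB reported per device
print(torch.cuda.get_device_name(0))
print(torch.cuda.get_device_properties(0).total_memory / 1024**3)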

Anonymous
Not applicable

Hi @Simon Zhang​ 

Hope everything is going great.

Just wanted to check in if you were able to resolve your issue. If yes, would you be happy to mark an answer as best so that other members can find the solution more quickly? If not, please tell us so we can help you. 

Cheers!
