How can the shared memory size (/dev/shm) be increased on Databricks worker nodes with custom Docker images?

Alex_Persin
New Contributor II

PyTorch uses shared memory to efficiently share tensors between its dataloader workers and its main process. However, in a Docker container the default size of the shared memory (a tmpfs file system mounted at /dev/shm) is 64MB, which is too small for sharing batches of image tensors. This means that when using a custom Docker image on a Databricks cluster it is not possible to use PyTorch with multiple dataloader workers. We can fix this by passing the `--shm-size` or `--ipc=host` arguments to `docker run` - how can this be set on a Databricks cluster?
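
For context, here is a minimal sketch of the pattern that hits the limit; the dataset, shapes and batch size are made up for illustration, but anything that moves image-sized batches through multiple workers behaves the same way:

```python
import torch
from torch.utils.data import DataLoader, Dataset

class RandomImages(Dataset):
    """Stand-in dataset yielding image-sized tensors (3 x 224 x 224 float32)."""
    def __len__(self):
        return 1024

    def __getitem__(self, idx):
        return torch.randn(3, 224, 224)

# With num_workers > 0, each worker process hands its batches to the main
# process through shared memory. At batch_size=32 a single float32 batch is
# already ~19 MB, so a 64 MB /dev/shm fills up after a few in-flight batches
# and the workers typically die with bus errors / "unable to write to file"
# messages.
loader = DataLoader(RandomImages(), batch_size=32, num_workers=4)
for batch in loader:
    pass
```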

Note that this doesn't affect the default Databricks Runtime; that appears to use the Linux default of making half the physical RAM available to /dev/shm - 6.9GB on the Standard_DS3_v2 node I tested.

To reproduce: start a cluster with a custom Docker image and run `df -h /dev/shm` in a notebook.
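
As an alternative to the shell command, the same check can be done from Python with the standard library (a sketch; the numbers will of course depend on the node and image):

```python
# Report the size of the tmpfs mounted at /dev/shm, equivalent to `df -h /dev/shm`.
import shutil

total, used, free = shutil.disk_usage("/dev/shm")
print(f"/dev/shm: total {total / 2**20:.0f} MiB, free {free / 2**20:.0f} MiB")
```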

Thanks in advance!

2 REPLIES

mstuder
New Contributor II

Also interested in increasing the shared memory size for use with Ray.

Alex_Persin
New Contributor II

We spoke to Databricks about this and they are working on it. At the beginning of the month they said it should be available on Jan 17th, but I'm not sure of the status now; we ended up moving this workload off the platform.
