Data Engineering

How can the shared memory size (/dev/shm) be increased on Databricks worker nodes with custom Docker images?

Alex_Persin
New Contributor II

PyTorch uses shared memory to efficiently share tensors between its DataLoader workers and the main process. However, in a Docker container the default size of the shared memory (a tmpfs file system mounted at /dev/shm) is 64 MB, which is too small for sharing batches of image tensors. This means that when using a custom Docker image on a Databricks cluster it is not possible to use PyTorch with multiple DataLoader workers. We could fix this by passing `--shm-size` or `--ipc=host` to `docker run` - how can this be set on a Databricks cluster?
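
For illustration, here is a minimal sketch of how the limit surfaces and a stopgap check we use in the notebook until the flag can be set at the container level. The dataset, batch size, and 1 GiB threshold below are made-up example values, not part of our real workload:

```python
import os

import torch
from torch.utils.data import DataLoader, Dataset


class RandomImageDataset(Dataset):
    """Hypothetical stand-in for an image dataset: returns random 3x224x224 tensors."""

    def __len__(self):
        return 1024

    def __getitem__(self, idx):
        return torch.rand(3, 224, 224)


def shm_bytes(path="/dev/shm"):
    """Total size of the tmpfs mounted at /dev/shm, in bytes."""
    stats = os.statvfs(path)
    return stats.f_frsize * stats.f_blocks


# Worker processes hand batches back to the main process through /dev/shm, so
# with the 64 MB Docker default a few in-flight image batches can exhaust it and
# the DataLoader workers die. Fall back to single-process loading when the mount
# is too small (the 1 GiB threshold is an arbitrary example value).
num_workers = 4 if shm_bytes() > 1 << 30 else 0

loader = DataLoader(RandomImageDataset(), batch_size=64, num_workers=num_workers)

for batch in loader:
    pass  # training step would go here
```

Falling back to `num_workers=0` keeps the job alive but loses the data-loading parallelism, which is why we'd like to set the shared memory size properly on the cluster itself.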

Note that this doesn't affect the default Databricks Runtime; that appears to use the Linux default of making half of physical RAM available to /dev/shm (6.9 GB on the Standard_DS3_v2 node I tested).

To reproduce: start a cluster using a custom Docker image and run `df -h /dev/shm` in a notebook.
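
The same check can also be run from a Python cell; a small sketch, with nothing Databricks-specific assumed:

```python
import shutil
import subprocess

# Equivalent of the `df -h /dev/shm` check, runnable from any Python notebook cell.
print(subprocess.run(["df", "-h", "/dev/shm"], capture_output=True, text=True).stdout)

# Or without shelling out: total size of the /dev/shm mount in GiB.
total, used, free = shutil.disk_usage("/dev/shm")
print(f"/dev/shm total: {total / 2**30:.1f} GiB")
```

On a cluster built from a custom image this reports the 64 MB tmpfs default described above.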

Thanks in advance!

2 REPLIES

mstuder
New Contributor II

Also interested in increasing shared memory for use with Ray.

Alex_Persin
New Contributor II

We spoke to Databricks about this and they are working on it. At the beginning of the month they said it should be available on Jan 17th, but I'm not sure of the status now; we ended up moving this workload off the platform.
