How can the shared memory size (/dev/shm) be increased on Databricks worker nodes with custom Docker images?
10-28-2021 02:59 AM
PyTorch uses shared memory to efficiently share tensors between its dataloader workers and its main process. However, in a Docker container the default size of the shared memory (a tmpfs file system mounted at /dev/shm) is 64 MB, which is too small for sharing batches of image tensors. This means that when using a custom Docker image on a Databricks cluster it is not possible to use PyTorch with multiple dataloader workers. On a plain Docker host we can fix this by setting the `--shm-size` or `--ipc=host` args on `docker run` (see the sketch below) - how can the equivalent be set on a Databricks cluster?
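For illustration, this is what the fix looks like outside Databricks; the image name, command, and the 8g size are placeholder example values, not anything Databricks provides:

```bash
# Enlarge the container's /dev/shm tmpfs explicitly...
docker run --shm-size=8g my-training-image:latest python train.py

# ...or share the host's IPC namespace (and therefore the host's /dev/shm).
docker run --ipc=host my-training-image:latest python train.py

# Either way, /dev/shm inside the container should now report the larger size:
docker run --shm-size=8g my-training-image:latest df -h /dev/shm
```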
Note that this doesn't affect the default Databricks Runtime: there the Linux default appears to apply, making half the physical RAM available to /dev/shm - 6.9 GB on the Standard_DS3_v2 node I tested.
To reproduce: start a cluster using a custom Docker image and run `df -h /dev/shm` in a notebook, e.g. as shown below.
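A minimal check from a notebook attached to the cluster, using the `%sh` cell magic to run the command on the driver (the sizes in the comments are the values reported above, not guarantees):

```bash
%sh
# Report the size and usage of the shared-memory tmpfs on this node.
df -h /dev/shm
# With a custom container image this reports ~64M (the Docker default).
# On the default Databricks runtime it is roughly half the node's RAM,
# e.g. ~6.9G on a Standard_DS3_v2 node.
```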
Thanks in advance!
- Labels: Deep learning, Memory Size, Pytorch
01-19-2022 07:25 AM
Also interested in increasing shared memory for use with Ray.
01-20-2022 05:17 AM
We spoke to Databricks about this and they are working on it. At the beginning of the month they said it should be available on Jan 17th, but I'm not sure of the status now; we ended up moving this workload off the platform.
08-02-2024 02:55 AM
Hey folks, any follow-up on this, or an alternative solution? Thanks!
09-21-2024 03:11 AM - edited 09-21-2024 03:12 AM
Recently stumbled on this problem. It basically makes compute with custom Docker images unusable for any real-life PyTorch-based computer vision experiments, which is unfortunate. +1 for requesting a follow-up and possible alternative solutions! Thank you!
01-16-2025 02:10 AM
Bump on this one - interested in the topic too.
Is there a known solution yet?
2 weeks ago
Bump again... does anyone have a solution for this?

