04-13-2022 01:24 PM
Hi, I'm trying to use the Databricks platform for PyTorch distributed training, but I couldn't find any info about this. What I expect is to run a common job across multiple nodes using PyTorch Distributed Data Parallel (DDP) with the code below:
On device 1: %sh python -m torch.distributed.launch --nproc_per_node=4 --nnodes=2 --node_rank=0 --master_addr="127.0.0.1" --master_port=29500 train_something.py
On device 2: %sh python -m torch.distributed.launch --nproc_per_node=4 --nnodes=2 --node_rank=1 --master_addr="127.0.0.1" --master_port=29500 train_something.py
This is definitely supported by other computation platforms such as Slurm, but it fails on Databricks. Could you let me know whether you support this, or whether you would consider adding this feature in a later release? Thank you in advance!
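(A side note for anyone hitting the same wall: independent of the platform, the commands above cannot rendezvous as written, because on the second machine `--master_addr="127.0.0.1"` points at that machine itself rather than at the rank-0 node. Both nodes must use the routable address of the rank-0 node. A sketch, using a hypothetical rank-0 address of 10.0.0.1:)

```shell
# On the rank-0 node (hypothetical address 10.0.0.1):
python -m torch.distributed.launch --nproc_per_node=4 --nnodes=2 --node_rank=0 \
    --master_addr="10.0.0.1" --master_port=29500 train_something.py

# On the rank-1 node -- the SAME master_addr, so both nodes rendezvous at rank 0:
python -m torch.distributed.launch --nproc_per_node=4 --nnodes=2 --node_rank=1 \
    --master_addr="10.0.0.1" --master_port=29500 train_something.py
```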
04-14-2022 07:12 AM
@Shaomu Tan , can you check sparktorch?
Parallel processing on Databricks clusters is mainly based on Apache Spark™. So to use parallel processing, the library in question (PyTorch) has to be written for Spark; sparktorch is an attempt to do just that.
You can also run Apache Ray or Dask on Databricks (I believe that is possible too), bypassing Apache Spark.
02-19-2023 08:15 AM
With Databricks MLR, HorovodRunner is provided which supports distributed training and inference with PyTorch. Here's an example notebook for your reference: PyTorchDistributedDeepLearningTraining - Databricks.
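To make that concrete, here is a minimal sketch of the HorovodRunner pattern, assuming Databricks ML Runtime (where `horovod` and `sparkdl` are preinstalled); the tiny model and synthetic data are placeholders for illustration only:

```python
# Sketch: distributed PyTorch training with HorovodRunner on Databricks MLR.
# The model/data below are placeholders, not a real workload.
import torch
import torch.nn as nn

def train():
    import horovod.torch as hvd
    hvd.init()  # one process per allotted slot
    torch.manual_seed(42)
    model = nn.Linear(10, 1)
    # Common convention: scale the learning rate by the number of workers
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())
    # Wrap the optimizer so gradients are averaged across workers
    optimizer = hvd.DistributedOptimizer(
        optimizer, named_parameters=model.named_parameters()
    )
    # Start all workers from identical weights
    hvd.broadcast_parameters(model.state_dict(), root_rank=0)
    x, y = torch.randn(64, 10), torch.randn(64, 1)
    for epoch in range(5):
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        optimizer.step()

from sparkdl import HorovodRunner
hr = HorovodRunner(np=2)  # np=2 -> two worker slots; a negative np runs locally on the driver
hr.run(train)
```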
Friday
Hey, so even with TorchDistributor (the Spark-native distribution library) and Distributed Data Parallel I can't get distributed training working in my code: with this setup the second worker node shows no uptick on the metrics side. To give this reply more direction: essentially, how should we do distributed training in a Databricks multi-node setup that has 1 driver with 1 worker? @-werners- @axb0 @Smu_Tan , should we move away from PyTorch entirely for this purpose, write pure Spark code instead, or is there a dependency that can help with this approach?
Friday
Since you replied on a rather old topic: TorchDistributor enables PyTorch on Spark in distributed mode.
But a cluster with only 1 worker and 1 driver will not run in distributed mode.
The driver does not execute Spark tasks; it handles Spark overhead and, for example, Python code outside of Spark.
If you want to run in distributed mode you should have at least 2 workers (and always a driver).
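For reference, a minimal TorchDistributor sketch (TorchDistributor ships with Spark 3.4+ and Databricks ML Runtime 13.x+; the tiny model and data here are placeholders, and `num_processes=2` assumes the cluster has at least 2 task slots across its workers):

```python
# Sketch: distributed PyTorch DDP training via TorchDistributor
# (pyspark.ml.torch, Spark 3.4+ / Databricks MLR 13.x+).
import torch
import torch.nn as nn

def train_fn(lr):
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP
    # TorchDistributor sets MASTER_ADDR/MASTER_PORT/RANK/WORLD_SIZE for us
    dist.init_process_group("gloo")  # use "nccl" on GPU clusters
    model = DDP(nn.Linear(10, 1))   # placeholder model
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    x, y = torch.randn(64, 10), torch.randn(64, 1)
    for _ in range(5):
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()  # DDP averages gradients across processes here
        optimizer.step()
    dist.destroy_process_group()
    return loss.item()

from pyspark.ml.torch.distributor import TorchDistributor
# local_mode=False runs the processes on the workers, not the driver,
# which is why at least 2 task slots on workers are needed for num_processes=2.
distributor = TorchDistributor(num_processes=2, local_mode=False, use_gpu=False)
final_loss = distributor.run(train_fn, 0.01)
```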