
Does Databricks support PyTorch distributed training across multiple devices?

Smu_Tan
New Contributor

Hi, I'm trying to use the Databricks platform for PyTorch distributed training, but I didn't find any info about this. What I expected is to run a common job across multiple nodes using PyTorch distributed data parallel (DDP) with the commands below:

On device 1: %sh python -m torch.distributed.launch --nproc_per_node=4 --nnodes=2 --node_rank=0 --master_addr="127.0.0.1" --master_port=29500 train_something.py

On device 2: %sh python -m torch.distributed.launch --nproc_per_node=4 --nnodes=2 --node_rank=1 --master_addr="127.0.0.1" --master_port=29500 train_something.py

This is definitely supported by other computation platforms like Slurm, but it fails on Databricks. Could you let me know whether you support this, or whether you would consider adding this feature in later development? Thank you in advance!


6 REPLIES

-werners-
Esteemed Contributor III

@Shaomu Tan, can you check SparkTorch?

Parallel processing on Databricks clusters is mainly based on Apache Spark™. So to use that parallelism, the library in question (PyTorch) has to be written for Spark; SparkTorch is an attempt to do just that.

You can also run Ray or Dask on Databricks (I believe that is possible too), bypassing Apache Spark.
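
For what it's worth, a rough sketch of how SparkTorch is used, based on its README (untested; the model, columns and hyperparameters below are made up for illustration):

import torch
import torch.nn as nn
from sparktorch import serialize_torch_obj, SparkTorch

# A plain PyTorch model; SparkTorch serializes it and trains it on the Spark workers
network = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

torch_obj = serialize_torch_obj(
    model=network,
    criterion=nn.MSELoss(),
    optimizer=torch.optim.Adam,
    lr=0.001
)

# A Spark ML estimator: fit it against a DataFrame with a vector feature column
spark_model = SparkTorch(
    inputCol='features',
    labelCol='label',
    predictionCol='predictions',
    torchObj=torch_obj,
    iters=50,
    miniBatch=64
)

trained = spark_model.fit(df)  # df: Spark DataFrame with 'features' and 'label' columns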

axb0
Databricks Employee

With the Databricks ML Runtime (MLR), HorovodRunner is provided, which supports distributed training and inference with PyTorch. Here's an example notebook for your reference: PyTorchDistributedDeepLearningTraining - Databricks.
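
The basic HorovodRunner pattern looks roughly like this (a minimal sketch with a toy model and synthetic data; swap in your own model and data loader):

import torch
import torch.nn as nn
import horovod.torch as hvd
from sparkdl import HorovodRunner

def train(num_epochs=5):
    hvd.init()  # one Horovod process per slot
    model = nn.Linear(10, 1)  # toy model for illustration
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())
    # Average gradients across processes at each step
    optimizer = hvd.DistributedOptimizer(optimizer, named_parameters=model.named_parameters())
    # Ensure every process starts from the same weights and optimizer state
    hvd.broadcast_parameters(model.state_dict(), root_rank=0)
    hvd.broadcast_optimizer_state(optimizer, root_rank=0)
    loss_fn = nn.MSELoss()
    for _ in range(num_epochs):
        x, y = torch.randn(64, 10), torch.randn(64, 1)  # synthetic batch
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()

hr = HorovodRunner(np=2)  # np = number of parallel training processes
hr.run(train, num_epochs=5)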

adarsh8304
New Contributor II

Hey, so in my code we can't even use TorchDistributor with DistributedDataParallel to achieve distributed training, even though TorchDistributor is a Spark-native distribution library. With this setup I am not able to get the distributed training I expected: the second worker node shows no uptick in its metrics. To give this thread more direction: essentially, how should we do distributed training in a Databricks multi-node setup that has 1 driver and 1 worker? @-werners- @axb0 @Smu_Tan, should we move away from PyTorch entirely for this purpose, or write pure Spark code to achieve it, or is there any dependency that can help with this approach? For reference, I sketch the pattern I am following below.
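
Here is the minimal TorchDistributor + DDP pattern (the toy model and synthetic data are placeholders, not my actual code; this needs Spark 3.4+ / a recent ML runtime):

from pyspark.ml.torch.distributor import TorchDistributor

def train_fn(lr):
    import torch
    import torch.distributed as dist
    import torch.nn as nn
    from torch.nn.parallel import DistributedDataParallel as DDP

    dist.init_process_group("gloo")  # TorchDistributor sets MASTER_ADDR etc.; use "nccl" on GPUs
    model = nn.Linear(10, 1)  # placeholder model
    ddp_model = DDP(model)
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(100):
        x, y = torch.randn(32, 10), torch.randn(32, 1)  # placeholder data
        optimizer.zero_grad()
        loss_fn(ddp_model(x), y).backward()
        optimizer.step()
    dist.destroy_process_group()

# num_processes = total training processes across the cluster;
# local_mode=False places them on workers rather than on the driver
distributor = TorchDistributor(num_processes=2, local_mode=False, use_gpu=False)
distributor.run(train_fn, 0.01)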

-werners-
Esteemed Contributor III

Since you replied on a rather old topic: TorchDistributor enables PyTorch on Spark in distributed mode.
But a cluster with only 1 worker and 1 driver will not run in distributed mode.
The driver does not execute Spark tasks; it handles Spark overhead and, for example, Python code outside of Spark.
If you want to run in distributed mode you should have at least 2 workers (and always a driver).
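
In cluster-spec terms it comes down to the worker count, something like this (illustrative values only; the runtime version and node type are placeholders):

cluster_spec = {
    "cluster_name": "ddp-training",           # hypothetical name
    "spark_version": "13.3.x-ml-scala2.12",   # an ML runtime (needed for TorchDistributor)
    "node_type_id": "i3.xlarge",              # placeholder instance type
    "num_workers": 2,                         # at least 2 workers for distributed mode
}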

adarsh8304
New Contributor II

Hey @-werners-, thanks for answering. First: why, then, are the CPU and memory utilization metrics showing up only on the driver, while the worker sits idle with barely any utilization from training? With TorchDistributor I would think at least that one worker should be in use, right?

One more thing: are Databricks driver machines designed in a way that makes them less optimal and performant for model training and inference tasks, given that Databricks implies the code should be Apache Spark only (keeping PyTorch and pandas out of the execution path)?

-werners-
Esteemed Contributor III

If only the driver is active, this probably means you are not using Spark. When running pure Python (or other non-Spark) code, the driver will execute it.
If Spark is active, workers receive their tasks from the driver. Generally the driver is not that busy; the workers do all the work. The driver machine is not designed in any special way: you can define yourself what kind of machine you use as a driver.
You can even run in single-node mode, so you only have a driver (which also acts as a worker).
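
To make that concrete, a toy contrast (assumes a notebook where a `spark` session is available):

import pandas as pd

# Pure Python / pandas: executes only on the driver
pdf = pd.DataFrame({"x": range(1000)})
driver_total = pdf["x"].sum()

# Spark: the driver plans the job, the workers execute the tasks
df = spark.range(1000)
worker_total = df.selectExpr("sum(id) as total").collect()[0]["total"]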
