Data Engineering
I have a single-node XGBoost model written in Python. How can I scale it with Spark?

User16788317454
New Contributor III
 
1 ACCEPTED SOLUTION

j_weaver
New Contributor III

If you are talking about distributed training of a single XGBoost model, there is no built-in capability in SparkML. SparkML supports gradient-boosted trees, but not XGBoost specifically. However, there are third-party packages, such as XGBoost4J, that you can use. Currently there is no Python API for it, but you can access it via Scala/Java. See the Databricks docs for a more complete example.

If you want to scale hyperparameter tuning, you can use Hyperopt to tune single-node XGBoost models in Python, or you can always do distributed inference via a Spark UDF.


