If you are asking about distributed training of a single XGBoost model, there is no built-in capability in SparkML. SparkML supports gradient-boosted trees, but not XGBoost specifically. However, there are third-party packages, such as XGBoost4J, that you can use. Currently there is no Python API for it, but you can access it via Scala/Java. See the Databricks docs for a more complete example.
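For reference, here is a minimal sketch of SparkML's built-in gradient-boosted trees in PySpark. The toy DataFrame and column names are placeholders for your own pipeline:

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import GBTClassifier

spark = SparkSession.builder.getOrCreate()

# Toy data; in practice this comes from your own pipeline
df = spark.createDataFrame(
    [(0.0, 1.2, 3.4), (1.0, 0.1, 0.5), (0.0, 2.2, 1.1), (1.0, 0.3, 0.9)],
    ["label", "f1", "f2"],
)
train_df = VectorAssembler(inputCols=["f1", "f2"], outputCol="features").transform(df)

gbt = GBTClassifier(labelCol="label", featuresCol="features", maxDepth=5, maxIter=50)
model = gbt.fit(train_df)  # training is distributed across the cluster
model.transform(train_df).select("label", "prediction").show()
```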
If you want to scale the hyperparameter tuning, you can use Hyperopt to tune single-node XGBoost models in Python, or you can always do distributed inference via a Spark UDF; sketches of both follow.
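Here is a rough sketch of the Hyperopt route: `SparkTrials` distributes the tuning trials across the cluster while each XGBoost model trains on a single node. The dataset, search space, and parallelism values are illustrative assumptions:

```python
import xgboost as xgb
from hyperopt import fmin, tpe, hp, SparkTrials
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

# Stand-in dataset; replace with your own training data
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

def objective(params):
    clf = xgb.XGBClassifier(
        max_depth=int(params["max_depth"]),
        learning_rate=params["learning_rate"],
        n_estimators=100,
    )
    # Hyperopt minimizes, so return negative mean CV accuracy
    return -cross_val_score(clf, X, y, cv=3).mean()

search_space = {
    "max_depth": hp.quniform("max_depth", 3, 10, 1),
    "learning_rate": hp.loguniform("learning_rate", -5, 0),
}

best = fmin(
    fn=objective,
    space=search_space,
    algo=tpe.suggest,
    max_evals=50,
    trials=SparkTrials(parallelism=4),  # each trial runs as a Spark task
)
print(best)
```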
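And for the Spark UDF approach, a sketch of scoring a trained single-node model at scale with a pandas UDF; the fitted model, the scoring DataFrame, and the feature column names here are assumptions:

```python
import pandas as pd
import xgboost as xgb
from sklearn.datasets import make_classification
from pyspark.sql import SparkSession
from pyspark.sql.functions import pandas_udf
from pyspark.sql.types import DoubleType

spark = SparkSession.builder.getOrCreate()

# Stand-in for your tuned single-node model
X, y = make_classification(n_samples=500, n_features=3,
                           n_informative=3, n_redundant=0, random_state=0)
model = xgb.XGBClassifier(n_estimators=50).fit(X, y)

# Broadcast so each executor deserializes the model once
bc_model = spark.sparkContext.broadcast(model)

@pandas_udf(DoubleType())
def predict_udf(f1: pd.Series, f2: pd.Series, f3: pd.Series) -> pd.Series:
    features = pd.concat([f1, f2, f3], axis=1).values
    return pd.Series(bc_model.value.predict(features)).astype("float64")

score_df = spark.createDataFrame(pd.DataFrame(X, columns=["f1", "f2", "f3"]))
score_df.withColumn("prediction", predict_udf("f1", "f2", "f3")).show(5)
```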