03-16-2023 02:12 PM
When loading an xgboost model from Databricks-hosted MLflow, following the provided instructions, the input size shown on the job is over 1 TB. Is anyone else using an xgboost.spark model and noticing the same behavior?
Below are some screenshots showing the input size. The job has been running for over 15 minutes just to load the model from MLflow.
03-16-2023 02:19 PM
Getting rid of the call to the full DBFS artifact path seemed to fix the issue for me.
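For anyone hitting the same thing, here is a minimal sketch of the idea: instead of passing the fully resolved dbfs:/ artifact path, hand MLflow a runs:/ URI and let it resolve the location. The run ID, artifact name, and the pyfunc flavor shown in the comments are hypothetical placeholders, not details from this thread.

```python
# Hypothetical run ID and artifact name -- substitute your own values.
run_id = "0123456789abcdef"
artifact_name = "model"

# Problematic pattern: building and passing the full DBFS artifact path
# yourself, e.g. "dbfs:/databricks/mlflow-tracking/<exp>/<run>/artifacts/model".
# Alternative pattern: use a runs:/ model URI and let MLflow resolve it.
model_uri = f"runs:/{run_id}/{artifact_name}"

# In a Databricks session you would then load the model with the matching
# flavor, for example (requires mlflow):
#   import mlflow
#   model = mlflow.pyfunc.load_model(model_uri)
print(model_uri)
```

Whether the runs:/ URI avoids the oversized input scan in your environment may depend on your MLflow and Databricks runtime versions.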
04-26-2024 02:07 AM
Thank you very much @Data_Cowboy! I had the same issue; mine even showed 14 TiB.
Databricks should really fix this.
04-26-2024 05:38 AM
@dbx-user7354 Glad to hear this solution worked out for you. It makes me feel good that I came back and answered my own post.