
Sharing output between different tasks of an MLOps pipeline run as a Databricks Job

rahuja
New Contributor III

Hello everyone,

We are trying to create an ML pipeline on Databricks using Databricks Workflows. Our pipeline currently has three major components: data ingestion, model training, and model testing. My question is whether it is possible to share the output of one task with another (i.e. to pass the data generated by the ingestion task to the model training task). Currently we save the data to a DBFS volume and read it back from there, but I believe this approach would break down if the dataset gets too big. Is there a more elegant way to pass output from one task to another, perhaps similar to what we can do when creating an Azure ML pipeline?
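
For illustration, our current approach looks roughly like the following sketch (the volume path is only a placeholder, not our real location):

# Ingestion task: write the processed dataset to a shared volume path
raw_df = spark.read.json("/Volumes/main/default/landing/events/")
processed_df = raw_df.dropna()
processed_df.write.mode("overwrite").parquet("/Volumes/main/default/mlops/processed_data")

# Training task: read the same path back in
train_df = spark.read.parquet("/Volumes/main/default/mlops/processed_data")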

#MachineLearning #DataScience #MLOps

4 REPLIES

Hkesharwani
Contributor II

Hi,
There is a way to share values from one task to another, but it only works when the notebooks run as tasks inside a workflow (job).

# Code in the task from which you want to pass the value.
dbutils.jobs.taskValues.set(key='first_notebook_list', value=<value or variable you want to pass>)

# Code in the notebook in which you want to access the value set by the previous task.
list_object = dbutils.jobs.taskValues.get(
    taskKey="<task_name_from_which_value_to_be_fetched>",
    key="first_notebook_list",
    default=0,       # returned if the key does not exist
    debugValue=0,    # returned when the notebook is run outside of a job
)

 

Harshit Kesharwani
Data engineer at Rsystema

rahuja
New Contributor III

Hi @Retired_mod, thanks for your quick reply. I will test it out in our scenario and let you know. Just to confirm: if I have two scripts (e.g. ingest.py and train.py), and in my task named "ingest" I run something like this inside ingest.py:

dbutils.jobs.taskValues.set(key="processed_data", value=data)

then should I pass {{tasks.ingest.values.processed_data}} to train.py inside the pipeline?
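
For reference, here is a minimal sketch of how train.py could pick the value up, assuming the producing task is named "ingest" and the key is "processed_data" as above (the parameter name training_input is made up for illustration):

# Option 1: read the task value directly in train.py
data = dbutils.jobs.taskValues.get(
    taskKey="ingest",        # name of the task that called taskValues.set
    key="processed_data",
    debugValue="{}",         # used only when running the notebook interactively
)

# Option 2: let the job inject the value as a task parameter,
# e.g. training_input = {{tasks.ingest.values.processed_data}},
# and read it through a widget in train.py
dbutils.widgets.text("training_input", "")
data = dbutils.widgets.get("training_input")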

rahuja
New Contributor III

@Retired_mod I looked into your solution, and it seems that the value you set or get needs to be JSON-serialisable, which means I cannot pass e.g. a Spark or pandas DataFrame from one step to another directly; I would have to serialise and de-serialise it. Is there a recommended way of passing big data between the various steps of a job?
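
For what it's worth, the workaround we are currently testing is to persist the large dataset as a table and pass only its name through task values; a sketch, with a placeholder table name:

# Ingestion task: persist the large dataset, pass only a reference to it
processed_df = spark.read.json("/Volumes/main/default/landing/events/").dropna()
processed_df.write.mode("overwrite").saveAsTable("main.default.mlops_processed_data")
dbutils.jobs.taskValues.set(key="processed_table", value="main.default.mlops_processed_data")

# Training task: resolve the reference and load the data
table_name = dbutils.jobs.taskValues.get(
    taskKey="ingest",
    key="processed_table",
    debugValue="main.default.mlops_processed_data",
)
train_df = spark.read.table(table_name)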

rahuja
New Contributor III

@Retired_mod @Hkesharwani any updates?
