Hi Databricks Gurus!
I am trying to run a very simple snippet:
data_emp = [["1", "sarvan", "1"], ["2", "John", "2"], ["3", "Jose", "1"]]
emp_columns = ["EmpId", "Name", "Dept"]
df = spark.createDataFrame(data=data_emp, schema=emp_columns)
df.show()
--------
Based on my general understanding, Databricks should create at most 2 jobs:
One to read the data (this is how it works for files; I don't know if it applies here)
One for show()
But it is somehow creating 3 jobs.
Can someone explain why this happens?