Hello,
We are new to Databricks and would like to know whether our working method is good.
Currently, we are working like this:
spark.sql("CREATE TABLE Temp (SELECT avg(***), sum(***) FROM aaa LEFT JOIN bbb WHERE *** >= ***)")
With this method, are we using the full capacity of Databricks, i.e., distributed "MapReduce"-style processing?
Thanks.