Hello Experts,
I am new to Databricks. I am building data pipelines that handle both batch and streaming data.
Should I use the DataFrame API to read the CSV files, convert them to Parquet, and then do the transformations?
or
Should I write the CSV data to a table and then use Spark SQL for the transformations?
I would appreciate the pros and cons of each approach, and which one is better.
Thank you
Rathinam