Hello, I have an S3 data lake containing a structure of files in different formats: JSON, CSV, text, binary, ...
Would you consider this my bronze layer? Or a "pre-bronze" layer, since it can't be processed directly by Spark (because of the mix of file formats)?
How am I supposed to query and transform that data with Databricks, given that it comes in different formats?
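For context, the only approach I currently see is reading each format with its own Spark reader, something like the sketch below (the bucket and prefixes are placeholders for my actual layout):

```python
# Hypothetical paths; this assumes each prefix holds files of a single format
df_json = spark.read.json("s3://my-bucket/raw/events/")
df_csv = spark.read.option("header", "true").csv("s3://my-bucket/raw/exports/")
df_text = spark.read.text("s3://my-bucket/raw/logs/")
```

But that means one pipeline per format, which doesn't feel like a single bronze layer.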
Should I instead first transform the data and load it into a Delta table with columns like:
- metadata (map column)
- content (binary column)
In this case, would Auto Loader be relevant? A minimal sketch of what I have in mind follows.
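To make the idea concrete, here is a rough sketch assuming Auto Loader with the `binaryFile` format; the bucket, checkpoint path, and table name are placeholders:

```python
from pyspark.sql import functions as F

# Auto Loader ingests every file as raw bytes, whatever its format
raw = (
    spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "binaryFile")
    .load("s3://my-bucket/raw/")
)

# binaryFile yields path, modificationTime, length, and content; pack the
# file metadata into a map column and keep the raw bytes as a binary column
bronze = raw.select(
    F.create_map(
        F.lit("path"), F.col("path"),
        F.lit("modificationTime"), F.col("modificationTime").cast("string"),
        F.lit("length"), F.col("length").cast("string"),
    ).alias("metadata"),
    F.col("content"),
)

# Write to a Delta table (placeholder checkpoint and table names)
(
    bronze.writeStream
    .option("checkpointLocation", "s3://my-bucket/_checkpoints/bronze_raw")
    .trigger(availableNow=True)
    .toTable("bronze.raw_files")
)
```

The silver layer would then parse `content` per format (from_json, CSV parsing, etc.) based on the file extension stored in `metadata`. Does this pattern make sense, or is it an anti-pattern?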