I would need to know a little more about your scenario, but it reminds me of a similar case I faced. My approach was to use the silver layer to create a Delta table with an enforced schema, standardized field names and types, etc., where we could perform the typical actions such as cleaning, adjusting, or running quality checks before moving aggregated data to the gold layer.
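For illustration, here is a minimal sketch of what such an enforced-schema silver table could look like on Databricks (the catalog, table, and column names are hypothetical; `spark` is the session a Databricks notebook provides):

```python
# Minimal sketch of an enforced-schema silver table (all names hypothetical).
# Delta enforces this schema on write, so a batch with mismatched columns
# or types is rejected instead of silently widening the table.
spark.sql("""
    CREATE TABLE IF NOT EXISTS silver.events (
        client_id    STRING NOT NULL,
        event_ts     TIMESTAMP,
        amount       DECIMAL(18, 2),
        source_file  STRING,
        ingested_at  TIMESTAMP
    )
    USING DELTA
""")
```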
So, what about the bronze layer? We decided to store only the raw files in a volume of a catalog placed in the bronze layer, along with a Delta table that registers each file's JSON metadata together with its URI. In our use case, we process file by file for many clients sending data with very different schemas, so why have a bronze table with thousands of fields that are never processed jointly, or a table per client with the same structure as the files? It made no sense for us. If we ever need to recover bronze data, we simply have the registered file and know where it is.
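To make the idea concrete, here is a rough sketch of registering one incoming file (the volume path, table name, and metadata fields are made up for the example):

```python
import json
from datetime import datetime, timezone
from pyspark.sql import Row

# Hypothetical example: the raw file already sits in a Unity Catalog volume.
file_uri = "/Volumes/bronze/raw/files/client_a/2024-06-01.json"

# One registry row per received file: where it lives plus its JSON metadata.
entry = Row(
    file_uri=file_uri,
    client="client_a",
    metadata=json.dumps({"schema_version": "v3", "rows": 1024}),
    registered_at=datetime.now(timezone.utc).isoformat(),
)

(spark.createDataFrame([entry])
      .write.format("delta")
      .mode("append")
      .saveAsTable("bronze.file_registry"))
```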
Regardless of whether you decide to put the JSON as a field in a Delta table or keep it outside as we did, go to the silver layer with a standardized schema where you clean, adjust, run quality checks, etc.
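A rough sketch of that promotion step, reusing the hypothetical registry and silver table from above (the client-specific source columns `clientId`, `eventTime`, and `value` are invented for the example):

```python
from pyspark.sql import functions as F

# Pick one registered file from the hypothetical bronze registry.
entry = (spark.table("bronze.file_registry")
              .orderBy(F.desc("registered_at"))
              .first())

# Map the client-specific fields onto the standardized silver schema.
df = (spark.read.json(entry.file_uri)
      .select(
          F.col("clientId").alias("client_id"),
          F.to_timestamp("eventTime").alias("event_ts"),
          F.col("value").cast("decimal(18,2)").alias("amount"),
      )
      .withColumn("source_file", F.lit(entry.file_uri))
      .withColumn("ingested_at", F.current_timestamp()))

# Simple quality check: rows missing the required key never reach silver.
clean = df.filter(F.col("client_id").isNotNull())
clean.write.format("delta").mode("append").saveAsTable("silver.events")
```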
I hope this helps.