Hello,
We are using Delta Live Tables to ingest data from multiple business groups, each with different input file formats and parsing requirements. The input files land in Azure Blob Storage. Right now we are only servicing three business groups, and for proof-of-concept purposes we maintain three separate parsing scripts, each tailored to the requirements of one group. In the future we could be servicing hundreds of business groups, and we do not want to maintain a separate parsing script for every group that opts in.

Does Databricks offer a solution for generic, configuration-driven file processing? Additionally, a group's input file format may change over time as its business processes change. Can Databricks adapt automatically to changing file formats?
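To make the question concrete, here is a rough sketch of the direction we are imagining: a single generic parser driven by per-group configuration, instead of one hand-written script per group. All names, groups, and fields below are made up for illustration and are not our actual code.

```python
# Hypothetical sketch: one generic parser whose behavior comes from
# per-group config, so onboarding a new group is a config change,
# not a new script. Group names and fields are invented examples.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class GroupConfig:
    """Parsing rules for one business group, loaded from a config store."""
    delimiter: str
    columns: List[str]
    # Per-column renames let us absorb upstream format changes by
    # updating config rather than code.
    renames: Dict[str, str] = field(default_factory=dict)


# In practice this registry could live in a Delta table or a control file.
CONFIGS: Dict[str, GroupConfig] = {
    "group_a": GroupConfig(delimiter=",", columns=["id", "amount", "ts"]),
    "group_b": GroupConfig(
        delimiter="|",
        columns=["record_id", "total", "timestamp"],
        renames={"record_id": "id", "total": "amount", "timestamp": "ts"},
    ),
}


def parse_line(group: str, line: str) -> Dict[str, str]:
    """Parse one raw line into a normalized record using the group's config."""
    cfg = CONFIGS[group]
    values = line.rstrip("\n").split(cfg.delimiter)
    record = dict(zip(cfg.columns, values))
    # Normalize column names so downstream tables see one common schema.
    return {cfg.renames.get(k, k): v for k, v in record.items()}
```

With this shape, `parse_line("group_b", "2|5.00|2024-01-02")` yields the same normalized keys (`id`, `amount`, `ts`) as a `group_a` record, so downstream tables are unaffected by per-group formats. Is there a Databricks-native way to achieve this, e.g. via Auto Loader or DLT, rather than rolling our own?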