Issues with Common Data Model as Source - different column size for blobs
06-12-2024 01:17 AM
I have an Azure Synapse Link for Dataverse set up to export data into ADLS Gen2, and I am connecting ADLS Gen2 as a data source to read those files in Databricks. CDC is enabled for the CDM data, partitioned by year and month.
For example, if I receive 10 CSV files for the Product table, the number of columns varies across the files, but there is only a single model.json describing all of them. This column variation causes schema issues when reading the data.
Labels: Spark
06-12-2024 02:33 AM - edited 06-12-2024 02:35 AM
This is a thoughtful consideration, but have you considered using
.option("mergeSchema", "true")
when writing?
Do keep in mind that this will affect the target table and possibly downstream consumers. Ideally you want a strict schema contract with your data suppliers to avoid these issues. You could also consider generating the model.json dynamically from the headers of the files you receive.
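To illustrate that last suggestion, here is a minimal sketch of deriving a unified column list from the headers of drifting CSV snapshots and emitting a model.json-style fragment from it. The file contents, entity name, and the assumption that every attribute can be declared as a string are all hypothetical; a real model.json would carry proper CDM data types and more metadata.

```python
import csv
import io
import json

def unified_columns(csv_texts):
    """Union of header columns across CSV snapshots, preserving first-seen order."""
    cols, seen = [], set()
    for text in csv_texts:
        header = next(csv.reader(io.StringIO(text)))
        for c in header:
            if c not in seen:
                seen.add(c)
                cols.append(c)
    return cols

def build_model_json(entity_name, columns):
    """Minimal CDM-style model.json fragment (names only; real types omitted)."""
    return {
        "name": entity_name,
        "entities": [{
            "$type": "LocalEntity",
            "name": entity_name,
            # Hypothetical simplification: declare everything as string
            "attributes": [{"name": c, "dataType": "string"} for c in columns],
        }],
    }

# Two hypothetical Product snapshots where a column was added over time
old = "id,name,price\n1,Widget,9.99\n"
new = "id,name,price,category\n2,Gadget,19.99,tools\n"
cols = unified_columns([old, new])
model = build_model_json("Product", cols)
print(cols)  # ['id', 'name', 'price', 'category']
print(json.dumps(model, indent=2))
```

In practice you would read only the header line of each blob in the partition folder, regenerate the model.json (or an explicit Spark schema) from the union, and then read the CSVs against that schema so older files simply get nulls for the newer columns.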

