- 3810 Views
- 3 replies
- 1 kudos
I have JSON files landing in our blob storage with two fields: 1. offset (integer), 2. value (base64). The value column is JSON containing Unicode characters, so it was sent as base64. The challenge is that this JSON is very large, with 100+ fields, so we cannot define the schema. We c...
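One possible approach is sketched below. It is a minimal sketch under assumptions: the landing path is hypothetical, the payload decodes to UTF-8 JSON, and the inner schema is inferred from the decoded strings (via the classic read-an-RDD-of-JSON-strings trick) instead of being hand-written for 100+ fields.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical landing path; the outer files only have the two known fields.
raw = spark.read.json("/mnt/blob/landing/*.json")

# Decode the base64 payload into a UTF-8 JSON string.
decoded = raw.withColumn("value_json", F.unbase64("value").cast("string"))

# Let Spark infer the inner schema from the decoded strings rather than
# declaring 100+ fields by hand (reading an RDD of JSON strings is an old
# but still functional way to do this; schema_of_json is an alternative).
inferred_schema = spark.read.json(
    decoded.select("value_json").rdd.map(lambda r: r[0])
).schema

# Parse each decoded string with the inferred schema and flatten it.
parsed = decoded.withColumn("value_struct", F.from_json("value_json", inferred_schema))
parsed.select("offset", "value_struct.*").show(truncate=False)
```

Inferring the schema from the data only works reliably if the sample covers all fields; pinning a few critical columns explicitly is safer if they must keep a fixed type.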
Latest Reply
Hi @MerelyPerfect Per, hope all is well! Just wanted to check in: were you able to resolve your issue, and if so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you....
2 More Replies
- 11999 Views
- 2 replies
- 6 kudos
When there is a schema change while reading from and writing to a stream, will the schema change be handled automatically by Spark, or do we need to include the option mergeSchema = true? E.g.: df.writeStream .option("mergeSchema", "true") .format("delta") .out...
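For context, a minimal sketch of what the full streaming write might look like, assuming a streaming DataFrame df and hypothetical checkpoint/table paths. Additive changes (new columns) are not merged automatically unless mergeSchema is set on the write (or spark.databricks.delta.schema.autoMerge.enabled is enabled for the session):

```python
# Sketch only: paths are hypothetical, df is the streaming DataFrame from the question.
(df.writeStream
   .format("delta")
   .option("mergeSchema", "true")                              # allow new columns to be added to the sink
   .option("checkpointLocation", "/mnt/checkpoints/events")    # hypothetical checkpoint path
   .outputMode("append")
   .start("/mnt/delta/events"))                                # hypothetical Delta table path
```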
Latest Reply
mergeSchema doesn't support all operations; in some cases .option("overwriteSchema", "true") is needed. mergeSchema doesn't support:
- Dropping a column
- Changing an existing column's data type (in place)
- Renaming column names that differ only by case (e.g...
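For the cases mergeSchema cannot handle, a minimal sketch of the overwriteSchema path (batch overwrite of a Delta table; the target path is hypothetical, and this rewrites both the data and the schema):

```python
# Sketch only: use when columns are dropped, retyped in place, or renamed only by case.
(df.write
   .format("delta")
   .mode("overwrite")
   .option("overwriteSchema", "true")   # replace the table schema instead of merging into it
   .save("/mnt/delta/events"))          # hypothetical Delta table path
```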
1 More Reply
- 9711 Views
- 7 replies
- 2 kudos
Hi All, I am loading some data using Auto Loader but am having trouble with schema evolution. A new column has been added to the data I am loading, and I am getting the following error: StreamingQueryException: Encountered unknown field(s) during parsing:...
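For reference, a minimal sketch of an Auto Loader stream configured for schema evolution; all paths are hypothetical. With the addNewColumns mode (the default), the stream fails once when a new column first appears and picks the column up when restarted:

```python
# Sketch only: hypothetical source, schema, checkpoint, and target paths.
df = (spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.schemaLocation", "/mnt/schemas/events")      # where inferred schemas are tracked
        .option("cloudFiles.schemaEvolutionMode", "addNewColumns")       # evolve on new columns (fails once, then restart)
        .load("/mnt/landing/events"))

(df.writeStream
   .format("delta")
   .option("checkpointLocation", "/mnt/checkpoints/events")
   .option("mergeSchema", "true")   # let the evolved column through to the Delta sink
   .start("/mnt/delta/events"))
```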
Latest Reply
I agree that hints are the way to go if you have the schema available, but the whole point of schema evolution is that you might not always know the schema in advance. I received a similar error with a similar streaming query configuration. The issue w...
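Where only part of the schema is known, a minimal sketch of combining the two: pin the known columns with schema hints and let Auto Loader infer and evolve the rest (column names and paths here are hypothetical):

```python
# Sketch only: hints fix the types of known columns; everything else is inferred.
df = (spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.schemaLocation", "/mnt/schemas/events")                 # hypothetical
        .option("cloudFiles.schemaHints", "amount DECIMAL(18,2), event_ts TIMESTAMP")  # hypothetical columns
        .load("/mnt/landing/events"))                                               # hypothetical
```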
6 More Replies