@zmsoft
Since the JSON is a single-line file (one complete JSON record per line), make sure it is being read that way. Try setting the multiLine option to false explicitly (it defaults to false, but setting it explicitly ensures the file is parsed line by line):
stageDf = (
    spark.read.format("json")
    .option("multiLine", "false")
    # Spark cannot read directly over https; use the wasbs:// (or abfss://) scheme
    .load("wasbs://insights-activity-logs@xxxx.blob.core.xxxx.xx/xxxx/PT1H.json")
)
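To illustrate what multiLine=false means, here is a minimal, Spark-free sketch using Python's standard json module (the payload below is made-up sample data, not from your actual PT1H.json): with single-line JSON, each line of the file is a complete JSON document, which is how activity-log exports are typically laid out.

```python
import json

# Two made-up activity-log records, one JSON object per line (JSON Lines layout)
payload = (
    '{"time": "2024-01-01T00:00:00Z", "operationName": "Write"}\n'
    '{"time": "2024-01-01T01:00:00Z", "operationName": "Read"}'
)

# multiLine=false behaves like this: parse each line independently
records = [json.loads(line) for line in payload.splitlines()]
print(len(records))  # 2
```

If the file were instead one JSON document pretty-printed across many lines, this line-by-line parsing would fail, and you would need multiLine set to true.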
If you are still encountering the issue after applying the above settings, check for schema mismatches: inspect the inferred schema, and if it differs from the target table's schema, set the overwriteSchema option so the table schema can be updated on write:
# Inspect the schema and contents of the loaded DataFrame to ensure they are correct
stageDf.printSchema()
stageDf.show(truncate=False)

# Overwrite the Delta table, replacing its schema if it differs
# (tempTableName is assumed to hold the target table name)
stageDf.write.format("delta").mode("overwrite").option("overwriteSchema", "true").saveAsTable(tempTableName)