I have nested data with more than 200 columns, which I have extracted into a JSON file.
When I use the code below to read the JSON file, any column that has no value in any record is left out of the inferred schema entirely.
from pyspark.sql import SparkSession

# Create a SparkSession
# (note: spark.sql.jsonGenerator.ignoreNullFields only affects *writing* JSON,
# not reading it, so it does not change the inferred schema here)
spark = (
    SparkSession.builder.master("local[1]")
    .config("spark.sql.jsonGenerator.ignoreNullFields", "false")
    .getOrCreate()
)

# Schema inference is the default behavior for the JSON reader
df = spark.read.option("multiline", "true").json("file.json")
df.printSchema()
I know I can define an explicit schema and pass it to the reader, which I think would solve the issue,
but I wanted to know if there is an alternate approach to this.
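For reference, this minimal sketch is what I mean by passing an explicit schema; the column names below are hypothetical stand-ins for a couple of my 200+ columns:

from pyspark.sql.types import StructType, StructField, StringType, LongType

# Hypothetical schema covering just two columns, for illustration;
# in practice every column (including the always-null ones) would be listed.
schema = StructType([
    StructField("id", LongType(), True),
    StructField("details", StructType([
        StructField("name", StringType(), True),
        StructField("optional_field", StringType(), True),  # null in every record
    ]), True),
])

# Passing an explicit schema skips inference, so the all-null columns are kept
df = spark.read.option("multiline", "true").schema(schema).json("file.json")
df.printSchema()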
Also, can anyone help me with how to write this nested data to a streaming table in the bronze layer?
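This is roughly the direction I was imagining, assuming a Databricks environment with Auto Loader (the cloudFiles source) landing into a Delta bronze table; the paths and the table name bronze.raw_events are placeholders:

# Read the JSON files incrementally as a stream with Auto Loader
bronze_df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("multiLine", "true")
    .schema(schema)  # the explicit schema from above, so null-only columns survive
    .load("/path/to/landing/json/")
)

# Append the nested records as-is to a Delta table in the bronze layer
(
    bronze_df.writeStream.format("delta")
    .option("checkpointLocation", "/path/to/checkpoints/bronze/")
    .trigger(availableNow=True)
    .toTable("bronze.raw_events")
)

Is something like this the right approach, or is there a better pattern for nested data?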