I am trying to create two streaming tables in one DLT pipeline. Both read JSON data from different locations, and each has a different schema. The pipeline executes, but no data is inserted into either table.
Whereas when I run each table individually, they execute perfectly.
Is it because DLT cannot process two different streaming tables at once?
# Note: "header" is a CSV option and has no effect on a JSON source
DF = spark.readStream.format("json") \
    .schema(schema) \
    .option("nullValue", "") \
    .load(source_path + "/*.json")
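
For context, this is a sketch of how the two tables are declared in the pipeline. The table names, paths, and schemas below are hypothetical stand-ins, and it uses Auto Loader (`cloudFiles`), which Databricks recommends for incremental file ingestion in DLT; this code only runs inside a DLT pipeline, not as a standalone script:

```python
import dlt
from pyspark.sql.types import StructType, StructField, StringType

# Hypothetical schemas -- substitute the real ones
orders_schema = StructType([StructField("order_id", StringType())])
events_schema = StructType([StructField("event_id", StringType())])

@dlt.table(name="orders_raw")
def orders_raw():
    # Auto Loader tracks which files have already been ingested;
    # a plain readStream on a directory can silently skip files it
    # does not consider new
    return (spark.readStream
            .format("cloudFiles")
            .option("cloudFiles.format", "json")
            .schema(orders_schema)
            .load("/mnt/source/orders"))

@dlt.table(name="events_raw")
def events_raw():
    # Second streaming table with its own schema and source path;
    # DLT runs both declarations in the same pipeline update
    return (spark.readStream
            .format("cloudFiles")
            .option("cloudFiles.format", "json")
            .schema(events_schema)
            .load("/mnt/source/events"))
```

Defining multiple streaming tables in one pipeline is supported; DLT builds a dependency graph from all `@dlt.table` functions and updates them together.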