Does the pre-partitioning of a Delta table have an influence on the number of "default" partitions of a DataFrame when reading the data?
Put differently, when reading from a Delta table with Spark Structured Streaming, is the number of DataFrame partitions somehow derived from the partitioning of the Delta table? The analogy here is what happens when reading from a Kafka source: there is a 1:1 mapping between the topic partitions and the DataFrame partitions.