If the streaming read is defined with something similar to:
```scala
val df = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
  .option("subscribePattern", "topic.*")
  .option("startingOffsets", "earliest")
  .load()
```
then resetting (deleting) the checkpoint would cause the query to fall back to `startingOffsets` and read from the earliest available record in each matching topic. Whether this results in a full reload of the table depends on the topic's `retention.ms`: records that have already expired from Kafka are gone and will not be reprocessed. For example, with `retention.ms = 604800000` (7 days), at most the last 7 days of data would be replayed.
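For context, a minimal sketch of the write side this applies to. The checkpoint directory named in `checkpointLocation` is what "resetting" removes; on the next start with no checkpoint present, Spark consults `startingOffsets` again. The sink format, checkpoint path, and output path below are hypothetical placeholders, not taken from the original:

```scala
// Hypothetical sink for the df defined above.
// Deleting /checkpoints/my-query (placeholder path) before a restart is the
// "checkpoint reset": the query then re-resolves offsets via startingOffsets.
val query = df
  .writeStream
  .format("parquet")                                     // assumed sink format
  .option("checkpointLocation", "/checkpoints/my-query") // hypothetical path
  .start("/tables/my-table")                             // hypothetical output path

query.awaitTermination()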