A Delta Live Tables pipeline reads a Delta table on Databricks. Is it possible to limit the size of each microbatch during the data transformation?
I am thinking of the approach used in Spark Structured Streaming, which controls batch size with options such as:
.option("maxBytesPerTrigger", 104857600)
.option("maxFilesPerTrigger", 100)
Is a similar option applicable in Delta Live Tables?
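For context, here is a minimal sketch of what I am trying to achieve. The table name `source_table` and the function name are placeholders, and I am assuming the Delta streaming reader options can be passed to `spark.readStream` inside a DLT table definition the same way as in plain Structured Streaming:

```python
import dlt

@dlt.table(name="transformed")
def transformed():
    # Assumption: the Delta source rate-limiting options behave here
    # as they do in a regular Structured Streaming read.
    return (
        spark.readStream
        .option("maxFilesPerTrigger", 100)        # cap files per microbatch
        .option("maxBytesPerTrigger", 104857600)  # cap bytes per microbatch (~100 MB)
        .table("source_table")                    # placeholder source Delta table
    )
```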