The Databricks docs say:
"If you use maxBytesPerTrigger in conjunction with maxFilesPerTrigger, the micro-batch processes data until either the maxFilesPerTrigger or maxBytesPerTrigger limit is reached."
But based on the source code, this doesn't appear to be true — the two options are mutually exclusive:
val maxBytesPerTrigger: Option[Long] = parameters.get("maxBytesPerTrigger").map { str =>
  Try(str.toLong).toOption.filter(_ > 0).map(op =>
    if (maxFilesPerTrigger.nonEmpty) {
      throw new IllegalArgumentException(
        "Options 'maxFilesPerTrigger' and 'maxBytesPerTrigger' " +
          "can't be both set at the same time")
    } else op
  ).getOrElse {
    throw new IllegalArgumentException(
      s"Invalid value '$str' for option 'maxBytesPerTrigger', must be a positive integer")
  }
}
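For what it's worth, the check can be reproduced in isolation. Here is a minimal, self-contained sketch of the same parsing logic (the object name `TriggerOptionsSketch` and the `parse` helper are mine, not Spark's) showing that supplying both options throws rather than combining the limits:

```scala
import scala.util.Try

// Minimal sketch (not the actual Spark class) of the option parsing above.
// Passing both options triggers the IllegalArgumentException.
object TriggerOptionsSketch {
  def parse(parameters: Map[String, String]): (Option[Int], Option[Long]) = {
    val maxFilesPerTrigger: Option[Int] =
      parameters.get("maxFilesPerTrigger").map(_.toInt)

    val maxBytesPerTrigger: Option[Long] = parameters.get("maxBytesPerTrigger").map { str =>
      Try(str.toLong).toOption.filter(_ > 0).map { op =>
        if (maxFilesPerTrigger.nonEmpty) {
          throw new IllegalArgumentException(
            "Options 'maxFilesPerTrigger' and 'maxBytesPerTrigger' " +
              "can't be both set at the same time")
        } else op
      }.getOrElse {
        throw new IllegalArgumentException(
          s"Invalid value '$str' for option 'maxBytesPerTrigger', must be a positive integer")
      }
    }
    (maxFilesPerTrigger, maxBytesPerTrigger)
  }
}
```

Calling `TriggerOptionsSketch.parse(Map("maxFilesPerTrigger" -> "10", "maxBytesPerTrigger" -> "1000"))` throws the "can't be both set at the same time" error, while either option on its own parses fine.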
Am I missing something here?