I want to run Auto Loader on some very large JSON files. I don't actually care about the data inside the files, only the file paths of the blobs. If I do something like this:
```python
(spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", source_operations_checkpoint_path)
    .load(source_operations_path)
    .select("_metadata"))
```
will Databricks know not to read the file contents at all, or will it read them anyway and then discard the data?
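For context, here is a rough sketch of the end goal: I just want to land the blob paths (via the `_metadata.file_path` field) into a table. The output table name and the `availableNow` trigger are only placeholders for illustration.

```python
from pyspark.sql.functions import col

# Sketch: record only the blob paths, not the JSON contents.
(spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", source_operations_checkpoint_path)
    .load(source_operations_path)
    # _metadata is the hidden file-metadata column; file_path is the full blob path
    .select(col("_metadata.file_path").alias("file_path"))
    .writeStream
    .option("checkpointLocation", source_operations_checkpoint_path)
    .trigger(availableNow=True)
    .toTable("source_operations_file_paths"))  # placeholder table name
```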