Hi @csmcpherson,
This is currently not supported, but the Databricks team is working on it according to the thread below:
Solved: File information is not passed to trigger job on f... - Databricks Community - 39266
As a workaround, if you use Auto Loader, you can use the _metadata file metadata column:
File metadata column - Azure Databricks | Microsoft Learn
# Auto Loader stream: read the CSV files, attach the file metadata
# (_metadata) as a column, and write to a Delta target.
# schema, checkpointLocation, and targetTable are placeholders you define.
spark.readStream \
  .format("cloudFiles") \
  .option("cloudFiles.format", "csv") \
  .schema(schema) \
  .load("abfss://my-bucket/csvData") \
  .selectExpr("*", "_metadata as source_metadata") \
  .writeStream \
  .format("delta") \
  .option("checkpointLocation", checkpointLocation) \
  .start(targetTable)
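
If you only need particular attributes rather than the whole struct, you can project individual fields out of _metadata (for example file_path and file_name). A minimal sketch, reusing the same placeholders as above; the output column names source_file_path and source_file_name are just examples:

# Select specific file metadata fields instead of the full _metadata struct
df = (
    spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "csv")
    .schema(schema)
    .load("abfss://my-bucket/csvData")
    .selectExpr(
        "*",
        "_metadata.file_path AS source_file_path",
        "_metadata.file_name AS source_file_name",
    )
)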