@Brahmareddy just to follow up, those properties above worked as expected in some small tests when I changed the directory WITHIN a bucket, but a few more details mattered when changing the bucket itself:
1. When changing buckets without triggering a full reset, I needed to set the Spark configuration `spark.databricks.cloudFiles.checkSourceChanged false` [1] to keep the stream going (see the first sketch after this list). Otherwise you end up with a `StreamingQueryException`: The bucket in the file event `{"backfill":{"bucket":"<bucket-name>","key":"<path-to-key>","size":<size>,"eventTime":<unix-time>}}` is different from expected by the source: `<new-bucket-name>`
2. However, I should have just triggered a full pipeline reset and taken advantage of that "no reset allowed" table property on the bronze layer to keep it as is (second sketch below). The `includeExistingFiles=False` Auto Loader option only takes effect the FIRST time the stream runs, so when I did a regular pipeline update with the new destination it re-processed all the old data I had copied there as if it were new.
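
For anyone landing here later, this is roughly the shape of what I mean for point 1. It's a minimal sketch, not my actual pipeline: the source format, paths, and table name are placeholders, and in a DLT pipeline the same key/value would go into the pipeline's configuration settings rather than `spark.conf.set`.

```python
# Sketch: keep an existing Auto Loader stream running after the source bucket
# changes, without a full reset. `spark` is the ambient Databricks session.
# All paths/names below are placeholders.

# Tell Auto Loader not to fail when the bucket in old file events differs
# from the current source bucket (see the KB article in [1]).
spark.conf.set("spark.databricks.cloudFiles.checkSourceChanged", "false")

df = (
    spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "json")        # whatever your source format is
    .load("s3://new-bucket-name/landing/")      # new bucket, same checkpoint as before
)

(
    df.writeStream
    .option("checkpointLocation", "s3://checkpoint-bucket/_checkpoints/bronze")
    .toTable("bronze_events")
)
```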
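And for point 2, this is roughly what the bronze table definition would look like with that "no reset allowed" property, so a full refresh leaves the table intact while the rest of the pipeline resets. Again just a sketch with placeholder names, not my real code.

```python
import dlt

# Sketch of a bronze DLT table that survives a full pipeline refresh.
# "pipelines.reset.allowed": "false" keeps this table from being truncated and
# re-backfilled on a full reset; cloudFiles.includeExistingFiles is only
# honored on the stream's very first run, as noted above.
@dlt.table(
    name="bronze_events",  # placeholder table name
    table_properties={"pipelines.reset.allowed": "false"},
)
def bronze_events():
    return (
        spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", "json")                 # placeholder source format
        .option("cloudFiles.includeExistingFiles", "false")  # first run only
        .load("s3://new-bucket-name/landing/")               # placeholder path
    )
```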
In retrospect it would probably have been simpler to copy the data to a different folder in the S3 bucket and then move it back together with the new data once a full backfill was needed.
Regardless, thanks for the suggestions.
[1] https://kb.databricks.com/en_US/streaming/error-when-trying-to-run-an-auto-loader-job-that-uses-clou...