pradeep_singh
Contributor

You won't be able to reuse the checkpoint in the new cloud instance, since it retains a lot of information that's specific to the source cloud (path structure/URIs, provider-specific file identifiers and listing metadata, schema inference history, and streaming offsets). That makes it reusable only if the destination mirrors the same storage layout and identifiers exactly, which is not the case here.
The right approach is to migrate the data (and the Delta transaction logs) to the new cloud, re-register the tables, and start the pipeline with a fresh checkpoint at cutover, optionally rebuilding silver/gold from bronze, while filtering arrivals around the cutover point to avoid reprocessing.
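The cutover-filter idea can be sketched in plain Python: pick a cutover timestamp, let the migrated copy cover everything modified before it, and let the fresh-checkpoint stream pick up only what arrives after it. The function and field names below are illustrative, not a Databricks API; in a real pipeline this would typically be a filter on a file-modification or ingestion-time column in the stream definition.

```python
from datetime import datetime, timezone

# Hypothetical cutover timestamp: data modified before this point is
# covered by the migrated copy; anything at or after it must come
# through the stream with the fresh checkpoint.
CUTOVER = datetime(2024, 1, 1, tzinfo=timezone.utc)

def partition_by_cutover(files, cutover=CUTOVER):
    """Split file listings into (migrated_batch, fresh_stream) buckets.

    `files` is an iterable of (path, modified_at) pairs; the names are
    for illustration only.
    """
    batch, stream = [], []
    for path, modified_at in files:
        # Pre-cutover files belong to the one-time migration; the rest
        # are left for the new stream, so nothing is processed twice.
        (batch if modified_at < cutover else stream).append(path)
    return batch, stream
```

Applying the same predicate consistently on both sides (migration job and new stream) is what guarantees each file is processed exactly once.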

Thank You
Pradeep Singh - https://www.linkedin.com/in/dbxdev