Hey there, Wilson-Mok!
Multi-cloud implementation with Databricks is an interesting challenge, isn't it? You're on the right track thinking about data synchronization.
One way to keep data in sync between clouds is a replication pipeline built on Delta Lake, orchestrated with an external tool such as Apache Airflow or dbt (data build tool). Delta Lake's deep clone is useful here: re-running it periodically replicates data from one cloud's storage to the other, as sketched below.
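To make the deep clone idea concrete, here's a minimal sketch of the statement an Airflow-triggered Databricks job could run on a schedule. The table names and the target storage path are illustrative assumptions, and it presumes the workspace is already able to authenticate to the other cloud's object storage:

```python
# Minimal sketch of periodic cross-cloud replication with Delta deep clone.
# Assumes this runs in a Databricks notebook/job, where `spark` is predefined.
# `sales.orders` and the abfss:// target path are hypothetical placeholders.

# CREATE OR REPLACE ... DEEP CLONE is incremental on re-runs: only files
# added or changed in the source since the last clone are copied over.
spark.sql("""
    CREATE OR REPLACE TABLE sales.orders_replica
    DEEP CLONE sales.orders
    LOCATION 'abfss://replica@examplestorage.dfs.core.windows.net/sales/orders'
""")
```

Because re-running the same statement only copies new and changed files, a simple daily Airflow task that submits this one statement is often enough to keep the replica current.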
However, when it comes to managed tables, things can get a bit tricky. You'll need the metastore that holds your managed tables' metadata to be reachable from both clouds. One option is an external Hive metastore shared by workspaces on both clouds as a unified metadata repository. There's also a useful overview of the broader problem here: Cloud Data Migration Challenges: Explore 6 Best Strategies in 2023.
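If you go the shared external Hive metastore route, each workspace's clusters can be pointed at the same metastore database via cluster Spark config. Here's a sketch; the config keys are the standard ones Databricks reads for an external Hive metastore, but the host, database name, versions, and secret scope names are placeholders you'd replace with your own (and the `#` lines are annotation only, to be dropped from the actual config):

```
# Cluster Spark config (sketch): point a Databricks cluster at a shared
# external Hive metastore. Endpoint and credentials below are placeholders.
spark.sql.hive.metastore.version 3.1.0
spark.sql.hive.metastore.jars maven
spark.hadoop.javax.jdo.option.ConnectionURL jdbc:mysql://metastore-host:3306/hive_metastore
spark.hadoop.javax.jdo.option.ConnectionDriverName org.mariadb.jdbc.Driver
spark.hadoop.javax.jdo.option.ConnectionUserName {{secrets/metastore/user}}
spark.hadoop.javax.jdo.option.ConnectionPassword {{secrets/metastore/password}}
```

Applying the same config to clusters in both clouds means a table registered from one side is immediately visible to the other, which is what makes the replicated managed tables usable rather than just copied.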
As for reference material, the official Databricks documentation includes valuable insights and best practices for multi-cloud setups. Community forums and blogs are also worth a look for lessons from real-world experiences.