DABs are useful but not sufficient. They work well for re-creating control-plane assets such as jobs, notebooks, DLT/Lakeflow pipelines, and model serving endpoints in a target workspace, even across clouds, by using environment-specific targets and variables.
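To make the cross-cloud part concrete before getting into the gaps, here is a minimal databricks.yml sketch. The bundle name, workspace URLs, job, and node types are placeholders I'm assuming for illustration; the pattern to note is a single bundle with a per-target variable override for the cloud-specific bits.

```yaml
bundle:
  name: my_pipelines                # hypothetical bundle name

variables:
  node_type:
    description: Worker node type for the current cloud
    default: Standard_DS3_v2        # Azure default

targets:
  azure_prod:
    workspace:
      host: https://adb-1111111111111111.11.azuredatabricks.net   # placeholder URL
  gcp_prod:
    workspace:
      host: https://1111111111111111.1.gcp.databricks.com          # placeholder URL
    variables:
      node_type: n2-standard-4      # GCP override

resources:
  jobs:
    nightly_etl:
      name: nightly_etl
      tasks:
        - task_key: main
          notebook_task:
            notebook_path: ./src/nightly_etl.py
          new_cluster:
            spark_version: 15.4.x-scala2.12
            node_type_id: ${var.node_type}
            num_workers: 2
```

Running `databricks bundle deploy -t gcp_prod` then creates the same job in the GCP workspace with the overridden node type.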
1) DAB does not migrate Unity Catalog metastores, data, or tables. Because UC metastores are scoped to a single cloud (and region), you must create a new GCP metastore, recreate catalogs, schemas, and permissions, and migrate the data separately: SYNC for external tables, CTAS or DEEP CLONE for managed tables (see the sketch after this list).
2) For classic compute, there is no 1:1 mapping between Azure and GCP VM types. DAB can reference or define compute, but you must remap node types and cloud-specific settings (for example, azure_attributes vs. gcp_attributes) for GCP; a small remapping sketch also follows this list. In practice, teams export clusters and jobs with the Databricks Terraform Resource Exporter, update node types, storage paths, secrets, and cloud attributes, and then apply them to the GCP workspace.
3) Like I said above, use the Terraform Resource Exporter to lift-and-shift existing workspace assets, migrate UC metadata and data explicitly, and then adopt DAB for ongoing CI/CD to manage jobs, pipelines, and serving endpoints going forward. In short: Terraform for the initial migration, DAB for long-term, clean, multi-environment deployments.
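To make point 1 concrete, here is a minimal sketch of the per-table moves, meant to run in a notebook on the new GCP workspace after the underlying files have been copied to GCS and an external location/storage credential is in place. All catalog, schema, table, and bucket names are placeholders, and the SYNC example assumes the external table definition has already been recreated in hive_metastore on the new workspace pointing at the copied data.

```python
from pyspark.sql import SparkSession

# In a Databricks notebook `spark` is predefined; getOrCreate() simply
# returns that existing session.
spark = SparkSession.builder.getOrCreate()

# External table: upgrade the hive_metastore definition into Unity Catalog
# in place (metadata only, no data copy).
spark.sql("""
    SYNC TABLE gcp_catalog.sales.orders
    FROM hive_metastore.sales.orders
""")

# Managed table: copy data and metadata into a UC managed table.
# A plain CTAS (CREATE TABLE ... AS SELECT ...) works as well; DEEP CLONE
# also carries over Delta table properties.
spark.sql("""
    CREATE TABLE IF NOT EXISTS gcp_catalog.sales.customers
    DEEP CLONE delta.`gs://my-migration-bucket/export/customers`
""")
```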
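And for point 2, node-type remapping is mostly a mechanical rewrite over the exported specs. The sketch below is illustrative only: the Azure-to-GCP pairs are rough size equivalents I'm assuming for the example, and exported_job.json stands in for one job spec pulled from the Azure workspace (via the Terraform exporter or the Jobs API); validate the mapping against your real workloads.

```python
import json

# Rough Azure -> GCP size equivalents; these pairs are assumptions for the
# example only -- choose the actual machine types based on your workloads.
NODE_TYPE_MAP = {
    "Standard_DS3_v2": "n2-standard-4",   # 4 vCPU, general purpose
    "Standard_DS4_v2": "n2-standard-8",   # 8 vCPU, general purpose
    "Standard_E8s_v3": "n2-highmem-8",    # 8 vCPU, memory-optimized
}

def remap_node_types(cluster_spec: dict) -> dict:
    """Swap Azure VM types for GCP machine types in a cluster spec and drop
    Azure-only attributes that the GCP workspace will not accept."""
    spec = dict(cluster_spec)
    for key in ("node_type_id", "driver_node_type_id"):
        if key in spec:
            spec[key] = NODE_TYPE_MAP.get(spec[key], spec[key])
    spec.pop("azure_attributes", None)  # add gcp_attributes separately if needed
    return spec

# Hypothetical example: load one exported job spec, rewrite its new_cluster
# blocks, and write the result back for import into the GCP workspace.
with open("exported_job.json") as f:
    job = json.load(f)

for task in job.get("tasks", []):
    if "new_cluster" in task:
        task["new_cluster"] = remap_node_types(task["new_cluster"])

with open("exported_job_gcp.json", "w") as f:
    json.dump(job, f, indent=2)
```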