Databricks wants to avoid vendor lock-in, so in theory it is cloud-platform agnostic.
However, this does not work out of the box. All the configuration you did on your Databricks workspace has to be redone on the other cloud platform, not literally the same but conceptually equivalent (e.g. ADLS vs. S3, firewalls, Git integration, JARs, ...).
The code itself will work: I am not aware of any functionality being unavailable on a particular cloud provider, preview features excluded!
The cool part is that DBFS, the file system Databricks works on, is a semantic layer over your physical storage.
So as long as your DBFS paths are the same across providers, you will be fine.
But any other configuration still has to be taken into account.
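To make the DBFS point concrete, here is a minimal sketch of the idea: notebook code only ever references a DBFS mount path, and the mount resolves to provider-specific storage underneath. The `resolve_mount` helper, the mount table, and the example bucket/account names are all hypothetical illustrations of the concept, not a real Databricks API (in practice this mapping is set up with `dbutils.fs.mount` or Unity Catalog external locations).

```python
# Hypothetical illustration: the same DBFS mount path can be backed by
# different physical storage depending on the cloud provider.
# MOUNTS, resolve_mount, and the storage names are assumptions for
# illustration only, not a real Databricks API.

MOUNTS = {
    "azure": {"/mnt/data": "abfss://container@account.dfs.core.windows.net/data"},
    "aws":   {"/mnt/data": "s3a://my-bucket/data"},
}

def resolve_mount(provider: str, dbfs_path: str) -> str:
    """Map a DBFS mount path to its provider-specific storage URI."""
    for mount, uri in MOUNTS[provider].items():
        if dbfs_path.startswith(mount):
            return uri + dbfs_path[len(mount):]
    raise KeyError(f"no mount configured for {dbfs_path}")

# Notebook code only ever sees the DBFS path, so a call like
# spark.read.parquet(path) is unchanged across providers:
path = "/mnt/data/sales.parquet"
print(resolve_mount("azure", path))  # abfss://container@account.dfs.core.windows.net/data/sales.parquet
print(resolve_mount("aws", path))    # s3a://my-bucket/data/sales.parquet
```

This is why keeping DBFS paths identical across workspaces matters: the cloud-specific part lives entirely in the mount configuration, which is exactly the kind of setup you have to redo on the other platform.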