The scenario is: "A substantial amount of data needs to be moved from a legacy Databricks workspace containing Managed Tables to a new E2 Databricks workspace. The new bucket will be a dedicated Datalake rather than the Workspace Bucket, so the tables will become External Tables."
Using AWS, I was able to move a table's '.db' folder, containing the Parquet files and manifest, from the old bucket to the new bucket. Because this is an S3-to-S3 copy, it was fast.
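For anyone repeating this step, here is a minimal sketch of the server-side copy using boto3 (the AWS CLI's `aws s3 sync` would do the same job). The bucket names and prefixes are placeholders, not my real values, and the source prefix assumes the typical managed-table layout under `/user/hive/warehouse`:

```python
# Sketch: server-side S3-to-S3 copy of a table folder.
# boto3's managed copy stays inside AWS, so nothing is downloaded locally.
import boto3

s3 = boto3.resource("s3")
src_bucket = "legacy-workspace-bucket"   # assumption: old workspace bucket
dst_bucket = "new-datalake-bucket"       # assumption: dedicated data lake bucket
src_prefix = "user/hive/warehouse/mydb.db/migrated_table/"  # typical managed-table path
dst_prefix = "Datalake/migrated_table/"

for obj in s3.Bucket(src_bucket).objects.filter(Prefix=src_prefix):
    dst_key = dst_prefix + obj.key[len(src_prefix):]
    # copy() performs a multipart-aware server-side copy per object
    s3.Bucket(dst_bucket).Object(dst_key).copy({"Bucket": src_bucket, "Key": obj.key})
```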
I was then able to register the previously Managed Table as an External Table using the following:
```sql
%sql
CREATE TABLE external_table
USING DELTA
OPTIONS (path '/Datalake/migrated_table');
```
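The same step can be done from Python, with a quick sanity check afterwards. This is a sketch run in a Databricks notebook (where `spark` is predefined); the s3a path and table name are assumptions for illustration, not my actual migration values:

```python
# Assumption: addressing the new bucket directly via s3a rather than a mount
path = "s3a://new-datalake-bucket/Datalake/migrated_table"

# A table created with an explicit LOCATION is external, so dropping it
# later will not delete the underlying files in the data lake bucket.
spark.sql(f"CREATE TABLE IF NOT EXISTS external_table USING DELTA LOCATION '{path}'")

# Sanity check: the location should point at the new bucket, and the file
# count and size should match the source table.
spark.sql("DESCRIBE DETAIL external_table") \
     .select("format", "location", "numFiles", "sizeInBytes") \
     .show(truncate=False)
```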
This seems to be a very fast way to move data between the old and new Databricks platforms. However, I am keen to know whether anyone else has done this and could share any implications I should be aware of?