02-02-2022 05:08 AM
Hi,
I have a Databricks database that was created in the DBFS root S3 bucket and contains managed tables. I am looking for a way to move/migrate it to a mounted S3 bucket instead, while keeping the database name.
Any good ideas on how this can be done?
Thanks
Jan
02-02-2022 06:19 AM
ALTER TABLE table_name [PARTITION partition_spec] SET LOCATION 'new_location';
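For example, a minimal sketch of the full move for one table, assuming hypothetical names my_db.my_table and a bucket already mounted at /mnt/mybucket (copy the table's files to the mount first, e.g. with dbutils.fs.cp in a notebook):

-- Repoint the table at the data copied to the mounted bucket:
ALTER TABLE my_db.my_table SET LOCATION 'dbfs:/mnt/mybucket/my_db/my_table';
-- Verify that the location changed:
DESCRIBE TABLE EXTENDED my_db.my_table;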
02-07-2022 07:12 AM
Hi @Jan Ahlbeck, does @Hubert Dudek's reply answer your question?
02-07-2022 11:54 PM
Hi @Kaniz Fatma
Partly... Copying the data and altering the tables works fine, but the database location still points to the root location, so when new tables are created (with no explicit location), they end up in root. A way to change the database location would be nice 🙂
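A sketch that addresses exactly this, assuming Spark 3.0+ (where ALTER DATABASE ... SET LOCATION is available) and the same hypothetical names as above:

-- Changes the default location used for tables created from now on;
-- existing tables keep their current paths.
ALTER DATABASE my_db SET LOCATION 'dbfs:/mnt/mybucket/my_db.db';
-- Verify the database's new default location:
DESCRIBE DATABASE EXTENDED my_db;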
03-07-2022 08:32 PM
Hi @Jan Ahlbeck
we can use the property below to set the default location:
"spark.sql.warehouse.dir": "S3 URL/dbfs path"
Please let me know if this helps.
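Note that spark.sql.warehouse.dir is a static configuration, so on Databricks it is typically set in the cluster's Spark config (Advanced Options → Spark) before the cluster starts; a sketch with a hypothetical mounted path:

spark.sql.warehouse.dir dbfs:/mnt/mybucket/warehouse

Databases created afterwards without an explicit LOCATION will then default under that path; existing databases keep the location already recorded in the metastore.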
03-11-2022 09:26 AM
Hi @Jan Ahlbeck, did @DARSHAN BARGAL's solution work in your case?