02-02-2022 05:08 AM
Hi,
I have a Databricks database that was created in the DBFS root S3 bucket and contains managed tables. I am looking for a way to move/migrate it to a mounted S3 bucket instead, while keeping the database name.
Any good ideas on how this can be done?
Thanks
Jan
02-02-2022 06:19 AM
ALTER TABLE table_name [PARTITION partition_spec] SET LOCATION 'new_location';
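For example, with placeholder names, after the table's underlying files have been copied to the mounted bucket (e.g. with dbutils.fs.cp):
ALTER TABLE my_db.my_table SET LOCATION '/mnt/my-bucket/my_db/my_table';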
02-07-2022 07:12 AM
Hi @Jan Ahlbeck, does @Hubert Dudek's reply answer your question?
02-07-2022 11:54 PM
Hi @Kaniz Fatma
Partly... Copying the data and running ALTER TABLE works fine, but the database location still points to the root location, so when new tables are created without an explicit location they end up in the root. A way to change the database's default location would be nice.
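For context, the database's current default location (which new managed tables inherit when no LOCATION is given) can be checked with something like the following, where my_db is a placeholder for the actual database name:
DESCRIBE DATABASE EXTENDED my_db;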
03-07-2022 08:32 PM
Hi @Jan Ahlbeck
You can use the property below to set the default warehouse location:
"spark.sql.warehouse.dir": "S3 URL/dbfs path"
Please let me know if this helps.
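For example, this would typically go in the cluster's Spark config (the path below is a placeholder for your mounted bucket):
spark.sql.warehouse.dir /mnt/my-bucket/warehouse
Note that this sets the default location used for newly created databases and managed tables; existing tables keep their current paths.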
03-11-2022 09:26 AM
Hi @Jan Ahlbeck, did @DARSHAN BARGAL's solution work in your case?