02-02-2022 05:08 AM
Hi,
I have a Databricks database that was created in the DBFS root S3 bucket and contains managed tables. I am looking for a way to move/migrate it to a mounted S3 bucket while keeping the database name.
Any good ideas on how this can be done?
Thanks
Jan
Accepted Solutions
03-07-2022 08:32 PM
Hi @Jan Ahlbeck
You can use the property below to set the default location:
"spark.sql.warehouse.dir": "S3 URL/dbfs path"
Please let me know if this helps.
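A minimal sketch of how this could be applied (the mount path /mnt/mybucket and the database name demo_db are hypothetical). spark.sql.warehouse.dir is a static Spark configuration, so it goes into the cluster's Spark config (cluster > Advanced options > Spark config) and takes effect after a restart; it only affects databases and tables created afterwards without an explicit LOCATION:
spark.sql.warehouse.dir dbfs:/mnt/mybucket/warehouse
-- after the cluster restart, a database created without LOCATION should land under the new warehouse dir
CREATE DATABASE IF NOT EXISTS demo_db;
DESCRIBE DATABASE EXTENDED demo_db;  -- the Location row should now point at dbfs:/mnt/mybucket/warehouse/demo_db.db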
02-02-2022 06:19 AM
- Just copy all the data and then alter the table location (see the sketch after this list):
ALTER TABLE table_name [PARTITION partition_spec] SET LOCATION 'new_location';
- Alternatively, create a new table in the new location and then use INSERT INTO ... SELECT to move the data
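A minimal sketch of both options (the table, database and mount names are hypothetical; the file copy itself would be done separately, e.g. with dbutils.fs.cp):
-- option 1: files already copied to the new path, repoint the existing table
ALTER TABLE my_db.my_table SET LOCATION 'dbfs:/mnt/mybucket/my_db/my_table';
-- option 2: create an external table at the new location with the same schema, then copy the rows
CREATE TABLE my_db.my_table_new
  LOCATION 'dbfs:/mnt/mybucket/my_db/my_table_new'
  AS SELECT * FROM my_db.my_table WHERE 1 = 0;  -- copies the schema only
INSERT INTO my_db.my_table_new SELECT * FROM my_db.my_table;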
02-07-2022 11:54 PM
Hi @Kaniz Fatma
Partly... Copying the data and altering the table location works fine, but the database location still points to the root location, so when new tables are created (with no explicit location) they are created in the root. A way to change the database location would be nice 🙂
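A small sketch of how the database location itself could be inspected and, depending on the runtime and metastore, repointed (the database and mount names are hypothetical; ALTER DATABASE ... SET LOCATION exists in Spark 3.0+ but, as far as I know, requires a Hive metastore 3.0+ and only affects tables created afterwards, so it may not be available in every workspace):
DESCRIBE DATABASE EXTENDED my_db;  -- shows the Location the database currently points to
ALTER DATABASE my_db SET LOCATION 'dbfs:/mnt/mybucket/my_db.db';  -- repoints the default location for new tables only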