- 16413 Views
- 10 replies
- 5 kudos
Latest Reply
Can you not use a No Isolation Shared cluster with Table access controls enabled on workspace level?
by Jan_A • New Contributor III
- 5347 Views
- 3 replies
- 5 kudos
Hi, I have a Databricks database that was created in the DBFS root S3 bucket, containing managed tables. I am looking for a way to move/migrate it to a mounted S3 bucket instead and keep the database name. Any good ideas on how this can be done? T...
Latest Reply
Hi @Jan Ahlbeck, we can use the property below to set the default location: `"spark.sql.warehouse.dir": "S3 URL/dbfs path"`. Please let me know if this helps.
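Note that `spark.sql.warehouse.dir` only changes where *newly created* databases and managed tables land; existing managed tables still have to be copied. A minimal sketch of one migration approach, assuming the bucket is mounted at the hypothetical path `/mnt/mybucket` and the source database is named `mydb`:

```sql
-- Create a new database whose location is on the mounted bucket
-- (the mount path and database names here are placeholders).
CREATE DATABASE IF NOT EXISTS mydb_new
LOCATION '/mnt/mybucket/mydb.db';

-- Copy each managed table into the new database.
-- For Delta tables, DEEP CLONE copies both data and metadata:
CREATE TABLE mydb_new.my_table
DEEP CLONE mydb.my_table;

-- For non-Delta tables, fall back to a plain copy:
-- CREATE TABLE mydb_new.my_table AS SELECT * FROM mydb.my_table;
```

To keep the original database name, verify the copies, drop the old database (`DROP DATABASE mydb CASCADE` deletes its managed data, so verify first), then recreate `mydb` with the `LOCATION` clause and clone the tables into it.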
- 1226 Views
- 1 replies
- 0 kudos
When I set up a workspace, I see a root bucket is created with it. Does this bucket reside in the customer's account or the Databricks account? How can I access the bucket, and can I see it directly in S3 or ADLS?
Latest Reply
I didn't get the reference about installing a bucket. Did you mean configuring a workspace with a root bucket? If so, you've probably gathered that the root storage for a workspace resides in the customer's account.