Administration & Architecture
Explore discussions on Databricks administration, deployment strategies, and architectural best practices. Connect with administrators and architects to optimize your Databricks environment for performance, scalability, and security.

Databricks on AWS - Changes to your Unity Catalog storage credentials

abhishekdas
Visitor

Hi

 
Context: 
On June 30, 2023, AWS updated its IAM role trust policy behavior, which requires updating Unity Catalog storage credentials. Databricks emailed customers about this in March 2023 and updated the documentation and Terraform templates to reflect the required changes. To give customers time to adjust, we implemented a temporary workaround that allowed storage credentials with the previous role policy settings to continue working. As stated in our previous email, Unity Catalog storage credentials must be configured with a self-assuming IAM role.
Action Required
You are receiving this email because you used a storage credential configured with an IAM role that does not allow self-assuming. Please update it before January 20, 2025, at which point your existing credential will stop working.
To update your non-self-assuming storage credentials, please follow the guidance in the Unity Catalog documentation or the Unity Catalog Terraform templates, and ensure that the roles are self-assume-capable (see “Step 3: Update the IAM role policy”). If you are unsure how to update your IAM policies to self-assume, please check with your Account Representative.
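For reference, the "self-assuming" requirement means the role's trust policy must contain a statement whose principal is the role's own ARN. Below is a minimal sketch of such a trust policy, built in Python so the structure is explicit. All account IDs, the role name, and the external ID are illustrative placeholders, and the Databricks AWS account ARN shown is the one published in their documentation but should be verified against the current docs for your deployment.

```python
import json

# Placeholders -- substitute your own values.
ACCOUNT_ID = "123456789012"  # your AWS account ID (placeholder)
ROLE_NAME = "my-metadata-databricks-access-role"  # placeholder role name
ROLE_ARN = f"arn:aws:iam::{ACCOUNT_ID}:role/{ROLE_NAME}"
# Databricks' AWS account, per their public docs -- verify before use.
DATABRICKS_PRINCIPAL = "arn:aws:iam::414351767826:root"
EXTERNAL_ID = "0000-databricks-account-id-0000"  # placeholder external ID

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Existing statement: allows Databricks to assume the role.
            "Effect": "Allow",
            "Principal": {"AWS": DATABRICKS_PRINCIPAL},
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": EXTERNAL_ID}},
        },
        {
            # New statement: allows the role to assume ITSELF
            # ("self-assuming"), which Unity Catalog storage
            # credentials now require.
            "Effect": "Allow",
            "Principal": {"AWS": ROLE_ARN},
            "Action": "sts:AssumeRole",
        },
    ],
}

print(json.dumps(trust_policy, indent=2))
```

The second statement is the only addition the email asks for; the first statement reflects the trust relationship the Databricks-provided stack already creates.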
 
We need help here. The course of action appears to be updating the trust relationship policy of the IAM role <company>-metadata-databricks-access-role, which was created by the CloudFormation stack provided by Databricks itself. We reached out to our AWS support representative, and their response is below. Please advise how exactly to move forward.
 
To begin with, regarding your query of whether to change the CloudFormation stack and re-apply, or to change the policy directly in the IAM console: based on my analysis, the recommended approach is to modify the CloudFormation stack and re-apply it. This keeps your infrastructure in sync with your source code and maintains its integrity. However, given that the CloudFormation stack was provided automatically by Databricks, please contact Databricks support to ask about the proper procedure for updating the IAM policies within their stack. If Databricks allows modifications to the stack, request guidance on how to implement the changes safely; if Databricks does not recommend modifying the stack, then changing the policy directly in the IAM console may be your best option.
Having said that, because resources like this CloudFormation stack are provided by a third-party service (Databricks), I would strongly recommend reaching out to their support team for guidance on best practices for modifying the automatically provided stack, as third-party assistance is outside the scope of AWS Support. They should be able to provide you with the most up-to-date and safe method for making the necessary changes.
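If the direct-update path (rather than re-applying the CloudFormation stack) is chosen, the change can be scripted. Below is a minimal sketch, assuming AWS credentials with `iam:GetRole` and `iam:UpdateAssumeRolePolicy` permissions; `make_self_assuming` is a hypothetical helper name, and the role/account values passed to `apply` would be your own. Note that any later stack update could overwrite a change made outside CloudFormation, which is why the support response suggests confirming the approach with Databricks first.

```python
import json


def make_self_assuming(trust_policy: dict, role_arn: str) -> dict:
    """Append a self-assume statement to a trust policy if it is missing."""
    stmts = trust_policy.setdefault("Statement", [])
    already_present = any(
        s.get("Principal", {}).get("AWS") == role_arn
        and s.get("Action") == "sts:AssumeRole"
        for s in stmts
    )
    if not already_present:
        stmts.append({
            "Effect": "Allow",
            "Principal": {"AWS": role_arn},
            "Action": "sts:AssumeRole",
        })
    return trust_policy


def apply(role_name: str, account_id: str) -> None:
    """Fetch the role's current trust policy, add self-assume, write it back."""
    import boto3  # AWS SDK for Python; needs valid credentials

    iam = boto3.client("iam")
    role_arn = f"arn:aws:iam::{account_id}:role/{role_name}"
    # boto3 returns AssumeRolePolicyDocument already URL-decoded as a dict.
    current = iam.get_role(RoleName=role_name)["Role"]["AssumeRolePolicyDocument"]
    updated = make_self_assuming(current, role_arn)
    iam.update_assume_role_policy(
        RoleName=role_name,
        PolicyDocument=json.dumps(updated),
    )
```

The helper is idempotent, so re-running it (for example, after a stack drift check) will not duplicate the self-assume statement.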

 
Thanks
0 REPLIES
