I am unable to deploy a workspace on AWS using Quickstart from my account console.
Short description-
You might encounter one of the following common errors:
- Wrong credentials
- Elastic IP and VPC limit reached
- Region unavailable
Resolution-
Wrong credentials
Failed to create CreateStorageConfiguration and CreateCredentialConfiguration.
Most likely, the wrong password was entered in the CloudFormation template. The browser often tries to autofill your AWS credentials, but the template needs your Databricks account credentials.
CreateWorkspace failed
Common reasons:
- The maximum number of Elastic IP addresses has been reached.
- The maximum number of VPCs has been reached.
AWS has a default limit of 5 VPCs per region. Go to your AWS Console > VPC and check whether you have reached the VPC or Elastic IP limit for that region.
If you have, you can consolidate your networks, use a different region, use the customer-managed VPC feature of Databricks, or ask AWS Support to increase your limit.
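If you prefer to check programmatically, here is a minimal boto3 sketch that counts the VPCs and Elastic IPs in a region (the region name is an example; substitute the one you are deploying to):

import boto3

# Example region; replace with the region you are deploying to.
ec2 = boto3.client("ec2", region_name="us-west-2")

# Count existing VPCs (the default quota is 5 per region).
vpc_count = len(ec2.describe_vpcs()["Vpcs"])
print(f"VPCs in use: {vpc_count}")

# Count allocated Elastic IP addresses (the default quota is also 5 per region).
eip_count = len(ec2.describe_addresses()["Addresses"])
print(f"Elastic IPs allocated: {eip_count}")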
Region unavailable
The list of supported AWS regions can be found in the Databricks documentation.
Unable to launch clusters
Error: AWS: unsupported failure
Common reason: the us-west-2d availability zone is often at capacity, and it is frequently limited on certain instance types as well.
Solution: On the cluster page, click Advanced and, under Instances, select a different availability zone.
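If you create clusters through the REST API instead of the UI, the availability zone can be pinned the same way. A minimal sketch using the Clusters API; the workspace URL, token, runtime version, and instance type are placeholders:

import requests

# Placeholders for illustration; use your workspace URL and a personal access token.
HOST = "https://<your-workspace>.cloud.databricks.com"
TOKEN = "<your-personal-access-token>"

resp = requests.post(
    f"{HOST}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "cluster_name": "example-cluster",
        "spark_version": "13.3.x-scala2.12",  # example runtime version
        "node_type_id": "i3.xlarge",          # example instance type
        "num_workers": 1,
        # Pin the cluster to a less busy availability zone.
        "aws_attributes": {"zone_id": "us-west-2c"},
    },
)
resp.raise_for_status()
print(resp.json()["cluster_id"])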
Data access via instance profiles
If you have data in S3, you’ll need to configure Databricks so that it can query the data, create tables, and set up your lakehouse. This configuration is commonly done via instance profiles that are added to the cluster.
If you want the Quickstart to automatically set up your instance profile and configure a default cluster in your workspace, select the “I have data in S3 that I want to query with Databricks” checkbox.
In this case you’ll see a Data bucket field in CloudFormation. Enter the name of the S3 bucket that contains your data.
Multiple data buckets
Quickstart currently supports data access configuration for one S3 bucket. If you want to query all of your S3 buckets with Databricks, you can enter “*” in the field.
You can always change this configuration later: go to AWS Console > IAM > Roles, select the role that was created for Databricks, and manually edit which buckets it can access. Read more in the AWS documentation.
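For example, to scope the role down from “*” to a single bucket programmatically, a sketch like the following could attach an inline policy. The role name, policy name, and bucket are hypothetical; check the actual names the Quickstart created in IAM:

import json

import boto3

iam = boto3.client("iam")

# Hypothetical names for illustration; use the role the Quickstart created.
ROLE_NAME = "databricks-data-access-role"
BUCKET = "my-data-bucket"

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "s3:GetObject", "s3:PutObject", "s3:DeleteObject",
            "s3:ListBucket", "s3:GetBucketLocation",
        ],
        # Grant access to one specific bucket instead of "*".
        "Resource": [
            f"arn:aws:s3:::{BUCKET}",
            f"arn:aws:s3:::{BUCKET}/*",
        ],
    }],
}

iam.put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName="s3-data-access",
    PolicyDocument=json.dumps(policy),
)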
Manually create a cross-account role
Add the role to Databricks as shown below.
Then choose manual workspace creation and select the credential and storage configurations to create your workspace.
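As an illustration of the manual path, a sketch like the following registers the cross-account role as a credential configuration via the Databricks Account API. The account ID, login, and role ARN are placeholders, and basic authentication is assumed:

import requests

# Placeholders for illustration; use your own account details.
ACCOUNT_ID = "<your-databricks-account-id>"
AUTH = ("<account-admin-email>", "<password>")
ROLE_ARN = "arn:aws:iam::123456789012:role/databricks-cross-account-role"

resp = requests.post(
    f"https://accounts.cloud.databricks.com/api/2.0/accounts/{ACCOUNT_ID}/credentials",
    auth=AUTH,
    json={
        "credentials_name": "my-credentials",
        "aws_credentials": {"sts_role": {"role_arn": ROLE_ARN}},
    },
)
resp.raise_for_status()
# The returned credentials_id is what you select during manual workspace creation.
print(resp.json()["credentials_id"])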
The Quickstart automates this entire process by leveraging AWS CloudFormation.
Further debug an error
Received response status [FAILED] from custom resource. Message returned: See the details in CloudWatch Log Stream.
Navigate to CloudWatch in your AWS Console, then click Logs > Log groups and search for the name of your CloudFormation stack. By default the name will be “databricks-workspace-stack”. Depending on where the error occurred, you might have two log groups, each with up to two log streams. Click into each and search for “Error”. You should find an event that you can expand to read more about the details.
This will often give you a good pointer.
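The same search can also be scripted. Here is a minimal boto3 sketch that scans the stack’s log groups for events containing “Error”; the region is an example, and it assumes the stack name appears in the log group names:

import boto3

# Example region; use the region your stack was deployed in.
logs = boto3.client("logs", region_name="us-west-2")

STACK_NAME = "databricks-workspace-stack"  # default stack name

# Walk all log groups and keep the ones created for this stack.
paginator = logs.get_paginator("describe_log_groups")
for page in paginator.paginate():
    for group in page["logGroups"]:
        name = group["logGroupName"]
        if STACK_NAME not in name:
            continue
        # Search the group for events containing "Error".
        events = logs.filter_log_events(logGroupName=name, filterPattern="Error")
        for event in events["events"]:
            print(name, event["message"])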
If you still cannot solve your issue, please post your question, along with the error message or a screenshot, in the Databricks Community.