Summary of the problem
When mounting an S3 bucket via Terraform, the mount creation frequently times out (runs beyond 10 minutes). When I check the Log4j logs on the general purpose (GP) cluster, I see the following error message repeated:
```
22/07/26 05:54:43 ERROR DatabricksS3LoggingUtils$:V3: S3 request failed with com.amazonaws.SdkClientException: Unable to execute HTTP request: Remote host terminated the handshake; Request ID: null, Extended Request ID: null, Cloud Provider: AWS, Instance ID: i-0af1e2435799123e0
com.amazonaws.SdkClientException: Unable to execute HTTP request: Remote host terminated the handshake
...
Caused by: javax.net.ssl.SSLHandshakeException: Remote host terminated the handshake
...
Caused by: java.io.EOFException: SSL peer shut down incorrectly
at sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:481)
at sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:470)
at sun.security.ssl.SSLSocketInputRecord.decode(SSLSocketInputRecord.java:160)
at sun.security.ssl.SSLTransport.decode(SSLTransport.java:110)
at sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1418)
... 89 more
```
It seems like the `databricks_mount` is flaky: it works sometimes and not others.
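For reference, the mount is declared roughly like this (the mount name, bucket name, and resource references below are placeholders, not my real configuration):
```
# Minimal sketch of the databricks_mount resource; names are placeholders.
resource "databricks_mount" "logs" {
  name       = "logs"
  cluster_id = databricks_cluster.general_purpose.id

  s3 {
    bucket_name      = "my-example-bucket"
    instance_profile = databricks_instance_profile.this.id
  }
}
```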
What I've tried
- I've ensured that the instance profile role attached to the general purpose cluster nodes has the recommended policy with the s3:ListBucket, s3:PutObjectAcl, s3:PutObject, s3:GetObject, and s3:DeleteObject permissions (a sketch of the policy follows this list).
- I've also removed the bucket policy entirely, to rule out the bucket policy itself blocking access.
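The policy attached to the instance profile role looks roughly like this (resource names and the bucket name are placeholders for my actual values):
```
# Sketch of the instance profile role policy; the object-level actions
# apply to bucket contents, while ListBucket applies to the bucket itself.
resource "aws_iam_role_policy" "bucket_access" {
  name = "s3-bucket-access"
  role = aws_iam_role.instance_profile_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["s3:ListBucket"]
        Resource = ["arn:aws:s3:::my-example-bucket"]
      },
      {
        Effect = "Allow"
        Action = [
          "s3:PutObject",
          "s3:PutObjectAcl",
          "s3:GetObject",
          "s3:DeleteObject"
        ]
        Resource = ["arn:aws:s3:::my-example-bucket/*"]
      }
    ]
  })
}
```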
Please let me know if you need any further details.