Cluster logs stopped getting written to S3
Friday
We have two Databricks workspaces, and as of a few days ago, cluster logs are no longer being persisted to S3 in either workspace. Driver logs are available in the Databricks UI only while the job is running. We haven't seen any errors in the job logs related to this.
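To narrow down when delivery stopped, the newest objects under the configured log destination can be listed; below is a minimal boto3 sketch (the bucket, prefix, and cluster ID are placeholders, and logs are typically delivered under <destination>/<cluster-id>/):

```python
# Minimal sketch: list the most recently delivered log objects to see
# when S3 delivery stopped. Bucket and prefix are placeholders.
import boto3

s3 = boto3.client("s3")
resp = s3.list_objects_v2(
    Bucket="my-log-bucket",                     # placeholder
    Prefix="cluster-logs/0312-123456-abcdef/",  # placeholder: <prefix>/<cluster-id>/
)

# Sort by LastModified and show the ten newest objects.
objs = sorted(resp.get("Contents", []), key=lambda o: o["LastModified"])
for obj in objs[-10:]:
    print(obj["LastModified"], obj["Key"])
```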
Appreciate any help
Saturday
Hi,
How are you doing today? As I understand it, something may have changed in how Databricks delivers logs to S3. Have you checked whether the S3 permissions are still correct? IAM roles or bucket settings sometimes change without notice. Also double-check the cluster logging settings in Databricks, in case something was updated there. Since this is happening in both workspaces, it could be a broader issue, such as a recent Databricks update or an S3-side change. You might also try creating a test cluster with fresh logging settings to see whether it works. Let me know what you find!
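For example, here is a minimal sketch of re-verifying the IAM side with the AWS policy simulator (the role ARN and bucket ARN below are placeholders for the instance profile role your clusters use and your log bucket):

```python
# Minimal sketch: ask the IAM policy simulator whether the cluster's
# instance profile role is still allowed to write to the log bucket.
# Both ARNs are placeholders.
import boto3

iam = boto3.client("iam")

resp = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::123456789012:role/databricks-cluster-role",  # placeholder
    ActionNames=["s3:PutObject"],
    ResourceArns=["arn:aws:s3:::my-log-bucket/cluster-logs/*"],  # placeholder
)

for result in resp["EvaluationResults"]:
    print(result["EvalActionName"], "->", result["EvalDecision"])  # expect "allowed"
```

Actions like s3:ListBucket can be checked the same way against the bucket ARN itself.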
Regards,
Brahma
Saturday
Thanks for your suggestion.
Monday
Hello,
We are facing the same issue in both of our workspaces: cluster logs suddenly stopped being delivered to S3 on March 12th. There were no changes to the cluster settings or on the IAM side, and all IAM permissions should be in place according to the official Databricks documentation.
For testing, we SSH-ed into one of the Databricks workers in AWS and uploaded a file to S3, which worked fine.
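A roughly equivalent check in Python, runnable from a notebook on the cluster (bucket and key below are placeholders); it also prints which role the worker's instance profile actually resolves to:

```python
# Minimal sketch of the manual upload test; bucket and key are placeholders.
import boto3

# Confirm which role the instance profile resolves to, for comparison
# with the role the bucket policy expects.
print(boto3.client("sts").get_caller_identity()["Arn"])

# Try writing to the configured log destination directly.
boto3.client("s3").put_object(
    Bucket="my-log-bucket",              # placeholder
    Key="cluster-logs/manual-test.txt",  # placeholder
    Body=b"log delivery connectivity test",
)
```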
Any hints would be highly appreciated.

