Administration & Architecture
Explore discussions on Databricks administration, deployment strategies, and architectural best practices. Connect with administrators and architects to optimize your Databricks environment for performance, scalability, and security.

Databricks Logs

APJESK
New Contributor III

I'm trying to understand the different types of logs available in Databricks and how to access and interpret them. Could anyone please guide me on:

  • What types of logs are available in Databricks?

  • Where can I find these logs?

  • How can I use these logs to troubleshoot jobs or understand activity?

Any examples, resources, or best practices would be greatly appreciated.

Thank you in advance for your help!

2 REPLIES

SP_6721
Contributor III

Hi @APJESK ,

From what I'm aware of, here are some common types of logs:

  • Audit Logs: Track user activity, data access, and admin actions.
    Access: Available in cloud storage if audit logging is enabled, or via system tables
    Use: Mainly for security and compliance reviews
  • Cluster Logs: Include driver, executor, and init script logs related to Spark jobs and cluster activity.
    Access: Found in DBFS, cloud storage, or in paths defined during cluster setup
    Use: Useful for debugging Spark jobs or identifying performance issues
  • System Table Logs: Provide structured logs for jobs, clusters, billing, and audit events.
    Access: Can be queried using Databricks SQL, notebooks, or APIs
    Use: Helpful for usage tracking, cost monitoring, and performance insights
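If audit logs are delivered to cloud storage as JSON lines, a quick way to get a feel for them is to parse an exported file and count events per user and action. A minimal sketch; the field names (`userIdentity.email`, `actionName`) follow the documented audit log schema, but the sample records here are made up for illustration:

```python
import json
from collections import Counter

def summarize_audit_log(lines):
    """Count audit events per (user, action) from JSON-lines audit records."""
    counts = Counter()
    for line in lines:
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        user = record.get("userIdentity", {}).get("email", "unknown")
        action = record.get("actionName", "unknown")
        counts[(user, action)] += 1
    return counts

# Two inline sample records; a real file would be read from cloud storage.
sample = [
    '{"userIdentity": {"email": "alice@example.com"}, "actionName": "login", "serviceName": "accounts"}',
    '{"userIdentity": {"email": "alice@example.com"}, "actionName": "runCommand", "serviceName": "notebook"}',
]
print(summarize_audit_log(sample))
```

The same kind of aggregation is much easier via the audit system table with a `GROUP BY` in Databricks SQL, once system tables are enabled; parsing the raw files is mainly useful when you only have the storage export.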

You can refer to these docs for more details:
https://docs.databricks.com/aws/en/admin/account-settings/audit-logs
https://docs.databricks.com/aws/en/admin/system-tables/audit-logs
https://www.databricks.com/blog/2022/05/02/monitoring-your-databricks-lakehouse-platform-with-audit-...

MadhuB
Valued Contributor

Hi @APJESK,

In addition, job logs are very useful for monitoring and troubleshooting job failures. They can be found under Workflows. The workspace admin role is required for full access to all jobs, unless access is explicitly granted to the user by the job owner or an admin.

Review the job run logs to understand the execution flow. Look for:

  • Start and end times to identify long-running tasks.
  • Status messages for success or failure indications.
  • Task retries and their reasons.
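The checks above can be scripted against run metadata. Here is a small sketch that flags failed or long-running runs; the dict shape (`run_id`, `start_time`/`end_time` in epoch milliseconds, nested `state.result_state`) loosely follows the Jobs API run-list response, and the sample data is hypothetical:

```python
def flag_runs(runs, max_minutes=60):
    """Return (run_id, reason) tuples for failed or long-running job runs.

    `runs` is a list of dicts shaped loosely like a Jobs API run listing:
    start_time/end_time in epoch milliseconds, plus state.result_state.
    """
    flagged = []
    for run in runs:
        run_id = run.get("run_id")
        state = run.get("state", {}).get("result_state", "")
        duration_min = (run.get("end_time", 0) - run.get("start_time", 0)) / 60_000
        if state not in ("SUCCESS", ""):
            flagged.append((run_id, f"result_state={state}"))
        elif duration_min > max_minutes:
            flagged.append((run_id, f"ran {duration_min:.0f} min"))
    return flagged

# Hypothetical runs: one healthy, one slow, one failed.
sample_runs = [
    {"run_id": 1, "start_time": 0, "end_time": 30 * 60_000,
     "state": {"result_state": "SUCCESS"}},
    {"run_id": 2, "start_time": 0, "end_time": 90 * 60_000,
     "state": {"result_state": "SUCCESS"}},
    {"run_id": 3, "start_time": 0, "end_time": 5 * 60_000,
     "state": {"result_state": "FAILED"}},
]
print(flag_runs(sample_runs))
```

In practice you would feed this from the Jobs API or the jobs system tables rather than hand-built dicts.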


https://docs.databricks.com/aws/en/jobs/repair-job-failures

 
