by digui • New Contributor
- 5768 Views
- 3 replies
- 0 kudos
Hi y'all. I'm trying to export metrics and logs to AWS CloudWatch, but while following their tutorial, I ran into an error when initializing my cluster with an init script they provided. This is the part where the script fail...
Latest Reply
@digui Did you figure out what to do? We're facing the same issue; the script works for the executors. I was thinking of adding an if that checks whether log4j.properties exists and modifies it only if it does, as in the sketch below.
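For anyone landing here, a minimal sketch of that guard, assuming the driver config path below; on newer runtimes that ship log4j2 the file is absent, so the patch is simply skipped. The path and the appended line are illustrative placeholders, not the exact content of the AWS script:

```python
# Hypothetical guard around the init script's log4j edit: only patch
# log4j.properties when it exists (log4j2-based runtimes don't have it).
import os

# Assumed driver config path; adjust for your runtime layout.
LOG4J = "/databricks/spark/dbconf/log4j/driver/log4j.properties"

if os.path.exists(LOG4J):
    with open(LOG4J, "a") as f:
        # Illustrative placeholder, not the AWS script's actual appender config.
        f.write("\nlog4j.appender.cloudwatch=...\n")
else:
    print(f"{LOG4J} not found (log4j2 runtime?); skipping patch")
```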
2 More Replies
- 6869 Views
- 5 replies
- 4 kudos
I would like to send some custom logs (in Python) from my Databricks notebook to AWS CloudWatch. For example: df = spark.read.json("......................."); logger.info("Successfully ingested data from json"). Has someone succeeded in doing this before...
Latest Reply
Hi, you can integrate them; please refer to https://aws.amazon.com/blogs/mt/how-to-monitor-databricks-with-amazon-cloudwatch/. You can also configure audit logging to S3 and redirect it to CloudWatch from AWS; refer to https://aws.amazon.com/blogs/mt/how...
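Alongside the blog approach, a minimal sketch of writing custom messages straight to CloudWatch Logs with boto3; the log group/stream names and region are placeholders, and credentials are assumed to come from the cluster's instance profile:

```python
# Hedged sketch: push a custom log line to CloudWatch Logs with boto3.
import time
import boto3

logs = boto3.client("logs", region_name="us-east-1")  # assumed region

GROUP, STREAM = "/databricks/notebooks", "ingestion-demo"  # hypothetical names

for create, kwargs in [
    (logs.create_log_group, {"logGroupName": GROUP}),
    (logs.create_log_stream, {"logGroupName": GROUP, "logStreamName": STREAM}),
]:
    try:
        create(**kwargs)
    except logs.exceptions.ResourceAlreadyExistsException:
        pass  # already provisioned on a previous run

logs.put_log_events(
    logGroupName=GROUP,
    logStreamName=STREAM,
    logEvents=[{
        "timestamp": int(time.time() * 1000),  # milliseconds since epoch
        "message": "Successfully ingested data from json",
    }],
)
```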
4 More Replies
- 1458 Views
- 1 replies
- 2 kudos
Hi all, I'm using the AWS CW global init script in order to monitor my clusters' instances. I'm also using Delta Live Tables with some Auto Loader jobs. Unfortunately, the Delta Live Tables are now running runtime version 11. As a result, newly created pipe...
Latest Reply
Unfortunately, in Delta Live Tables you cannot specify the runtime (except the current and preview channels, which you mentioned). It would be helpful if DLT runtime releases were documented on the Databricks side the same way as the SQL, ML, and standard ones @Kaniz ...
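For reference, the channel is the only runtime knob a DLT pipeline exposes; a hedged sketch of setting it through the Pipelines REST API, where the host, token, pipeline id, and pipeline name are all placeholders:

```python
# Hedged sketch: DLT pipeline settings accept a "channel" of CURRENT or
# PREVIEW, but no pinned runtime version. All identifiers are placeholders.
import requests

HOST = "https://<workspace-host>"        # placeholder
TOKEN = "<personal-access-token>"        # placeholder
PIPELINE_ID = "<pipeline-id>"            # placeholder

settings = {
    "id": PIPELINE_ID,
    "name": "autoloader-pipeline",       # hypothetical name
    "channel": "PREVIEW",                # CURRENT or PREVIEW only
    # remaining settings (libraries, clusters, target) omitted here
}

resp = requests.put(
    f"{HOST}/api/2.0/pipelines/{PIPELINE_ID}",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=settings,
)
resp.raise_for_status()
```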