Data Engineering

SparkSession configuration at the Job level is not getting applied

dhinesh_balaji
New Contributor

I am trying to send the default Spark metrics from my application to a StatsD sink at the job level, not the cluster level, so I set the necessary configuration on the SparkContext and SparkSession in code. On my local machine (a single node) this works: while the job runs, I receive all the UDP packets. But when I deploy the same JAR to an AWS Databricks cluster (single-node or multi-node), I do not receive any metrics. If I instead override the Spark config under Advanced Options -> Compute in Databricks, I do receive metrics, and the configuration is reflected in the environment variables. Below I have attached the job-level configuration code.

[Attachment: dhinesh_balaji_3-1688629795475.png — job-level configuration code]
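For reference, a minimal sketch of this kind of job-level configuration, using Spark's documented spark.metrics.conf.* keys; the host and port are placeholders, and this is an assumption about the approach, not a copy of the attached code:

```scala
import org.apache.spark.sql.SparkSession

// Sketch: wiring Spark's StatsdSink through SparkSession config at job level.
// Host and port are placeholders, not a real endpoint.
val spark = SparkSession.builder()
  .appName("job-level-statsd-metrics")
  .config("spark.metrics.conf.*.sink.statsd.class",
          "org.apache.spark.metrics.sink.StatsdSink")
  .config("spark.metrics.conf.*.sink.statsd.host", "127.0.0.1")
  .config("spark.metrics.conf.*.sink.statsd.port", "8125")
  .config("spark.metrics.conf.*.sink.statsd.period", "10")
  .config("spark.metrics.conf.*.sink.statsd.unit", "seconds")
  .getOrCreate()
```

Spark reads the metrics sink configuration when the SparkContext is initialised. On a Databricks cluster the context already exists before job code runs, which would explain why the same JAR picks up these settings locally but not on the cluster.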

[Attachment: image.png — Databricks Spark cluster config]

I also tried updating "metrics.properties" via an init script on the Databricks cluster; with that approach I am able to receive metrics.

 

[Attachment: dhinesh_balaji_6-1688630322016.png]
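As a rough sketch, an init script for this approach would write StatsD sink entries into Spark's metrics.properties; the path and the host/port below are assumptions, not taken from the attachment:

```
# Assumed contents written to /databricks/spark/conf/metrics.properties
# by the init script; host and port are placeholders.
*.sink.statsd.class=org.apache.spark.metrics.sink.StatsdSink
*.sink.statsd.host=127.0.0.1
*.sink.statsd.port=8125
*.sink.statsd.period=10
*.sink.statsd.unit=seconds
```

Because the init script runs before the Spark processes start, these settings are in place when the metrics system initialises, which is consistent with metrics arriving under this approach but not under job-level configuration.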

 

I also set the Spark session configuration again after the session was created, and I print the Spark configuration values to the console to verify them.

[Attachment: image.png — console output of the Spark configuration values]
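A sketch of that verification step, reusing one of the assumed sink keys from above:

```scala
// Re-apply a sink setting on the live session and print it back.
// The value prints as expected, yet no metrics arrive: the metrics
// system was already initialised before this runtime change.
spark.conf.set("spark.metrics.conf.*.sink.statsd.host", "127.0.0.1")
println(spark.conf.get("spark.metrics.conf.*.sink.statsd.host"))
```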

0 REPLIES