Hello @gchandra, thanks for the suggestion. I already tried this by adding the conf property under the advanced configuration of the DLT pipeline, but it is not taking effect, and I also don't see this property in the Spark configuration of the job compute associated with this pipeline.
Below is a snapshot of the JSON for the DLT pipeline configuration:
"name": "axp-data-pipeline",
"edition": "ADVANCED",
"storage": "dbfs:/mnt/dlh/",
"configuration": {
"kafka.cert.secret.key": "kafka-ca-cert",
"kafka.host.secret.key": "kafka-host",
"kafka.maxOffsetsPerTrigger": "1000000",
"kafka.password.secret.key": "kafka-admin-password",
"kafka.port.secret.key": "kafka-port-sasl",
"kafka.schemaregistry.host.secret.key": "kafka-schemaregistry-host",
"kafka.schemaregistry.port.secret.key": "kafka-schemaregistry-port",
"kafka.user.secret.key": "kafka-admin-username",
"secrets.scope": "analytics-perf-dbw-green-scope",
"spark.databricks.io.cache.enabled": "true",
"topic.dimensions": "testuser.avro,testadmin,testreason-code",
"topic.normalizer": "test-incoming-feed",
"topic.raw-facts": "test1,test2,test3,test4",
"spark.sql.shuffle.partitions": "auto"
},
Please see the attached image from the job compute Spark environment, where the property configured in the pipeline is not visible.
Am I setting this correctly, or is there another way to set it?
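For what it's worth, the other place I can see for Spark confs in the pipeline settings JSON is a "clusters" block with a "spark_conf" object, which as I understand it applies the conf to the pipeline's compute directly rather than going through "configuration". A minimal sketch of what I mean, assuming the "default" cluster label and reusing the same conf as above:

"clusters": [
    {
        "label": "default",
        "spark_conf": {
            "spark.sql.shuffle.partitions": "auto"
        }
    }
],

Would that be the expected approach for confs like spark.sql.shuffle.partitions, or should the "configuration" block above be enough on its own?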