Hi @daan_dw,

To reference variables defined in your databricks.yml in Python DAB code, define your variables class and use bundle.resolve_variable:

https://docs.databricks.com/aws/en/dev-tools/bundles/python/#access-bundle-variables
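A minimal sketch of that pattern (the variable name `warehouse_id` is a placeholder; see the linked docs page for the exact API):

```python
from databricks.bundles.core import Bundle, Variable, variables


# The same variable must also be declared under `variables:` in databricks.yml,
# e.g.:
#   variables:
#     warehouse_id:
#       description: SQL warehouse to run against
@variables
class Variables:
    warehouse_id: Variable[str]


def create_job(bundle: Bundle):
    # resolve_variable returns the concrete value for the active target
    warehouse_id = bundle.resolve_variable(Variables.warehouse_id)
    ...
```

This only runs inside a Python Databricks Asset Bundle (it needs the databricks-bundles package and a bundle context), so treat it as a pattern sketch rather than a standalone script.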
Hi @GiriSreerangam,

You can try using agent code tools to define Python functions directly within your agent for any custom logic, including Databricks SDK or REST API calls.

https://docs.databricks.com/aws/en/generative-ai/agent-framework/agent-tool ...
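At its core, a code tool is just a plain Python function the agent can invoke. A stdlib-only sketch of one that calls the workspace REST API directly (the endpoint path, API version, and env-var names here are illustrative assumptions, not from this thread; in practice you would register the function as a tool per the docs above, or use the Databricks SDK instead of raw urllib):

```python
import os
import urllib.request


def build_request(host: str, token: str, path: str) -> urllib.request.Request:
    """Build an authenticated GET request against the workspace REST API."""
    return urllib.request.Request(
        url=f"{host}/api/2.2{path}",
        headers={"Authorization": f"Bearer {token}"},
    )


def list_jobs_tool() -> str:
    """Example tool body: return the workspace's job list as raw JSON text.

    Reads DATABRICKS_HOST / DATABRICKS_TOKEN from the environment
    (illustrative; use whatever auth your agent runtime provides).
    """
    req = build_request(
        os.environ["DATABRICKS_HOST"],
        os.environ["DATABRICKS_TOKEN"],
        "/jobs/list",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()
```

The same shape works for Databricks SDK calls: keep the function self-contained with a clear docstring, since the agent uses that docstring to decide when to call the tool.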
Hi @stucas,

Adding the following configuration to the DLT pipeline will allow new pivoted columns to be merged into the target table automatically:

spark.databricks.delta.schema.autoMerge.enabled = true

However, DLT still requires a defined schema at init...
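In the pipeline's JSON settings this goes under the configuration map (fragment only; the pipeline's other fields are omitted, and the same key can be set in the UI under the pipeline's advanced configuration):

```
{
  "configuration": {
    "spark.databricks.delta.schema.autoMerge.enabled": "true"
  }
}
```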
Hi @ShankarM,

There isn't a direct way to package a Databricks notebook so that it can only be executed without exposing the code. The suggested way is to move your sensitive logic into a Python/Scala/Java package (for example, a .whl or .jar), upload...
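A rough sketch of that layout (all package, path, and function names here are made up for illustration):

```
my_pkg/
  pyproject.toml          # build metadata; `python -m build` produces dist/my_pkg-0.1-py3-none-any.whl
  src/my_pkg/core.py      # sensitive logic lives here, e.g. def run(df): ...

# The notebook the users run only installs and calls the packaged entry point:
%pip install /Volumes/main/default/libs/my_pkg-0.1-py3-none-any.whl
from my_pkg.core import run
```

One caveat: a plain .whl still ships readable .py source, so this hides the code from the notebook, not from anyone who can open the wheel; a compiled .jar (or compiled Python extensions) obscures more.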