Hello Databricks Team,
We are currently evaluating AgentBricks AI Agents (for example, Knowledge Assistant and Multi-Agent Supervisor) and would like to better understand their data security posture and the options for customizing the underlying models.
Data security & privacy
What data (user prompts, retrieved context, tool outputs, intermediate agent reasoning, etc.) is transmitted or persisted when using AgentBricks AI Agents?
Is this data logged, stored, or retained by Databricks services, and if so, for how long?
How does Databricks ensure data isolation and confidentiality, especially when agents interact with external tools or services?
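To make the retention question concrete: for standard Model Serving endpoints we would normally inspect captured payloads through the endpoint's inference table, but we are unsure whether the same mechanism applies to AgentBricks agent endpoints. Below is a minimal sketch of what we would run from a Databricks notebook, assuming such a table exists; the catalog, schema, and table names are placeholders on our side.

```python
# Databricks notebook sketch: inspect captured request/response payloads for an
# agent serving endpoint, assuming an inference table is enabled for it.
# The fully qualified table name below is hypothetical.
payloads = spark.sql(
    """
    SELECT timestamp_ms, status_code, request, response
    FROM main.agent_logs.knowledge_assistant_payload
    ORDER BY timestamp_ms DESC
    LIMIT 20
    """
)
display(payloads)  # `spark` and `display` are provided by the notebook runtime
```

If AgentBricks agents instead log prompts, retrieved context, or intermediate reasoning elsewhere (for example as MLflow traces), pointers to where that data lives and how long it is retained would be very helpful.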
Using custom or non-default models with AgentBricks
Is it possible to use a custom locally hosted or self-managed model (for example, a model downloaded and hosted outside Databricks) as the backing LLM for AgentBricks AI Agents?
If this is not supported directly, what are the recommended approaches to use a model other than the default Databricks-provided models (for example, via external model endpoints, API-based integration, or other supported mechanisms)?
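For context, the approach we currently have in mind is to register the self-hosted model as an external model behind a Databricks Model Serving endpoint and then point the agent at that endpoint. Here is a minimal sketch using the MLflow Deployments client, assuming our server exposes an OpenAI-compatible API; the endpoint name, secret reference, and base URL are placeholders on our side.

```python
from mlflow.deployments import get_deploy_client

client = get_deploy_client("databricks")

# Register a self-hosted, OpenAI-compatible server as an external model endpoint.
# Endpoint name, secret reference, and base URL below are hypothetical.
client.create_endpoint(
    name="self-hosted-llm",
    config={
        "served_entities": [
            {
                "name": "self-hosted-llm",
                "external_model": {
                    "name": "my-local-model",
                    "provider": "openai",  # OpenAI-compatible API
                    "task": "llm/v1/chat",
                    "openai_config": {
                        "openai_api_key": "{{secrets/llm/api_key}}",
                        "openai_api_base": "https://llm.internal.example.com/v1",
                    },
                },
            }
        ]
    },
)

# Quick smoke test against the new endpoint.
print(client.predict(
    endpoint="self-hosted-llm",
    inputs={"messages": [{"role": "user", "content": "Hello"}]},
))
```

Would an endpoint created this way be usable as the backing LLM for AgentBricks agents, or are they restricted to the Databricks-hosted foundation models?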
LLM Judges / evaluation models
Do the same constraints and options apply to LLM Judges used for evaluating agent or model responses?
Can LLM Judges be configured to use a non-default or externally hosted model, and are there any specific security or compliance considerations for this setup?
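For the judge question, the closest pattern we have found is MLflow's GenAI metrics, where the judge model is selected via a model URI. Below is a minimal sketch of what we mean by a non-default judge, assuming the `endpoints:/` URI accepts a custom serving endpoint; the endpoint name and evaluation data are placeholders.

```python
import mlflow
import pandas as pd
from mlflow.metrics.genai import answer_similarity

# Tiny evaluation set, purely for illustration.
eval_df = pd.DataFrame(
    {
        "inputs": ["What is Unity Catalog?"],
        "predictions": ["Unity Catalog is the governance layer for Databricks data assets."],
        "ground_truth": ["Unity Catalog is Databricks' unified governance solution."],
    }
)

# Point the LLM judge at a non-default endpoint (endpoint name hypothetical).
judge = answer_similarity(model="endpoints:/self-hosted-llm")

results = mlflow.evaluate(
    data=eval_df,
    predictions="predictions",
    targets="ground_truth",
    extra_metrics=[judge],
)
print(results.metrics)
```

Does a comparable pattern apply to the built-in LLM Judges used by AgentBricks and Agent Evaluation, and are there compliance implications (for example, evaluation data leaving the workspace) when the judge model is externally hosted?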
Any guidance, documentation references, or best-practice recommendations would be greatly appreciated.
Thank you in advance for your support.