Generative AI
Explore discussions on generative artificial intelligence techniques and applications within the Databricks Community. Share ideas, challenges, and breakthroughs in this cutting-edge field.

Multi-Agent Chatbot Optimization

Saurabh2406
New Contributor II

We have developed a multi-agent chatbot using LangGraph within the Databricks environment. The solution is functional, but we are facing challenges related to performance observability and end-to-end optimization.

We need guidance in the following areas:

  1. Tracing and Logging Enablement
    How to implement effective distributed tracing and structured logging across LangGraph agents, Databricks components, and external model calls to identify bottlenecks.

  2. Vector Index Optimization
    Best practices for optimizing our vector index (index type selection, parameters, retrieval tuning) to improve retrieval accuracy and reduce latency.

  3. Gemini External Model API Optimization
    Recommendations on improving performance and cost efficiency of Gemini API calls, including batching, streaming, prompt optimization, and retry patterns.

  4. Response Latency Analysis & Architecture Review
    We are experiencing higher-than-expected response latency and need help validating whether our current architecture and implementation are optimal, and identifying improvements where they are not.
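To make item 1 concrete, this is the style of correlated, structured logging we are aiming for: every log line is a JSON object carrying a per-request trace_id so events from different agents and external calls can be joined later. A plain-Python sketch with hypothetical names (get_agent_logger, log_event), not our current implementation:

```python
import json
import logging
import time
import uuid

# Hypothetical helpers: attach a shared trace_id to every log line so logs
# from different LangGraph agents and external model calls can be correlated.
def get_agent_logger(agent_name: str, trace_id: str) -> logging.LoggerAdapter:
    logger = logging.getLogger(agent_name)
    return logging.LoggerAdapter(logger, {"trace_id": trace_id, "agent": agent_name})

def log_event(adapter: logging.LoggerAdapter, event: str, **fields) -> dict:
    record = {
        "ts": time.time(),
        "trace_id": adapter.extra["trace_id"],
        "agent": adapter.extra["agent"],
        "event": event,
        **fields,
    }
    adapter.info(json.dumps(record))  # emit as one JSON line per event
    return record

# One trace_id per user request, passed to every agent in the graph.
trace_id = str(uuid.uuid4())
router_log = get_agent_logger("router_agent", trace_id)
log_event(router_log, "llm_call_start", model="gemini-2.5-pro")
```

The same trace_id could also be attached to tracing spans (for example via MLflow Tracing, which is the managed option on Databricks) so logs and trace timelines line up.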

Looking for expert insights, recommended configurations, code samples, or architectural guidance to help us tune the system for lower latency, better observability, and more efficient multi-agent performance.
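As an illustration of item 3, the retry shape we have in mind for Gemini calls is exponential backoff with full jitter. This is a generic sketch; TransientAPIError and call_with_backoff are illustrative names, not SDK symbols, and the real client's rate-limit (429) and server-error exceptions would go where TransientAPIError is:

```python
import random
import time

class TransientAPIError(Exception):
    """Stand-in for rate-limit or transient server errors from the API."""

def call_with_backoff(fn, max_attempts=5, base_delay=0.5, max_delay=8.0):
    """Retry fn() on transient errors with exponential backoff and full jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except TransientAPIError:
            if attempt == max_attempts:
                raise  # out of retries; surface the error to the caller
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(random.uniform(0, delay))  # full jitter avoids thundering herds
```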

1 REPLY

stbjelcevic
Databricks Employee

Hi @Saurabh2406 ,

This sounds like a fairly advanced use case. Are you in touch with your account team at Databricks? They can provide more detailed guidance and connect you with internal specialists.

In the meantime, these resources should help:

  • Tracing LangGraph
  • Vector Search Retrieval Quality Guide
  • Consider enabling AI Gateway for your external model endpoints to get traffic policies, payload logging, and rate limiting
  • Other general recommendations:
    • Prefer streaming responses for chat/completions to shave tail latency
    • For large batch-inference volumes, use the SQL/Python ai_query function when invoking Databricks‑hosted models (including Gemini 2.5 Pro/Flash) to process data at scale with automatic backend capacity management
    • Use MLflow trace timelines to find slow spans
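Before restructuring the architecture, it is worth measuring where the time actually goes per request. A minimal, framework-agnostic sketch (the stage names and sleeps are stand-ins for your real routing, retrieval, and model-call steps):

```python
import time
from contextlib import contextmanager

# Record wall-clock time per pipeline stage so the slowest span is obvious.
timings = {}

@contextmanager
def stage(name):
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = time.perf_counter() - start

# Stand-ins for real pipeline stages (routing, retrieval, LLM call, ...).
with stage("retrieval"):
    time.sleep(0.01)  # pretend vector index query
with stage("llm_call"):
    time.sleep(0.03)  # pretend Gemini request

slowest = max(timings, key=timings.get)  # the stage to optimize first
```

MLflow trace timelines give you the same breakdown automatically once tracing is enabled; this sketch is just the zero-dependency version of the idea.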
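On retrieval tuning, one common pattern is to over-fetch candidates from the approximate (ANN) index and then rerank them exactly on the client, trading a little latency for accuracy. A toy, framework-agnostic illustration (the IDs and 2-dimensional embeddings are made up; `candidates` stands in for raw hits from a vector index query):

```python
import math

def cosine(a, b):
    """Exact cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def rerank(query_vec, candidates, top_k):
    """candidates: list of (doc_id, embedding) pairs from an over-fetched query."""
    scored = [(doc_id, cosine(query_vec, emb)) for doc_id, emb in candidates]
    scored.sort(key=lambda pair: pair[1], reverse=True)  # best match first
    return scored[:top_k]

# e.g. over-fetch 50 from the index, keep 5 after exact reranking;
# here: 3 toy candidates, keep 2.
hits = rerank(
    query_vec=[0.1, 0.9],
    candidates=[("a", [0.1, 0.9]), ("b", [0.9, 0.1]), ("c", [0.2, 0.8])],
    top_k=2,
)
```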

 
