Generative AI
Explore discussions on generative artificial intelligence techniques and applications within the Databricks Community. Share ideas, challenges, and breakthroughs in this cutting-edge field.

Testing out Agentic Capabilities

Tinjar
New Contributor

So I am creating a POV on Databricks' agentic capabilities and want to showcase them through a simple change pipeline:

A user asks for a change to a specific table in a schema -> the table's details are retrieved from our lakehouse metadata -> the SQL for that table is fetched from a repository such as GitHub/Bitbucket -> the SQL is modified and tested -> the modified SQL is pushed back to the repo.

The approach I am currently considering is to go through the Databricks Assistant's Data Science Agent, providing it Python functions as tool calls and letting it call those functions from the notebook for each of these steps.
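
Roughly, the tool functions I have in mind would look something like this. All names, signatures, and the catalog/schema are placeholders I made up for illustration; `spark` is just the notebook session:

```python
# Sketch of the tool functions the agent would call (placeholder names, not real APIs).

def get_table_metadata(catalog: str, schema: str, table: str) -> dict:
    """Return column names and types for the target table from Unity Catalog metadata."""
    rows = spark.sql(
        f"""SELECT column_name, data_type
            FROM {catalog}.information_schema.columns
            WHERE table_schema = '{schema}' AND table_name = '{table}'"""
    ).collect()
    return {r["column_name"]: r["data_type"] for r in rows}


def fetch_sql_from_repo(file_path: str) -> str:
    """Fetch the table's SQL definition from GitHub/Bitbucket."""
    ...  # Git provider REST call


def modify_and_test_sql(sql_text: str, change_request: str) -> str:
    """Apply the requested change to the SQL and run it against a test schema."""
    ...  # LLM edit + execution check


def push_sql_to_repo(file_path: str, sql_text: str, message: str) -> None:
    """Commit the modified SQL back to the repository."""
    ...  # Git provider REST call
```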

My question is: is this viable in the first place? And is this the best way of tackling this use case using only Databricks' in-house agents? For context, the other POVs we are testing involve similar coding agents such as Codex.

1 REPLY

AbhaySingh
New Contributor

Yes, your approach seems fairly viable. Here are some thoughts, with rough sketches for the individual steps after the list.

Step-by-Step Viability


1. User request intake - supported via the Mosaic AI Agent Framework and the ResponsesAgent interface
2. Metadata retrieval - Unity Catalog functions can query INFORMATION_SCHEMA tables directly (sketch below)
3. Fetch SQL from repo - Databricks Repos API / Databricks SDK in Python UC functions, or the Git provider's REST API (sketch below)
4. Modify & test SQL - LLM-powered modification plus the SQL Statement Execution API for testing (sketch below)
5. Push to repo - the same Git tooling as the fetch step, committing the modified SQL back (sketch below)
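
For the metadata retrieval step (2), one way to expose INFORMATION_SCHEMA to the agent is a SQL table function registered in Unity Catalog. A minimal sketch, assuming a `main` catalog and an `agent_tools` schema that you would create yourself:

```python
# Minimal sketch: register a Unity Catalog SQL table function the agent can call
# as a tool for the metadata step. Catalog/schema names are placeholders.
spark.sql("""
CREATE OR REPLACE FUNCTION main.agent_tools.table_columns(p_schema STRING, p_table STRING)
RETURNS TABLE (column_name STRING, data_type STRING)
RETURN SELECT column_name, data_type
       FROM main.information_schema.columns
       WHERE table_schema = p_schema AND table_name = p_table
""")

# Quick sanity check from a notebook cell:
display(spark.sql("SELECT * FROM main.agent_tools.table_columns('sales', 'orders')"))
```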
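For step 3, if the SQL lives in GitHub, the contents endpoint of the GitHub REST API is enough to pull a single file. The owner, repo, path, branch, and token handling below are placeholders:

```python
# Minimal sketch: pull one SQL file from GitHub via the contents endpoint.
import base64
import requests

GITHUB_API = "https://api.github.com"
OWNER, REPO, FILE_PATH, BRANCH = "my-org", "dwh-sql", "models/orders.sql", "main"

def fetch_sql_from_repo(token: str) -> tuple[str, str]:
    """Return (sql_text, blob_sha); the sha is needed later to push the update."""
    url = f"{GITHUB_API}/repos/{OWNER}/{REPO}/contents/{FILE_PATH}"
    resp = requests.get(
        url,
        params={"ref": BRANCH},
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    payload = resp.json()
    sql_text = base64.b64decode(payload["content"]).decode("utf-8")
    return sql_text, payload["sha"]
```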
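For step 4, the SQL Statement Execution API (POST /api/2.0/sql/statements) can run the modified SQL on a SQL warehouse so the agent can verify it before committing. Host, warehouse id, and token are placeholders, and statements that outlive the wait timeout would need polling rather than the simple check below:

```python
# Minimal sketch: run the candidate SQL on a warehouse and report whether it succeeded.
import requests

def test_sql(host: str, token: str, warehouse_id: str, sql_text: str) -> bool:
    """host is the workspace URL, e.g. https://<workspace>.cloud.databricks.com"""
    resp = requests.post(
        f"{host}/api/2.0/sql/statements",
        headers={"Authorization": f"Bearer {token}"},
        json={
            "statement": sql_text,
            "warehouse_id": warehouse_id,
            "wait_timeout": "30s",  # longer-running statements need polling
        },
    )
    resp.raise_for_status()
    return resp.json()["status"]["state"] == "SUCCEEDED"
```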
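For step 5, the same GitHub contents endpoint accepts a PUT to update the file; it needs the blob sha returned by the fetch step. The repo coordinates are again placeholders:

```python
# Minimal sketch: commit the modified SQL back to the repo via the GitHub contents API.
import base64
import requests

GITHUB_API = "https://api.github.com"
OWNER, REPO, FILE_PATH, BRANCH = "my-org", "dwh-sql", "models/orders.sql", "main"

def push_sql_to_repo(token: str, sql_text: str, sha: str, message: str) -> None:
    """sha is the file's current blob sha from the fetch step."""
    url = f"{GITHUB_API}/repos/{OWNER}/{REPO}/contents/{FILE_PATH}"
    resp = requests.put(
        url,
        headers={"Authorization": f"Bearer {token}"},
        json={
            "message": message,
            "content": base64.b64encode(sql_text.encode("utf-8")).decode("ascii"),
            "sha": sha,
            "branch": BRANCH,
        },
    )
    resp.raise_for_status()
```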