So I am putting together a POV on Databricks' agentic capabilities and want to showcase them through a simple change pipeline.
The flow is: a user asks for changes to a specific table in a schema -> the table's info is retrieved from our lakehouse metadata -> the SQL for that table is fetched from a repository such as GitHub/Bitbucket -> the changes are applied to the SQL and tested -> the modified SQL is pushed back to the repo.
The approach I am currently considering is to use the Databricks Assistant / Data Science Agent, providing it with Python functions as tool calls and letting it invoke those functions from the notebook for each of these steps.
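For the tool-call wiring, a common pattern is to describe each function in an OpenAI-style function-calling schema plus a dispatch table the notebook loop uses to execute whichever tool the agent picks. This is a sketch under the assumption that Databricks' agent tooling accepts this style of spec (its LangChain/Mosaic AI integrations generally do); `fetch_sql_from_repo` and the schema fields are illustrative, not a confirmed Databricks API.

```python
def fetch_sql_from_repo(path: str) -> str:
    """Stub tool body; real version calls the GitHub/Bitbucket API."""
    return "SELECT 1"

# One schema entry per pipeline step; only one shown here for brevity.
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "fetch_sql_from_repo",
            "description": "Fetch the SQL file for a table from the repo.",
            "parameters": {
                "type": "object",
                "properties": {"path": {"type": "string"}},
                "required": ["path"],
            },
        },
    },
    # ... add specs for get_table_metadata, apply_change, test_sql, push_sql_to_repo
]

# Map tool names to implementations so the notebook can execute agent choices
TOOL_IMPLS = {"fetch_sql_from_repo": fetch_sql_from_repo}

def dispatch(name: str, args: dict) -> str:
    """Run the tool the agent selected with the arguments it supplied."""
    return TOOL_IMPLS[name](**args)
```

Keeping the schemas and the dispatch table side by side makes it easy to swap the agent frontend later (Databricks Assistant vs. an external coding agent) without touching the tool bodies.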
My questions are: is this viable in the first place? And is this the best way to tackle this use case using only Databricks' in-house agents? For context, the other POVs we are testing involve similar coding agents such as Codex.