- 33143 Views
- 22 replies
- 46 kudos
Databricks Announces the Industry’s First Generative AI Engineer Learning Pathway and Certification
Today, we are announcing the industry's first Generative AI Engineer learning pathway and certification to help ensure that data and AI practitioners have the resources to be successful with generative AI. At Databricks, we recognize that generative ...
Dear Certifications Team, I have completed the full Generative AI Engineering Pathway and received the module-wise knowledge badges, but I did not receive the overall certificate mentioned in the description, which is Generative AI Engineer with one star. Req...
- 216 Views
- 1 replies
- 0 kudos
Error: "Invalid model name" in Databricks AI Gateway when setting up Vertex AI endpoint
Hi everyone, I'm trying to set up a new serving endpoint in Databricks using Google Cloud Vertex AI as the provider. I want to route it to claude-opus-4-6. However, as soon as I try to create it, I get the following UI error (see screenshot): "Invalid m...
Hi @martkev -- good news: your model name is correct. The Vertex AI model ID for Claude Opus 4.6 is indeed claude-opus-4-6 (per Anthropic's official documentation). The issue is on the Databricks side -- the UI enforces an internal allowlist of recog...
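If the UI-side allowlist is the blocker, one common workaround is to create the endpoint through the serving-endpoints REST API instead of the UI. A minimal sketch of the create payload, assuming the external-model endpoint shape from the public REST API; the provider string, credential fields, project, and region below are illustrative assumptions to verify against your workspace's docs:

```python
import json

def build_vertex_endpoint_payload(endpoint_name: str, model_id: str) -> dict:
    """Build a create-serving-endpoint payload for an external Vertex AI model.

    The overall shape follows the Databricks serving-endpoints REST API;
    the provider key and google_cloud_vertex_ai_config fields are
    assumptions to check against the API reference.
    """
    return {
        "name": endpoint_name,
        "config": {
            "served_entities": [
                {
                    "name": endpoint_name,
                    "external_model": {
                        "name": model_id,                      # e.g. "claude-opus-4-6"
                        "provider": "google-cloud-vertex-ai",  # provider key (verify)
                        "task": "llm/v1/chat",
                        "google_cloud_vertex_ai_config": {
                            # Reference a Databricks secret holding the service-account key
                            "private_key": "{{secrets/my_scope/vertex_sa_key}}",
                            "project_id": "my-gcp-project",    # hypothetical
                            "region": "us-east5",              # hypothetical
                        },
                    },
                }
            ]
        },
    }

payload = build_vertex_endpoint_payload("vertex-claude", "claude-opus-4-6")
# POST this to {host}/api/2.0/serving-endpoints with an
# Authorization: Bearer <token> header, e.g. via the requests library.
print(json.dumps(payload, indent=2))
```

Since the allowlist check described above lives in the UI, an API-created endpoint may accept the model name even when the form rejects it.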
- 390 Views
- 1 replies
- 0 kudos
Petition to update your documentation about inferencing LLMs and standardize LLM response format
TL;DR: Inconsistent & Undocumented Response Format: Contrary to Databricks documentation, enabling thinking.type for Gemini 2.5 Flash changes the ChatCompletion content field from a standard string to a list of dictionaries, breaking expected behavior....
Hey @trickywhitecat , You're right that when thinking.type is enabled for Gemini 2.5 Flash on a Databricks serving endpoint, the content field comes back as a list of dictionaries instead of a plain string. That breaks the expected OpenAI ChatComplet...
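Until the response format is standardized, a defensive client-side normalizer can flatten either variant back to a plain string. A sketch, assuming the list parts are dicts with a "type" key (as described in the report above); the exact part shapes are an assumption:

```python
def normalize_content(content):
    """Return ChatCompletion message content as a plain string.

    Handles both the standard string form and the list-of-parts form
    that appears when thinking is enabled. Part dicts carrying a
    "type" key are an assumption based on the observed responses.
    """
    if isinstance(content, str):
        return content
    if isinstance(content, list):
        texts = []
        for part in content:
            if isinstance(part, dict):
                # Skip reasoning/thinking parts; keep only answer text.
                if part.get("type") in (None, "text"):
                    texts.append(part.get("text", ""))
            else:
                texts.append(str(part))
        return "".join(texts)
    return str(content)

print(normalize_content("hello"))  # → hello
print(normalize_content([{"type": "reasoning", "summary": "..."},
                         {"type": "text", "text": "hello"}]))  # → hello
```

Wrapping every endpoint response in a helper like this keeps downstream code working whether or not thinking is enabled.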
- 348 Views
- 1 replies
- 0 kudos
Resolved! ai_query and cached tokens
Is ai_query actually able to use OpenAI's cached tokens? I was not able to prove it. The response object from ai_query does not contain the raw response, and when I re-run an identical request via the OpenAI SDK (identical model, settings, etc.) and exa...
Great question -- this is a nuanced topic because there are two layers involved: Databricks' proxy layer and OpenAI's caching mechanism. Short answer: No, ai_query does not currently support OpenAI's prompt caching. 1. ai_query doesn't expose token u...
- 591 Views
- 3 replies
- 0 kudos
Tracing through model serving endpoint
I have deployed code running on LangGraph through a model serving endpoint. I want to trace the logs using MLflow, and I want traces recorded in the experiment whenever a user hits the serving endpoint. I have defined both of them in my code: mlflow.set...
Hi @srijan1881, The behavior you are seeing is expected when using a manually created model serving endpoint rather than one deployed through the Databricks Agent Framework. Here is a breakdown of why traces are not appearing and how to resolve it. U...
- 653 Views
- 4 replies
- 0 kudos
Resolved! Genie integration in Dashboards fails with "GenericChatCompletionServiceException" (Free Edition)
I am experiencing a persistent error when using the Genie Space integration within a Databricks Dashboard (v3/Lakeview). While Genie works perfectly as a standalone Space, it consistently fails when invoked via the Dashboard sidebar. Environment: Edition: ...
Yes, 100% -- any user would need SELECT permissions on the table that the Genie Space is created on top of.
- 3133 Views
- 4 replies
- 6 kudos
Managed MCP Server for Visual Studio Code and GitHub Copilot?
Hi! I am starting to explore the new managed Model Context Protocol (MCP) server with GitHub Copilot. I have successfully configured it to use the DBSQL MCP Server that you currently find in AI/ML -> Agents -> MCP Servers. As also shown in this post i...
Hi excavator-matt, Thanks for the follow-up and glad to hear you got Option C (PAT-based) working with Copilot and VSCode, and that you have moved to Claude Code with the official skills. Regarding the issues with Options A and B: Option A (OAuth U2M...
- 197 Views
- 1 replies
- 0 kudos
Ethical Data Governance
Title: Why Responsible AI Needs to Be a First-Class Engineering Practice (Not an Afterthought). AI teams are moving faster than ever — but the industry is learning that speed without governance creates real downstream risk. Most “Responsible AI” failures...
Appreciate anyone who reads through this. I’m curious how teams are implementing governance controls in Databricks today — things like automated validation, model documentation, or lineage tracking through Unity Catalog. If you’ve built guardrails th...
- 571 Views
- 4 replies
- 3 kudos
Testing ai_parse_document vs PyMuPDF for PDF extraction
I’ve been experimenting with the Databricks AI functions and recently ran a small test extracting structured information from a PDF document. My initial approach was to use ai_parse_document to extract the text from the PDF. While the function appeared...
Great observations — this is a pattern several of us have run into. The short answer is: your PyMuPDF + ai_query workflow is the right approach for digitally-born PDFs, and here's why. Why ai_parse_document can get names/identifiers wrong: ai_parse_docu...
- 725 Views
- 1 replies
- 1 kudos
Resolved! Claude Code User Usage Tracking
I am using Databricks Model Serving as a proxy to connect Claude Code. I established the connection through Integrate coding agents (Connect coding agents to Databricks) by generating the environment configuration: { "env": { "ANTHROPIC_MODEL": "dat...
Hi @JoaoPigozzo, After some investigation, I found that when you use Databricks Model Serving as a proxy for Claude Code, what you see in system.billing.usage is expected. That table is designed for cost attribution by SKU / endpoint, not per‑user...
- 1397 Views
- 2 replies
- 0 kudos
Resolved! DatabricksVectorSearch seems to crash when served
Hey y'all! So I'm experimenting with Databricks' DatabricksVectorSearch class in Python to serve as a tool that can be used by an agent. When I run it in a notebook, I get the following error: "[NOTICE] Using a notebook authentication token. Recom...
@Louis_Frolio Are there audit logs or visibility into which service principal is being used and what access it has? Another question: With automatic authentication passthrough, do we still need to set DATABRICKS_HOST and DATABRICKS_TOKEN as environme...
- 505 Views
- 1 replies
- 1 kudos
Resolved! Databricks Apps Streaming issue
I have a Next.js Databricks App with streaming enabled by setting "stream": true in the JSON body, which tells the server to return a streaming response (SSE format). This works just fine when I run the app locally via npm run dev, but once I deploy the ...
What resolved this issue was setting the following headers:
- Content-Type: "text/event-stream"
- Connection: "keep-alive"
- Transfer-Encoding: "chunked"
These headers help prevent proxy interference: Connection: "keep-alive" explicitly tells proxi...
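The header set above can be captured in a small helper. A Python sketch (the original app is Next.js, so this is illustrative, not the author's code); the extra Cache-Control and X-Accel-Buffering entries are common anti-buffering additions and an assumption, not part of the reported fix:

```python
def sse_headers() -> dict:
    """Headers that keep intermediate proxies from buffering or
    closing a server-sent-events stream. The first three are the
    values from the fix; the last two are common additions (assumption)."""
    return {
        "Content-Type": "text/event-stream",
        "Connection": "keep-alive",
        "Transfer-Encoding": "chunked",
        "Cache-Control": "no-cache",   # discourage caching of the stream
        "X-Accel-Buffering": "no",     # disables buffering in nginx-style proxies
    }

print(sse_headers()["Content-Type"])  # → text/event-stream
```

The same dict can be passed to whatever streaming response object your framework uses.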
- 464 Views
- 2 replies
- 1 kudos
Resolved! Feedback not showing up in Genie from Copilot Studio Genie Agent
Hi, We've created an Agent using Copilot Studio for Genie and integrated it with a Teams Channel. The feedback there is working and we can see the reactions in the Copilot Studio Analytics. But the feedback is not going to the actual Genie space, neither the...
Hi @souravg, @Ale_Armillotta is right. At the moment, Genie only records feedback (thumbs up/down, "Fix it", comments) when it’s given directly in the Genie UI. The public Genie Conversation APIs that Copilot Studio/Teams use don’t expose any endpoin...
- 638 Views
- 1 replies
- 1 kudos
Resolved! How to get MLflow OpenAI autolog traces from PySpark mapInPandas workers (and some pitfalls)
Context: I'm running an LLM pipeline on Databricks that distributes OpenAI API calls across Spark workers via mapInPandas. Getting mlflow.openai.autolog() to work on workers required solving three undocumented issues. Sharing here since I couldn't find...
Greetings @Jayachithra, I did some digging and came up with some helpful tips/hints to help you along. On Issue 1 (explicit MLflow context): expected behavior once you realize that mapInPandas spawns isolated Python worker processes, not threads. ...
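The per-worker initialization this points at can be sketched without Spark: initialize the MLflow context inside the mapped function, which runs in the worker process, rather than at driver scope. MLFLOW_TRACKING_URI and MLFLOW_EXPERIMENT_ID are real MLflow environment variables; the helper name and the mlflow calls shown as comments are illustrative:

```python
import os

def make_worker_fn(tracking_uri: str, experiment_id: str):
    """Return a function suitable for mapInPandas. It (re)initializes
    MLflow context inside the worker process, since executors are
    separate Python processes that don't inherit the driver's
    in-memory MLflow state."""
    def worker(batches):
        # Runs on the executor, once per worker process:
        os.environ["MLFLOW_TRACKING_URI"] = tracking_uri
        os.environ["MLFLOW_EXPERIMENT_ID"] = experiment_id
        # import mlflow
        # mlflow.openai.autolog()  # enable autolog per worker (sketch)
        for batch in batches:
            yield batch
    return worker

# Driver side: pass plain strings into the closure, never live objects.
fn = make_worker_fn("databricks", "12345")
out = list(fn([{"a": 1}]))
```

The key design point is that only picklable configuration (strings) crosses the driver/worker boundary; each worker rebuilds its own clients from it.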
- 1629 Views
- 20 replies
- 27 kudos
How Much Has AI Actually Changed Your Day to Day?
Community, I'm genuinely curious: Describe your workday two years ago vs. today in a sentence or two. I'll go first: then, I spent half my day context-switching between Drive, Sheets, Docs, and Slack just trying to find what I needed. Now, I vibe cod...
What I think is that AI hasn’t replaced the hard thinking, it has just removed a lot of the grunt work. These days I spend more time on judgment, review, and refinement, and less on searching, drafting, and repetitive setup. For Databricks specifically, th...
- 254 Views
- 2 replies
- 0 kudos
Can I publish the query stored in the "Query History"?
I’m using an agent within a Databricks app that converts plain English into SQL, executes the query against a warehouse table, and returns the results. I’d like to know if there’s a way to also surface or publish the generated SQL query within the app...
You may have to add a call to the Query History API with the necessary filter_by. https://docs.databricks.com/api/workspace/queryhistory/list
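A sketch of calling that API. The /api/2.0/sql/history/queries path comes from the linked docs; encoding filter_by fields as dotted query parameters, and the example host and warehouse ID, are assumptions to verify against the API reference:

```python
def query_history_url(host: str, warehouse_id: str, max_results: int = 25) -> str:
    """Build the Query History list URL filtered to one warehouse.

    Dotted filter_by query parameters are an assumption -- check the
    API reference for the exact encoding your workspace expects.
    """
    return (f"{host}/api/2.0/sql/history/queries"
            f"?filter_by.warehouse_ids={warehouse_id}"
            f"&max_results={max_results}")

url = query_history_url("https://example.cloud.databricks.com", "abc123")
# GET this URL with an Authorization: Bearer <token> header (e.g. via
# urllib.request or the requests library) and read the "res" list of
# query records from the JSON body.
print(url)
```

Filtering by the warehouse ID your app executes against keeps the result set to the queries the agent actually generated.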
Labels:
- agent (2)
- agent bricks (2)
- Agent Skills (1)
- agents (2)
- AI (2)
- AI Agents (10)
- ai gateway (2)
- Anthropic (1)
- API Documentation (1)
- App (3)
- Application (1)
- Asset Bundles (1)
- Authentication (1)
- Autologging (1)
- automoation (1)
- Aws databricks (2)
- ChatDatabricks (1)
- claude (5)
- Cluster (1)
- Credentials (1)
- crewai (1)
- cursor (1)
- Databricks App (3)
- Databricks Course (1)
- Databricks Delta Table (1)
- Databricks Mlflow (1)
- Databricks Notebooks (1)
- Databricks SQL (1)
- Databricks Table Usage (1)
- Databricks-connect (1)
- databricksapps (1)
- delta sync (1)
- Delta Tables (1)
- Developer Experience (1)
- DLT Pipeline (1)
- documentation (1)
- Ethical Data Governance (1)
- Foundation Model (4)
- gemini (1)
- gemma (1)
- GenAI (11)
- GenAI agent (2)
- GenAI and LLMs (4)
- GenAI Generation AI (1)
- GenAIGeneration AI (38)
- Generation AI (2)
- Generative AI (5)
- Genie (18)
- Genie - Notebook Access (2)
- GenieAPI (4)
- Google (1)
- GPT (1)
- healthcare (1)
- Index (1)
- inference table (1)
- Information Extraction (1)
- Langchain (4)
- LangGraph (1)
- Llama (1)
- Llama 3.3 (1)
- LLM (2)
- machine-learning (1)
- mcp (2)
- MlFlow (4)
- Mlflow registry (1)
- MLModels (1)
- Model Serving (3)
- modelserving (1)
- mosic ai search (1)
- Multiagent (2)
- NPM error (1)
- OpenAI (1)
- Pandas udf (1)
- Playground (1)
- productivity (1)
- Pyspark (1)
- Pyspark Dataframes (1)
- RAG (3)
- ro (1)
- Scheduling (1)
- Server (1)
- serving endpoint (3)
- streaming (2)
- Tasks (1)
- Vector (1)
- vector index (1)
- Vector Search (2)
- Vector search index (6)