Generative AI
Explore discussions on generative artificial intelligence techniques and applications within the Databricks Community. Share ideas, challenges, and breakthroughs in this cutting-edge field.

Forum Posts

Sujitha
by Databricks Employee
  • 33151 Views
  • 22 replies
  • 46 kudos

Databricks Announces the Industry’s First Generative AI Engineer Learning Pathway and Certification

Today, we are announcing the industry's first Generative AI Engineer learning pathway and certification to help ensure that data and AI practitioners have the resources to be successful with generative AI. At Databricks, we recognize that generative ...

Latest Reply
Darshan137
New Contributor II
  • 46 kudos

Dear Certifications Team, I have completed the full Generative AI Engineering Pathway, so I received the module-wise knowledge badges, but I didn't receive the overall certificate mentioned in the description, which is Generative AI Engineer with one star. Req...

21 More Replies
Yash01Kumar12
by Databricks Partner
  • 1044 Views
  • 3 replies
  • 0 kudos

Agent Bricks Information Extraction

I am facing some problems with information extraction from PDFs. I have done all the necessary steps. 1) I loaded the data into a Volume. 2) I ran the Use PDFs functionality to create a structured table of the PDFs. 3) I now have the table with the column name...

Latest Reply
Yash01Kumar12
Databricks Partner
  • 0 kudos

Thank you @NandiniN for responding. For now, I have changed the type of "raw_parsed" to string and created a vector index after that. But can you help me with one more thing? I am creating a Multi-Agent-Supervisor; can you explain the need for OBO U...

2 More Replies
JohnnyA
by New Contributor II
  • 2112 Views
  • 2 replies
  • 2 kudos

Resolved! Using Genie Conversational API with External Users and Data-Level Security

We are planning to implement a chat interface in our portal application using the Genie Conversational API, where clients, partners, and internal users can ask questions in natural language and receive answers based on our data. I have the following q...

Latest Reply
Isi
Honored Contributor III
  • 2 kudos

Hello @JohnnyA, I'll try to explain some ideas and hope something works for you, because I don't have the whole context. 1) Authentication & authorization for external users. Recommended (best practice): federated identity + OBO. Your portal authenticates with ...

1 More Replies
ChrisLawford_n1
by Contributor II
  • 2031 Views
  • 2 replies
  • 0 kudos

Resolved! Clarification on Databricks Claude 3.7 Sonnet native serving endpoint data processing location.

Hello, I am trying to clarify whether the Databricks Claude 3.7 native serving endpoint is safe for use in our enterprise or whether it needs to be disabled. I want to understand whether the data sent to this endpoint is kept within our geo region (UKSouth/Europe)....

Latest Reply
Krishna_S
Databricks Employee
  • 0 kudos

Hi @ChrisLawford_n1 As you can see in the documentation, Databricks Claude 3.7 in UKSouth can only be used where it is supported based on GPU availability, and it requires cross-geo routing to be enabled, so cross-geo processing that allows data for ...

1 More Replies
Dulce42
by New Contributor II
  • 6341 Views
  • 1 reply
  • 1 kudos

Resolved! API GENIE

Hi community! Yesterday I tried to extract the chat history from my Genie spaces, but I can't export chats from other users. I get the following error: {'error_code': 'PERMISSION_DENIED', 'message': 'User XXXXXXXX does not own conversation XXXXXXXX', 'details': [{...

Latest Reply
mark_ott
Databricks Employee
  • 1 kudos

Based on current documentation and available resources, exporting chat histories from Genie Spaces is restricted by ownership rules: only the user who owns the conversation can export that specific chat history, regardless of admin permissions or wor...

PiotrM
by New Contributor III
  • 2254 Views
  • 7 replies
  • 10 kudos

Resolved! ai_query not affected by AI gateway's rate limits?

Hey, we've been testing ai_query (on Azure Databricks) against preconfigured model serving endpoints like databricks-meta-llama-3-3-70b-instruct, and the initial results look nice. I'm trying to limit the number of requests that could be sent to those...

Latest Reply
PiotrM
New Contributor III
  • 10 kudos

Hey @BS_THE_ANALYST, before writing that post I went through exactly the docs you've posted. I wasn't able to find specific confirmation (or denial) that this function is affected by the rate limits, which led me to believe that it's worth a...

6 More Replies
qlmahga2
by New Contributor III
  • 1625 Views
  • 2 replies
  • 1 kudos

Finally fix Claude Opus

It's been almost 3 months since the announcement of first-party Anthropic Claude Opus 4 on Databricks Mosaic AI Model Serving, but the model is still unavailable. It's listed in the pricing and documentation, but on the Serving endpoints page it's be...

Latest Reply
jack_zaldivar
Databricks Employee
  • 1 kudos

Hi @qlmahga2 , it appears that Claude Sonnet 4 and Claude Sonnet 4.5 are both currently available. I'm not sure what it looked like when you asked this question, as I didn't check at that time. However, it looks like they should be available now. Can...

1 More Replies
Nobuhiko
by New Contributor III
  • 3193 Views
  • 8 replies
  • 8 kudos

Resolved! Can I get the notebooks used in an e-learning video?

I'm currently watching the videos in the "Generative AI Engineering Pathway." In the "Demo" chapters, it appears that the instructor is explaining based on pre-prepared notebooks (for example, a notebook named "2.1 - Preparing Data for RAG"). Would it...

Latest Reply
Nobuhiko
New Contributor III
  • 8 kudos

Hello @Advika! Could you provide details about "the lab materials"? Are they just notebooks, or do they include other materials like raw data files to be processed, environment variables (like "DA"), scripts, etc.? I'm very new to Databricks, so it wou...

7 More Replies
GiriSreerangam
by Databricks Partner
  • 1813 Views
  • 4 replies
  • 2 kudos

AI Agents - calling custom code and databricks jobs

Hi everyone, I am building AI agents; my requirement is to call custom tool logic (which was not possible using a Unity Catalog function) and Databricks jobs. I could not find much documentation on these scenarios. If someone could share any referen...

Latest Reply
jamesl
Databricks Employee
  • 2 kudos

Hi @GiriSreerangam, can you share what you are trying to do with the custom tool? You might be able to implement it with a custom MCP server. Here are some other resources that may help: https://github.com/JustTryAI/databricks-mcp-server https://github...

3 More Replies
Phani1
by Databricks MVP
  • 2764 Views
  • 2 replies
  • 2 kudos

Export/Share Genie Space Across DEV, QA, and PROD Environments

Hi Team, What is the procedure for exporting a Genie space across multiple workspace environments such as DEV, QA, and PROD? Can you provide any details around this? Thanks, Phani

Latest Reply
spoltier
New Contributor III
  • 2 kudos

@Pilsner When I try to export, I get "Error: dataRoom is not user-facing." Trying to download in the workspace browser results in a 400 Bad Request. The bundle documentation for AWS and Azure does not mention Genie spaces as existing resources. See the r...

1 More Replies
brahaman
by New Contributor II
  • 788 Views
  • 1 reply
  • 1 kudos

Resolved! Vectorisation job automation and errors

Hey there! So I'm fairly new to AI and RAG, and at the moment I'm trying to automatically vectorise documents (.pdf, .txt, etc.) each time a new file arrives in a volume that I created. For that, I created a job that's triggered each time a new file...

Latest Reply
mark_ott
Databricks Employee
  • 1 kudos

To address the question about automating and optimizing document vectorization pipelines (PDF, TXT, etc.), such as the Databricks unstructured data pipeline, given the challenges around HuggingFace model downloads and job flexibility, here are insights and alt...

prakashhinduja1
by New Contributor
  • 643 Views
  • 1 reply
  • 1 kudos

Resolved! Prakash Hinduja Switzerland (Swiss) How can I manage spending while optimizing compute resources?

Hi, I am Prakash Hinduja, a visionary financial strategist born in Amritsar (India) and now living in Geneva, Switzerland. I'm looking for advice on how to better manage costs in Databricks while still keeping performance efficient. If you've foun...

Latest Reply
mark_ott
Databricks Employee
  • 1 kudos

To optimize costs in Databricks while maintaining strong performance, consider a blend of strategic cluster configurations, autoscaling, aggressive job scheduling, and robust monitoring tools. These proven practices are used by leading enterprises in...

chunky35
by New Contributor
  • 2222 Views
  • 1 reply
  • 1 kudos

Resolved! streaming llm response

I am deploying an agent that works well without streaming. It is using the following packages: "mlflow==2.22.1", "langgraph", "langchain", "pydantic==2.8.2", "langgraph-checkpoint-sqlite", "databricks-langchain", "p...

Latest Reply
mark_ott
Databricks Employee
  • 1 kudos

To implement streaming output for your agent in Databricks and resolve the error "This model does not support predict_stream method.", the key requirement is that your underlying MLflow model must support the predict_stream method. Most l...
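The contract described above can be sketched in a few lines. This is a toy illustration only: in a real deployment the class would subclass mlflow.pyfunc.PythonModel and be logged and served on Databricks, and the class name and chunk values here are made up.

```python
# Minimal sketch of the streaming contract (assumption: a real agent
# subclasses mlflow.pyfunc.PythonModel; mlflow itself is omitted here).
class StreamingAgent:
    def predict(self, context, model_input, params=None):
        # Non-streaming path: return the full answer at once.
        return "".join(self._chunks(model_input))

    def predict_stream(self, context, model_input, params=None):
        # Streaming path: yield chunks as they are produced, so the
        # serving layer can forward them incrementally to the client.
        yield from self._chunks(model_input)

    def _chunks(self, model_input):
        # Stand-in for token-by-token LLM output.
        for token in ["Hello", ", ", "world"]:
            yield token


agent = StreamingAgent()
print(list(agent.predict_stream(None, {"prompt": "hi"})))  # chunks one by one
print(agent.predict(None, {"prompt": "hi"}))               # full response
```

If the serving layer raises "This model does not support predict_stream method.", the logged model is exposing only predict; defining predict_stream as a generator like this (on a recent MLflow version) is what enables the streaming endpoint.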

HemantvIkani32
by New Contributor II
  • 1620 Views
  • 5 replies
  • 3 kudos

Model deprecation issue while serving on Databricks

I am facing the error below while serving a CodeLlama model: Exception: Request failed with status 400, {"error_code":"INVALID_PARAMETER_VALUE","message":"The provisioned throughput model Llama 2 7B is deprecated and no longer supported in serving. See...

Latest Reply
Advika
Community Manager
  • 3 kudos

Hello @HemantvIkani32! Did the responses shared above help resolve your concern? If yes, please consider marking the relevant response(s) as the accepted solution.

4 More Replies
dk_g
by New Contributor
  • 751 Views
  • 1 reply
  • 0 kudos

How to utilize clustered gpu for large hf models

Hi, I am using a GPU cluster (driver: 1 GPU, workers: 3 GPUs) and caching model data in Unity Catalog, but while loading model checkpoint shards it always uses driver memory and fails due to insufficient memory. How can I use the complete cluster's GPUs while loadi...

Latest Reply
lin-yuan
Databricks Employee
  • 0 kudos

1. Are you using a model-parallel library such as FSDP or DeepSpeed? Otherwise, every GPU will load the entire model weights. 2. If yes to 1, Unity Catalog Volumes are exposed on every node at /Volumes/<catalog>/<schema>/<volume>/..., so w...
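The first point can be made concrete with a back-of-the-envelope sketch. The parameter count, precision, and world size below are illustrative assumptions, not a real FSDP or DeepSpeed API; the point is only the memory difference between replicated and sharded loading.

```python
# Toy illustration: per-rank parameter memory with and without sharding.
def param_bytes_per_rank(total_params, bytes_per_param, world_size, sharded):
    full = total_params * bytes_per_param
    # Without a model-parallel library, every rank holds the full weights;
    # with sharding, each rank materializes roughly 1/world_size of them.
    return full // world_size if sharded else full


total = 7_000_000_000  # e.g. a 7B-parameter model (assumed size)
fp16 = 2               # bytes per parameter in fp16

print(param_bytes_per_rank(total, fp16, 4, sharded=False))  # ~14 GB on every GPU
print(param_bytes_per_rank(total, fp16, 4, sharded=True))   # ~3.5 GB per GPU
```

This is why loading checkpoint shards without a model-parallel loader exhausts the driver GPU: the full ~14 GB lands on one device instead of being split across the four GPUs in the cluster.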

gascarin
by New Contributor II
  • 1787 Views
  • 3 replies
  • 1 kudos

AgentBricks Query Error

Has anyone gotten: [REMOTE_FUNCTION_HTTP_FAILED_ERROR] The remote HTTP request failed with code 400, and error message 'HTTP request failed with status: {"error_code": "BAD_REQUEST", "message": "Errored due to unknown error. Please contact Databricks sup...

Latest Reply
Krishna_S
Databricks Employee
  • 1 kudos

Hi @gascarin Is this an optimized endpoint or the base endpoint? Try optimizing the endpoint and check again. Are you running ai_query? Can you please run it with a very simple and short prompt to see if it works?

2 More Replies