<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Error Calling Llama Guard Model from Databricks Marketplace after deploying the model in Generative AI</title>
    <link>https://community.databricks.com/t5/generative-ai/error-calling-llama-guard-model-from-databricks-marketplace/m-p/103913#M680</link>
    <description>Re: Error Calling Llama Guard Model from Databricks Marketplace after deploying the model in Generative AI</description>
    <pubDate>Thu, 02 Jan 2025 12:53:17 GMT</pubDate>
    <dc:creator>Alberto_Umana</dc:creator>
    <dc:date>2025-01-02T12:53:17Z</dc:date>
    <item>
      <title>Error Calling Llama Guard Model from Databricks Marketplace after deploying the model</title>
      <link>https://community.databricks.com/t5/generative-ai/error-calling-llama-guard-model-from-databricks-marketplace/m-p/103865#M677</link>
      <description>&lt;P&gt;&lt;SPAN&gt;I am facing issues calling the Llama Guard model after deploying it to Model Serving on Databricks. I obtained the model by requesting instant access from Databricks Marketplace and then deployed it to Model Serving.&lt;BR /&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;I am following the code examples here:&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;&lt;A href="https://github.com/databricks/databricks-ml-examples/blob/master/llm-models/safeguard/llamaguard/Llama_Guard_Demo_with_Databricks_marketplace_simplified_pii_detect.ipynb" target="_blank" rel="noopener"&gt;https://github.com/databricks/databricks-ml-examples/blob/master/llm-models/safeguard/llamaguard/Llama_Guard_Demo_with_Databricks_marketplace_simplified_pii_detect.ipynb&lt;/A&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;This is my function:&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="python"&gt;import mlflow
import mlflow.deployments
import re

def query_llamaguard(chat, unsafe_categories=DEFAULT_UNSAFE_CATEGORY):
    """
    Queries LlamaGuard model to check if the chat contains unsafe content based on predefined categories.
    """
    try:
        client = mlflow.deployments.get_deploy_client("databricks")

        # Constructing the payload for the LlamaGuard API

        query_payload = [
           f"""[INST] Task: Check for unsafe content in 'Agent' messages in conversations according to our safety policy.
          &amp;lt;BEGIN UNSAFE CONTENT CATEGORIES&amp;gt;
          {unsafe_categories}
          &amp;lt;END UNSAFE CONTENT CATEGORIES&amp;gt;
          &amp;lt;BEGIN CONVERSATION&amp;gt;
          {chat}
          &amp;lt;END CONVERSATION&amp;gt;
          Provide your safety assessment for Agent in the above conversation:
          - First line must read 'safe' or 'unsafe'.
          - If unsafe, a second line must include a comma-separated list of violated categories. [/INST]"""
        ]

        response = client.predict(
            endpoint=LLAMAGUARD_ENDPOINT_NAME,
            inputs={"dataframe_split": {
                    "index": [0],
                    "columns": ["prompt"],
                    "data": [query_payload]
                    }
                })
        
        # Extract the desired information from the response object
        prediction = response.predictions[0]["candidates"][0]["text"].strip()
        is_safe = None if len(prediction.split("\n")) == 1 else prediction.split("\n")[1].strip()
        
        return prediction.split("\n")[0].lower() == 'safe', is_safe
    
    except Exception as e:
        raise Exception(f"Error in querying LlamaGuard model: {str(e)}")&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Thereafter, I call the Llama Guard model:&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="python"&gt;safe_user_chat = [
  {
      "role": "user",
      "content": "I want to love."
  }
]

query_llamaguard(safe_user_chat)&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;This is the error I faced: Error in querying LlamaGuard model: 400 Client Error: Bad Request for url: &lt;/SPAN&gt;&lt;A href="https://westus.azuredatabricks.net/serving-endpoints/llama-guard/invocations" target="_blank" rel="noopener noreferrer"&gt;https://&amp;lt;workspace&amp;gt;/serving-endpoints/llama-guard/invocations&lt;/A&gt;&lt;SPAN&gt;. Response text: Bad request: json: unknown field "dataframe_split"&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 02 Jan 2025 07:32:07 GMT</pubDate>
      <guid>https://community.databricks.com/t5/generative-ai/error-calling-llama-guard-model-from-databricks-marketplace/m-p/103865#M677</guid>
      <dc:creator>javieryw</dc:creator>
      <dc:date>2025-01-02T07:32:07Z</dc:date>
    </item>
    <item>
      <title>Re: Error Calling Llama Guard Model from Databricks Marketplace after deploying the model</title>
      <link>https://community.databricks.com/t5/generative-ai/error-calling-llama-guard-model-from-databricks-marketplace/m-p/103913#M680</link>
      <description>&lt;P&gt;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/137852"&gt;@javieryw&lt;/a&gt;,&lt;/P&gt;
&lt;P class="p1"&gt;The error you are encountering, "400 Client Error: Bad Request for url: &lt;A href="https://westus.azuredatabricks.net/serving-endpoints/llama-guard/invocations" target="_blank"&gt;&lt;SPAN class="s1"&gt;https://westus.azuredatabricks.net/serving-endpoints/llama-guard/invocations&lt;/SPAN&gt;&lt;/A&gt;. Response text: Bad request: json: unknown field 'dataframe_split'", indicates that the payload structure you are using is not recognized by the Llama Guard Model Serving endpoint.&lt;/P&gt;
&lt;P class="p2"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="p1"&gt;Based on the context provided, the "dataframe_split" wrapper is the input format for classic custom-model serving endpoints; the Llama Guard model from the Databricks Marketplace is served behind an optimized LLM endpoint, which expects a completions-style payload instead. So the inputs should use a simpler structure with a plain "prompt" field.&lt;/P&gt;
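&lt;P class="p2"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="p1"&gt;To make the difference concrete, here is a minimal sketch of the two payload shapes, with field values elided. The first is the classic custom-model serving format that triggers the error; the second is the completions-style format the Llama Guard endpoint accepts:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Rejected by the Llama Guard endpoint ("unknown field 'dataframe_split'"):
inputs = {"dataframe_split": {"index": [0], "columns": ["prompt"], "data": [["..."]]}}

# Accepted -- a completions-style payload with a plain "prompt" field:
inputs = {"prompt": ["..."]}&lt;/LI-CODE&gt;
&lt;P class="p1"&gt;Here is the full function updated accordingly:&lt;/P&gt;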
&lt;P class="p2"&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;import mlflow
import mlflow.deployments

# Assumes DEFAULT_UNSAFE_CATEGORY and LLAMAGUARD_ENDPOINT_NAME are defined
# earlier, as in the linked Databricks example notebook.
def query_llamaguard(chat, unsafe_categories=DEFAULT_UNSAFE_CATEGORY):
    """
    Queries the LlamaGuard model to check if the chat contains unsafe content
    based on predefined categories.
    """
    try:
        client = mlflow.deployments.get_deploy_client("databricks")

        # Construct the prompt for the LlamaGuard model
        query_payload = [
            f"""[INST] Task: Check for unsafe content in 'Agent' messages in conversations according to our safety policy.
            &amp;lt;BEGIN UNSAFE CONTENT CATEGORIES&amp;gt;
            {unsafe_categories}
            &amp;lt;END UNSAFE CONTENT CATEGORIES&amp;gt;
            &amp;lt;BEGIN CONVERSATION&amp;gt;
            {chat}
            &amp;lt;END CONVERSATION&amp;gt;
            Provide your safety assessment for Agent in the above conversation:
            - First line must read 'safe' or 'unsafe'.
            - If unsafe, a second line must include a comma-separated list of violated categories. [/INST]"""
        ]

        # Completions-style payload: a "prompt" field, no "dataframe_split" wrapper
        response = client.predict(
            endpoint=LLAMAGUARD_ENDPOINT_NAME,
            inputs={"prompt": query_payload}
        )

        # Extract the generated text from the response object
        prediction = response.predictions[0]["candidates"][0]["text"].strip()
        is_safe = None if len(prediction.split("\n")) == 1 else prediction.split("\n")[1].strip()

        return prediction.split("\n")[0].lower() == 'safe', is_safe

    except Exception as e:
        raise Exception(f"Error in querying LlamaGuard model: {str(e)}")

# Example usage
safe_user_chat = [
    {
        "role": "user",
        "content": "I want to love."
    }
]

query_llamaguard(safe_user_chat)&lt;/LI-CODE&gt;
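&lt;P class="p2"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="p1"&gt;As a cross-check, here is a minimal sketch of the same call made directly against the endpoint's REST API with the requests library. The workspace URL, endpoint name, and token below are placeholders, and the payload shape assumes the endpoint accepts completions-style requests:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;import requests

# Placeholders -- substitute your own workspace URL, endpoint name, and token
WORKSPACE_URL = "https://&amp;lt;workspace&amp;gt;.azuredatabricks.net"
ENDPOINT_NAME = "llama-guard"
TOKEN = "&amp;lt;databricks-personal-access-token&amp;gt;"

response = requests.post(
    f"{WORKSPACE_URL}/serving-endpoints/{ENDPOINT_NAME}/invocations",
    headers={"Authorization": f"Bearer {TOKEN}"},
    # No "dataframe_split" wrapper -- just the "prompt" field
    json={"prompt": ["[INST] ... [/INST]"]},
)
response.raise_for_status()
print(response.json())&lt;/LI-CODE&gt;
&lt;P class="p1"&gt;If this direct call succeeds with the simplified payload, the deployment itself is healthy and the original error was purely a payload-format issue.&lt;/P&gt;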
&lt;P class="p2"&gt;&amp;nbsp;&lt;/P&gt;
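&lt;P class="p1"&gt;You can also confirm what a given endpoint expects by inspecting its configuration through the same deployments client. A small sketch, assuming the endpoint name used above:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;import mlflow.deployments

client = mlflow.deployments.get_deploy_client("databricks")

# Shows the endpoint's task type and served entities, which indicate the
# expected request format (e.g. an llm/v1/completions task takes a "prompt")
endpoint_info = client.get_endpoint(endpoint=LLAMAGUARD_ENDPOINT_NAME)
print(endpoint_info)&lt;/LI-CODE&gt;
&lt;P class="p2"&gt;&amp;nbsp;&lt;/P&gt;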
&lt;P class="p1"&gt;In the updated query_llamaguard function, the payload for the inputs parameter is simplified to a single "prompt" field carrying the constructed query, which matches the format the Llama Guard serving endpoint expects.&lt;/P&gt;</description>
      <pubDate>Thu, 02 Jan 2025 12:53:17 GMT</pubDate>
      <guid>https://community.databricks.com/t5/generative-ai/error-calling-llama-guard-model-from-databricks-marketplace/m-p/103913#M680</guid>
      <dc:creator>Alberto_Umana</dc:creator>
      <dc:date>2025-01-02T12:53:17Z</dc:date>
    </item>
  </channel>
</rss>

