To dynamically select a prompt template in Databricks based on the input prompt received through a legacy model serving endpoint, you can implement a Python function that maps incoming prompts to specific templates. This often involves using conditional logic or a registry of templates, and is best integrated with your endpoint scoring logic. Here is a sample code structure and process that can help achieve this:
Sample Code Using Python
# Define your prompt templates
prompt_templates = {
    "classification": "Classify the following input: {{input}}",
    "summarization": "Summarize this text: {{input}}",
    "default": "Respond to the user's request: {{input}}",
}

def select_template(input_prompt):
    """
    Selects a prompt template based on keywords in the input prompt.

    Args:
        input_prompt (str): The user's request or query.

    Returns:
        str: The selected prompt template with the input inserted.
    """
    normalized = input_prompt.lower()
    if "classify" in normalized:
        template = prompt_templates["classification"]
    elif "summarize" in normalized:
        template = prompt_templates["summarization"]
    else:
        template = prompt_templates["default"]
    return template.replace("{{input}}", input_prompt)

# Example usage for an incoming request
incoming_prompt = "Please classify this sentence."
selected_prompt = select_template(incoming_prompt)
# Send selected_prompt to your model via the legacy serving endpoint
This logic can be extended with more sophisticated routing, such as matching on regular expressions, using a dictionary of patterns, or integrating with a registry service like the MLflow Prompt Registry.
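The keyword checks above can be generalized into a small pattern registry. Below is a minimal sketch of regex-based routing; the rule names, patterns, and templates are illustrative, not an exhaustive or official scheme:

```python
import re

# Ordered registry of (compiled pattern, template) pairs; first match wins.
# Patterns and templates here are illustrative examples.
routing_rules = [
    (re.compile(r"\b(classify|categori[sz]e|label)\b", re.IGNORECASE),
     "Classify the following input: {{input}}"),
    (re.compile(r"\b(summari[sz]e|tl;?dr|condense)\b", re.IGNORECASE),
     "Summarize this text: {{input}}"),
]

DEFAULT_TEMPLATE = "Respond to the user's request: {{input}}"

def route_prompt(input_prompt):
    """Render the template of the first matching rule, else the default."""
    for pattern, template in routing_rules:
        if pattern.search(input_prompt):
            return template.replace("{{input}}", input_prompt)
    return DEFAULT_TEMPLATE.replace("{{input}}", input_prompt)
```

Because the rules are an ordered list rather than exact keyword lookups, synonyms ("categorize", "label") route to the same template, and new tasks can be added without touching the selection logic.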
Process Overview
- Store Prompt Templates: define your templates in code, in a config file, or in the MLflow Prompt Registry for better manageability.
- Detect Intent/Keyword: analyze the incoming prompt for specific keywords or intents.
- Select Template: map the detected intent or keyword to the right template.
- Render Final Prompt: insert variables or structure the prompt appropriately.
- Forward to Endpoint: send the formatted prompt to your model endpoint for inference.
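Putting these steps together, the selection and rendering logic can be wrapped into one function that builds the request payload for the serving endpoint. This is a sketch only: the payload shape, endpoint URL, and token below are placeholder assumptions that must be matched to your endpoint's actual input signature:

```python
def build_request(input_prompt, select_template):
    """Render the final prompt and wrap it in a payload for the legacy
    serving endpoint. The "dataframe_records" shape is an assumption;
    adjust the keys to whatever your endpoint actually expects."""
    rendered = select_template(input_prompt)
    return {"dataframe_records": [{"prompt": rendered}]}

# Forwarding is then an authenticated POST, e.g. with `requests`
# (workspace URL, endpoint name, and token are placeholders):
#
# import requests
# resp = requests.post(
#     "https://<workspace-host>/serving-endpoints/<endpoint-name>/invocations",
#     headers={"Authorization": f"Bearer {token}"},
#     json=build_request(user_prompt, select_template),
# )
```

Keeping payload construction separate from the HTTP call makes the routing logic easy to unit-test without hitting the endpoint.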
This approach allows your Databricks model endpoint to handle a variety of tasks by intelligently selecting how to present user queries to the underlying model.
MLflow/Databricks Integration
You can also leverage the MLflow Prompt Registry and its Python SDK to store, retrieve, and version prompt templates programmatically, or register prompts in a Unity Catalog schema for more robust governance and management.
This modular process will help your Databricks model serving endpoint remain flexible and responsive to a range of user inputs.