Model serving with Serverless Real-Time Inference -
How can I call the endpoint with a JSON file of raw text that needs to be transformed before prediction?
02-09-2023 09:30 AM
Hi!
I want to call the generated endpoint with a JSON file containing raw texts. Could the endpoint take the raw texts, transform them into vectors, and then output the prediction?
Is there a way to do this?
Thanks in advance!
Labels:
- Jsonfile
- Serverless
- Serverless Real
1 REPLY
02-12-2023 09:21 PM
Hi, the updated document is: https://docs.databricks.com/machine-learning/model-inference/serverless/serverless-real-time-inferen...
(As noted in that document:
- This documentation has been retired and might not be updated. The products, services, or technologies mentioned in this content are no longer supported.
- The guidance in this article is for a previous preview version of the Serverless Real-Time Inference functionality. Databricks recommends you migrate your model serving workflows to the refreshed preview functionality. See Model serving with Serverless Real-Time Inference.)
To create and manage an endpoint, please follow: https://docs.databricks.com/machine-learning/model-inference/serverless/create-manage-serverless-end...
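Regarding the original question, one common pattern (not specific to the docs above) is to log a custom MLflow pyfunc model that bundles the text-to-vector transformation together with the classifier, so the served endpoint can accept raw text directly. Below is a minimal sketch assuming a scikit-learn TfidfVectorizer and LogisticRegression; the input column name "text", the toy data, and the registered model name "raw_text_classifier" are all illustrative:

```python
import mlflow
import mlflow.pyfunc
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data; replace with your own corpus and labels
texts = ["great product", "terrible service", "love it", "awful experience"]
labels = [1, 0, 1, 0]

vectorizer = TfidfVectorizer()
classifier = LogisticRegression().fit(vectorizer.fit_transform(texts), labels)

class TextClassifierWrapper(mlflow.pyfunc.PythonModel):
    """Bundles the vectorizer with the classifier so the endpoint accepts raw text."""

    def __init__(self, vectorizer, classifier):
        self.vectorizer = vectorizer
        self.classifier = classifier

    def predict(self, context, model_input):
        # Model Serving hands the request body to predict() as a pandas DataFrame;
        # "text" is an assumed input column name.
        vectors = self.vectorizer.transform(model_input["text"])
        return self.classifier.predict(vectors)

# Log and register the wrapped model; the serving endpoint then runs the
# transformation itself on each request.
with mlflow.start_run():
    mlflow.pyfunc.log_model(
        artifact_path="text_model",
        python_model=TextClassifierWrapper(vectorizer, classifier),
        registered_model_name="raw_text_classifier",  # illustrative name
    )
```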
Please let us know if this helps.
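Once the endpoint is up, you can send the raw texts as a JSON payload. A rough sketch of querying it follows, assuming the refreshed Model Serving REST API (which accepts a "dataframe_records" JSON format); the workspace URL, endpoint name, and token are placeholders you would substitute with your own values:

```python
import requests

# Placeholders: substitute your workspace URL, endpoint name, and access token
DATABRICKS_URL = "https://<workspace-url>"
ENDPOINT_NAME = "raw_text_classifier"
TOKEN = "<personal-access-token>"

# Each record becomes one row of the DataFrame the pyfunc model receives;
# the "text" key must match the column name the wrapper expects.
payload = {
    "dataframe_records": [
        {"text": "this product works well"},
        {"text": "very disappointing"},
    ]
}

response = requests.post(
    f"{DATABRICKS_URL}/serving-endpoints/{ENDPOINT_NAME}/invocations",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
)
print(response.json())
```

If you already have the texts in a JSON file, you can load it with json.load() and build the same payload from its contents before posting.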

