Model serving with Serverless Real-Time Inference -
How can I call the endpoint with a JSON file containing raw text that needs to be transformed before prediction?
I want to call the generated endpoint directly with a JSON file of raw texts. Can the endpoint accept the raw texts, transform them into vectors, and then output the prediction?
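A common way to get this behavior (a sketch, not Databricks' documented method; it assumes scikit-learn and a model logged to the endpoint, e.g. via MLflow) is to bundle the text vectorization inside the model itself. If the logged object is a pipeline whose first stage is the vectorizer, the endpoint can accept raw strings and the transformation happens server-side:

```python
# Sketch: embed text preprocessing in the model so the serving endpoint
# can accept raw text and return predictions (assumes scikit-learn).
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data; substitute your own labeled corpus.
texts = ["great product", "terrible service", "love it", "awful experience"]
labels = [1, 0, 1, 0]

# The pipeline vectorizes raw text and classifies in one object.
model = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("clf", LogisticRegression()),
])
model.fit(texts, labels)

# Because vectorization lives inside the pipeline, a logged copy of it
# (e.g. with mlflow.sklearn.log_model) can score raw strings directly,
# so the JSON payload sent to the endpoint only needs the raw texts.
preds = model.predict(["what a great experience", "truly awful"])
print(list(preds))
```

With this setup the request body only carries the raw strings; the exact JSON envelope the endpoint expects (e.g. a records- or split-oriented layout) depends on the serving version, so check the payload format documented for your preview.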
This documentation has been retired and might not be updated. The products, services, or technologies mentioned in this content are no longer supported.
The guidance in this article is for a previous preview version of the Serverless Real-Time Inference functionality. Databricks recommends you migrate your model serving workflows to the refreshed preview functionality. See Model serving with Serverless Real-Time Inference.