Model serving with Serverless Real-Time Inference
How can I call the generated endpoint with a JSON file containing raw text that still needs to be transformed? That is, can the endpoint accept the raw text, transform it into vectors, and then return a prediction?
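One common way to achieve this is to make the text transformation part of the model itself, so the serving endpoint receives raw strings and handles vectorization internally. The sketch below (an illustrative assumption, not Databricks-specific guidance) wraps a TF-IDF vectorizer and a classifier in a single scikit-learn Pipeline; logging that pipeline with MLflow would then produce an endpoint that accepts raw text in its JSON payload. The training data, column name, and payload shape shown in the comments are hypothetical.

```python
# Sketch: bundle text preprocessing and the classifier into one
# scikit-learn Pipeline so prediction takes raw strings directly.
# The tiny training set below is purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

texts = ["great product", "terrible service", "love it", "awful experience"]
labels = [1, 0, 1, 0]

pipeline = Pipeline([
    ("vectorizer", TfidfVectorizer()),   # raw text -> feature vectors
    ("classifier", LogisticRegression()),
])
pipeline.fit(texts, labels)

# Because the vectorizer lives inside the pipeline, callers pass raw text:
print(pipeline.predict(["what a great experience"]))

# Logging the whole pipeline (rather than just the classifier) means the
# serving endpoint can take raw text in its JSON request, e.g. a payload like
#   {"dataframe_split": {"columns": ["text"], "data": [["what a great experience"]]}}
# (hypothetical column name; check the endpoint's expected schema):
#
#   import mlflow
#   mlflow.sklearn.log_model(pipeline, "model")
```

If the preprocessing is more involved than a fitted transformer (e.g. custom cleaning code), a custom MLflow pyfunc model that runs the transformation inside `predict` is the usual alternative.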
This documentation has been retired and might not be updated. The products, services, or technologies mentioned in this content are no longer supported.
The guidance in this article is for a previous preview version of the Serverless Real-Time Inference functionality. Databricks recommends migrating your model serving workflows to the refreshed preview functionality. See Model serving with Serverless Real-Time Inference.