I’ve deployed OpenAI’s Whisper model as a serving endpoint in Databricks and I’m trying to transcribe an audio file.

```python
import whisper

model = whisper.load_model("small")
transcript = model.transcribe(
    word_timestamps=True,
    audio="path/to/audio...
```
Hi @lingareddy_Alva. I appreciate your response. I’ve looked for documentation but haven’t been able to find a solution. As you mentioned, I need to modify the serving endpoint’s inference logic to accept and handle this parameter. However, I don’t se...
Hello @lingareddy_Alva. Thank you for your response. I have deployed the Whisper model using the Serving UI; do I need to deploy it with MLflow instead, or serve the model through a REST API?
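One possible direction, sketched here as an assumption rather than a confirmed answer: if the Whisper model were re-logged as an MLflow pyfunc model whose signature declares a `word_timestamps` inference parameter, a query to the serving endpoint could carry that parameter in the request's `params` field. The helper below only builds such a payload; the field names and the `word_timestamps` handling on the server side are illustrative and depend on how the endpoint's inference logic is written.

```python
import base64
import json


def build_transcription_payload(audio_path, word_timestamps=True):
    """Build a JSON payload for a Databricks serving endpoint.

    Assumes (hypothetically) that the endpoint wraps Whisper as an
    MLflow pyfunc model that accepts base64-encoded audio in `inputs`
    and a `word_timestamps` inference parameter in `params`. These
    names are illustrative, not guaranteed by a UI-based deployment.
    """
    # Read the raw audio bytes and base64-encode them so they can
    # travel inside a JSON request body.
    with open(audio_path, "rb") as f:
        audio_b64 = base64.b64encode(f.read()).decode("utf-8")

    payload = {
        "inputs": [audio_b64],
        "params": {"word_timestamps": word_timestamps},
    }
    return json.dumps(payload)
```

The returned JSON string would then be POSTed to the endpoint's invocation URL with the usual bearer-token headers; the server-side pyfunc `predict` would need to decode the audio and forward `word_timestamps` to `model.transcribe`.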