Databricks Notebook as a Server?
3 weeks ago
Hello,
What I'm trying to do:
I have a piece of Python code (an AI application) that I want to deploy as a server, almost exactly as one would on, say, an EC2 instance. The only difference is that instead of a Flask API, I would use dbutils and the Databricks API to create a job with my code as a task, and then hit the notebook on a pre-attached cluster.
The challenge I'm facing:
Unlike a traditional VM, which would add maybe a few seconds of overhead at most, this adds a huge overhead of 30 seconds or more. The actual code needs only 2 seconds, but the job finishes execution in about 32 seconds. I'm fairly new to Databricks and Spark in general, so I wanted to know what is causing this under the hood. Also, if this approach is a bad idea, could someone explain why exactly, and is there a better way of doing this on Databricks?
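For context, a minimal sketch of the pattern described above: trigger the job with the Jobs API 2.1 `run-now` endpoint, then poll `runs/get` until the run reaches a terminal state. The workspace URL, token, and job id below are placeholders, not real values.

```python
import time
import requests  # third-party HTTP client: pip install requests

# Placeholders -- substitute your workspace URL, token, and job id.
HOST = "https://<your-workspace>.cloud.databricks.com"
TOKEN = "<personal-access-token>"
TERMINAL_STATES = {"TERMINATED", "SKIPPED", "INTERNAL_ERROR"}

def is_finished(run: dict) -> bool:
    """True once a Jobs API runs/get payload reports a terminal life-cycle state."""
    return run["state"]["life_cycle_state"] in TERMINAL_STATES

def trigger_and_wait(job_id: int, notebook_params: dict, poll_s: float = 1.0) -> dict:
    """Kick off a job run with run-now, then poll runs/get until it ends."""
    headers = {"Authorization": f"Bearer {TOKEN}"}
    run_id = requests.post(
        f"{HOST}/api/2.1/jobs/run-now",
        headers=headers,
        json={"job_id": job_id, "notebook_params": notebook_params},
        timeout=30,
    ).json()["run_id"]
    while True:
        run = requests.get(
            f"{HOST}/api/2.1/jobs/runs/get",
            headers=headers,
            params={"run_id": run_id},
            timeout=30,
        ).json()
        if is_finished(run):
            return run
        time.sleep(poll_s)
```

Every request through this path pays the job-scheduling cost discussed below, which is where the extra ~30 seconds comes from.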
- Labels:
  - automation
  - Cluster
  - JOBS
  - Scheduling
  - Server
  - Tasks
3 weeks ago
Hi @Krthk,
Thanks for your question. Have you considered using a Model Serving endpoint for your use case? https://docs.databricks.com/aws/en/machine-learning/model-serving/
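To give a sense of why a serving endpoint fits a request/response workload better than a job: the endpoint keeps a process warm and is invoked with a plain HTTPS POST to its `/serving-endpoints/<name>/invocations` path, so there is no per-request job to schedule. A minimal client sketch; the workspace URL, token, and endpoint name are placeholders:

```python
import requests  # third-party HTTP client: pip install requests

# Placeholders -- substitute your workspace URL, token, and endpoint name.
HOST = "https://<your-workspace>.cloud.databricks.com"
TOKEN = "<personal-access-token>"

def invocation_url(host: str, endpoint: str) -> str:
    """Build the invocations URL for a named Model Serving endpoint."""
    return f"{host}/serving-endpoints/{endpoint}/invocations"

def query_endpoint(endpoint: str, payload: dict, timeout: float = 30.0) -> dict:
    """POST a JSON payload to a Databricks Model Serving endpoint."""
    resp = requests.post(
        invocation_url(HOST, endpoint),
        headers={"Authorization": f"Bearer {TOKEN}"},
        json=payload,
        timeout=timeout,
    )
    resp.raise_for_status()
    return resp.json()
```

The expected shape of `payload` depends on how the model was logged (for example, MLflow pyfunc models accept a `{"dataframe_records": [...]}` body).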
Launching a Databricks job via the REST API and dbutils is a different paradigm from deploying on a traditional virtual machine (like EC2). The overhead you're observing likely comes from the following factors inherent to Databricks Jobs execution:
- Cluster Scheduling Latency: Even though you are using a "pre-attached cluster," there might still be overhead from scheduling the job on the desired cluster. This is less than the overhead of creating a cluster but still exists in comparison to a persistently running VM.
- Job Initialization: Databricks initializes and sets up the environment for the job task, which involves loading configurations, dependencies, and versions of Spark. This process is streamlined in Databricks but still adds time compared to executing directly on a running service like Flask on a VM.
- Communication Overhead: The use of the REST API to trigger the notebook introduces a small latency for the API call to be processed, validated, and queued within Databricks.
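One way to see where the 30 seconds actually go: the Jobs `runs/get` response reports `setup_duration`, `execution_duration`, and `cleanup_duration` in milliseconds, so you can split the wall-clock time into scheduling/initialization overhead versus your own code. A small sketch, assuming a `run` payload already fetched from the API (the values below are illustrative, not real measurements):

```python
def overhead_breakdown(run: dict) -> dict:
    """Split a Jobs API runs/get payload into setup / execution / cleanup seconds.

    Durations are reported in milliseconds; missing fields default to 0.
    """
    return {
        "setup_s": run.get("setup_duration", 0) / 1000.0,
        "execution_s": run.get("execution_duration", 0) / 1000.0,
        "cleanup_s": run.get("cleanup_duration", 0) / 1000.0,
    }

# Illustrative payload: 2 s of actual work inside a ~32 s run.
run = {"setup_duration": 28000, "execution_duration": 2000, "cleanup_duration": 2000}
print(overhead_breakdown(run))  # → {'setup_s': 28.0, 'execution_s': 2.0, 'cleanup_s': 2.0}
```

If most of the time lands in `setup_s`, that confirms the cost is in job scheduling and environment initialization rather than in your code, which is exactly the case a serving endpoint avoids.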

