Understanding compute requirements for deploying DeepSeek-R1-Distilled-Llama models on Databricks

kbmv
Contributor

Hi, I have read the blog post "Deploying DeepSeek-R1-Distilled-Llama Models on Databricks" at https://www.databricks.com/blog/deepseek-r1-databricks

I am new to working with custom models that are not available as built-in foundation models.

According to the blog, I need to download a DeepSeek distilled model from Hugging Face to a volume, register it in MLflow, and serve it with provisioned throughput. Can someone help me with the following questions?

  1. If I want to download the 70B model, the recommended compute is g6e.4xlarge, which has 128GB CPU memory and 48GB GPU memory. To clarify, do I need this specific compute only for MLflow registration of the model?

    Additionally, the blog states:
    "You donโ€™t need GPUs per se to deploy the model within the notebook, as long as the compute has sufficient memory capacity."

    Does this refer to serving the model only? Or can I complete both the MLflow registration and the serving deployment using a compute instance with 128GB CPU memory and no GPU?

  2. For provisioned throughput serving, when I select my registered model for an endpoint, what will my pricing per hour of usage be? Will DeepSeek-R1-Distilled-Llama-70B be priced the same as Llama 3.3 70B, and DeepSeek-R1-Distilled-Llama-8B the same as Llama 3.1 8B, as listed at the following link, or will the pricing be different? https://www.databricks.com/product/pricing/foundation-model-serving
  3. For custom RAG chains or agent models, I have seen the option to select a compute type such as CPU, GPU small, etc. Will that be the case for my distilled model, or will it work as per point 2? If the former, what would be the recommendation for the 70B and 8B variants? Attaching a screenshot (kbmv_0-1738846938736.png).

    Thanks

ACCEPTED SOLUTION

Isi
Contributor

Hi @kbmv,

Based on my experience deploying DeepSeek-R1-Distilled-Llama on Databricks, here are my answers to your questions:

  1. Compute requirements for MLflow registration (70B vs 8B model)
     • Llama-8B was successfully registered using a cluster with 192 GB memory, 40 cores, and a GPU.
     • Llama-70B failed to register on the same setup, indicating that it requires even more resources.
     • A CPU-only cluster with high memory was also tested, but it failed due to insufficient memory.
     • Conclusion: for me, the recommended g6e.4xlarge (128 GB CPU memory, 48 GB GPU memory) seems to be the minimum needed for Llama-70B registration. A rough sketch of the download-and-register flow is included right after this list.
  2. GPU requirement for deployment
     • The blog states that GPUs are not strictly required for deployment if enough memory is available.
     • In practice, however, serving the 70B model without GPUs is not feasible due to high memory consumption and inference latency.
     • For Llama-8B, serving without GPUs is possible, but performance may be impacted.
     • Conclusion: MLflow registration is best done with GPUs, and for efficient inference serving GPUs are strongly recommended, especially for 70B.
  3. Pricing for provisioned throughput serving
     • As of now, the DeepSeek-R1-Distilled models are not explicitly listed in the pricing documentation.
     • However, given that DeepSeek-R1-Distilled-70B is based on Llama 3.3 70B, pricing will likely be similar to Llama 3.3 70B. The 8B version may align with Llama 3.1 8B pricing, but confirmation from Databricks would be required.
  4. Compute selection for RAG chains and agent models
     • For Llama-70B, the best practice is to use a GPU-enabled cluster, as inference latency will be too high on CPU.
     • For Llama-8B, CPU may work for some use cases, but performance will degrade significantly.
     • The compute type selection (CPU, GPU small, etc.) applies to the DeepSeek-R1 models as well, and choosing GPU is recommended for real-time applications. A sketch of creating and querying a provisioned throughput endpoint is at the end of this post.
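
To make point 1 more concrete, here is a minimal sketch of the download-and-register flow the blog describes. It is not the exact code from the blog post: the catalog/schema/volume path, model names, and registered model name are placeholders, and the precise `log_model` arguments can vary with your MLflow and Transformers versions, so treat it as a starting point rather than a definitive implementation.

```python
# Minimal sketch (placeholders throughout): download the distilled model from
# Hugging Face into a Unity Catalog volume, then log and register it with MLflow
# so it can later be served with provisioned throughput.
import mlflow
import torch
from huggingface_hub import snapshot_download
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical volume path - adjust for your workspace
volume_path = "/Volumes/my_catalog/my_schema/models/deepseek-r1-distill-llama-8b"

# 1) Download the weights from Hugging Face into the volume
snapshot_download(
    repo_id="deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
    local_dir=volume_path,
)

# 2) Load the model and tokenizer. This is the memory-hungry step: the 8B model
#    fits on the cluster sizes discussed above, while the 70B variant needs much more.
tokenizer = AutoTokenizer.from_pretrained(volume_path)
model = AutoModelForCausalLM.from_pretrained(volume_path, torch_dtype=torch.bfloat16)

# 3) Log with the chat task signature and register the model in Unity Catalog
mlflow.set_registry_uri("databricks-uc")
with mlflow.start_run():
    mlflow.transformers.log_model(
        transformers_model={"model": model, "tokenizer": tokenizer},
        artifact_path="model",
        task="llm/v1/chat",
        registered_model_name="my_catalog.my_schema.deepseek_r1_distill_llama_8b",
    )
```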

    🙂
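
One more thing: once the model is registered, the provisioned throughput endpoint can be created either in the Serving UI (where you pick a throughput band rather than a CPU/GPU compute type) or programmatically. Below is a rough sketch using the MLflow deployments client; the endpoint name, model version, and throughput values are placeholders, and the valid throughput increments are model-specific, so check what the endpoint UI offers for your registered model.

```python
# Minimal sketch (placeholders throughout): create a provisioned throughput
# serving endpoint for the registered model, then query it once it is ready.
from mlflow.deployments import get_deploy_client

client = get_deploy_client("databricks")

# Create the endpoint; min/max_provisioned_throughput control the tokens/sec
# band you pay for, and the allowed increments depend on the model.
client.create_endpoint(
    name="deepseek-r1-distill-llama-8b",
    config={
        "served_entities": [
            {
                "entity_name": "my_catalog.my_schema.deepseek_r1_distill_llama_8b",
                "entity_version": "1",
                "min_provisioned_throughput": 0,
                "max_provisioned_throughput": 9500,
            }
        ]
    },
)

# Query the endpoint (e.g. from a RAG chain or agent) once it reports READY
response = client.predict(
    endpoint="deepseek-r1-distill-llama-8b",
    inputs={
        "messages": [{"role": "user", "content": "Summarize what provisioned throughput means."}],
        "max_tokens": 256,
    },
)
print(response)
```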


kbmv
Contributor

Thanks @Isi for the detailed explanation. Things are clear now.
