- 1321 Views
- 3 replies
- 3 kudos
I'm working on a project that uses PyTorch to build an object detection model from satellite imagery. My immediate objective is to run distributed training for this model with PySpark. While I have found several tutorials...
Latest Reply
Hi @Jaeseon Song Thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers ...
2 More Replies
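For readers landing on this thread: one common route on Databricks is PySpark's `TorchDistributor` (PySpark 3.4+ / Databricks ML Runtime 13.0+). The sketch below is not from the thread itself; the batch-size helper and worker count are illustrative placeholders, and the actual detection model and data loading are left as comments.

```python
def per_worker_batch_size(global_batch_size: int, num_workers: int) -> int:
    """Split a global batch size evenly across distributed workers."""
    if global_batch_size % num_workers != 0:
        raise ValueError("global batch size must divide evenly across workers")
    return global_batch_size // num_workers


def train_fn(global_batch_size: int = 64) -> None:
    # TorchDistributor pickles this function and runs one copy per worker,
    # so the heavy imports belong inside it.
    import torch
    import torch.distributed as dist

    dist.init_process_group(backend="nccl" if torch.cuda.is_available() else "gloo")
    batch_size = per_worker_batch_size(global_batch_size, dist.get_world_size())
    # ... build the detection model, wrap it in DistributedDataParallel,
    # and iterate over a DistributedSampler-backed DataLoader here ...
    dist.destroy_process_group()


import os

# Only attempt the distributed launch when actually running on Databricks.
if os.environ.get("DATABRICKS_RUNTIME_VERSION"):
    from pyspark.ml.torch.distributor import TorchDistributor

    TorchDistributor(num_processes=2, local_mode=False, use_gpu=True).run(train_fn, 64)
```

`num_processes` here means the total number of training processes across the cluster; with `use_gpu=True` each process is pinned to one GPU.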
by HT • New Contributor II
- 4289 Views
- 5 replies
- 2 kudos
Hi, I am new to LLMs and curious to try one out. I ran the following code from the Databricks website:

```python
import torch
from transformers import pipeline
instruct_pipeline = pipeline(model="databricks/dolly-v2-12b", torch_dtype=torch.bfloat16, tr...
```
Latest Reply
Just set the HF cache dir to a persistent path on /dbfs:

```python
import os
os.environ['TRANSFORMERS_CACHE'] = "/dbfs/..."
```
4 More Replies
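The detail the reply leaves implicit is ordering: the environment variable must be set before `transformers` is imported, or the default ephemeral cache on local disk is used and the 12B weights are re-downloaded after every cluster restart. A minimal sketch — the `/dbfs/tmp/hf_cache` path is a hypothetical example, substitute your own persistent location:

```python
import os

# Hypothetical persistent path on DBFS -- substitute your own.
# Anything under /dbfs/ survives cluster restarts.
os.environ["TRANSFORMERS_CACHE"] = "/dbfs/tmp/hf_cache"

# Import transformers only AFTER the env var is set; the cache location
# is resolved when the library decides where to download model weights.
# from transformers import pipeline
# instruct_pipeline = pipeline(model="databricks/dolly-v2-12b",
#                              torch_dtype=torch.bfloat16, ...)
```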
- 1020 Views
- 1 reply
- 2 kudos
Hello! I am trying to use PyTorch Lightning inside Databricks on a cluster with 2 GPUs. Whenever I train my Transformer model with 1 GPU in DP strategy everything works fine, but when I try to use both GPUs...
Latest Reply
Hi @Marco Capusso, I am facing a similar issue. Were you able to find a fix? It would be great if you could share some details.
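For anyone hitting the same wall: a common cause (assumed here, since the thread is truncated) is that Lightning's default `ddp` strategy launches subprocesses, which interactive notebook kernels such as Databricks cannot do — so single-GPU DP works while multi-GPU hangs or crashes. Recent `lightning` releases ship a notebook-safe `ddp_notebook` strategy. A sketch; the `pick_strategy` helper is hypothetical:

```python
def pick_strategy(num_devices: int, interactive: bool) -> str:
    """Hypothetical helper: choose a Lightning strategy for the environment."""
    if num_devices <= 1:
        return "auto"
    # Notebook kernels cannot relaunch the script the way "ddp" requires,
    # so fall back to the notebook-safe variant there.
    return "ddp_notebook" if interactive else "ddp"


# In the Databricks notebook:
#   import lightning.pytorch as pl
#   trainer = pl.Trainer(accelerator="gpu", devices=2,
#                        strategy=pick_strategy(2, interactive=True))
#   trainer.fit(model)
```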