Machine Learning
Dive into the world of machine learning on the Databricks platform. Explore discussions on algorithms, model training, deployment, and more. Connect with ML enthusiasts and experts.

Forum Posts

amal15
by New Contributor II
  • 256 Views
  • 2 replies
  • 0 kudos

error: not found: type XGBoostEstimator

error: not found: type XGBoostEstimator (Spark & Scala)

Latest Reply
shan_chandra
Esteemed Contributor
  • 0 kudos

@amal15 - can you please add the following to your import statements and see if it works: import ml.dmlc.xgboost4j.scala.spark.XGBoostEstimator

1 More Reply
Leo69
by New Contributor
  • 162 Views
  • 1 reply
  • 0 kudos

"error_code":"INVALID_PARAMETER_VALUE","message":"INVALID_PARAMETER_VALUE: Failed to generate access

Hello everyone, I have an Azure Databricks subscription with my company, and I want to use external LLMs in Databricks, like claude-3 or gemini. I managed to create a serving endpoint for Anthropic and I am able to use claude-3. But I want to use a Gem...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Leo69, It seems you’re encountering an issue while trying to use the Gemini model through Databricks. Let’s troubleshoot this together! First, let’s review some important information about external models in Databricks Model Serving. External...

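For context, the sketch below shows the general pattern for creating an external-model serving endpoint through the MLflow Deployments client. The endpoint name, served-entity name, model name, and secret scope/key are placeholders, and the Gemini provider block would need the provider name and config fields from the external-models documentation rather than the Anthropic ones shown here.

    import mlflow.deployments

    # Minimal sketch: create an external-model serving endpoint via the
    # MLflow Deployments client. Names and the secret reference below are
    # placeholders; for Gemini, swap the provider and its provider-specific
    # config block per the external-models documentation.
    client = mlflow.deployments.get_deploy_client("databricks")

    client.create_endpoint(
        name="anthropic-chat-endpoint",  # placeholder endpoint name
        config={
            "served_entities": [
                {
                    "name": "claude-chat",  # placeholder served-entity name
                    "external_model": {
                        "name": "claude-3-sonnet-20240229",  # placeholder model name
                        "provider": "anthropic",
                        "task": "llm/v1/chat",
                        "anthropic_config": {
                            "anthropic_api_key": "{{secrets/my_scope/anthropic_key}}"
                        },
                    },
                }
            ]
        },
    )
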
kapwilson
by New Contributor II
  • 445 Views
  • 1 reply
  • 1 kudos

Resolved! How to fine-tune OpenAI’s large language models (LLMs)

I am looking for more detailed resources comparing RAG to fine-tuning methods for processing text data with LLMs, in layman's terms. I have found one resource but am looking for a more detailed view: https://www.softwebsolutions.com/resour...

Latest Reply
Kaniz
Community Manager
  • 1 kudos

Hi @kapwilson, It seems you’re encountering an issue with using archive files in your Spark application submitted as a Jar task. Archive Files in Spark Applications: When submitting Spark applications, you can include additional files (such as Pyt...

Sam
by New Contributor III
  • 691 Views
  • 1 reply
  • 0 kudos

MLFlow connection pool warning

Hi, I have a transformer model from Hugging Face that I have logged to MLflow. When I load it using mlflow.transformers.load_model I receive a bunch of warnings: WARNING:urllib3.connectionpool:Connection pool is full, discarding connection: xxxx. Connection...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Sam, The warnings you’re encountering are related to urllib3, which is a Python library for handling HTTP connections. Let’s break down the issue and explore potential solutions: Connection Pool Warnings: The warning message indicates that th...

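As a side note, those connection-pool messages are warnings rather than errors; a minimal sketch (assuming they are benign in your case) is to lower the urllib3.connectionpool log level before loading the model. The model URI below is a placeholder.

    import logging

    import mlflow

    # The "Connection pool is full" messages come from urllib3's logger and
    # do not abort the load; silence that specific logger if they are noise.
    logging.getLogger("urllib3.connectionpool").setLevel(logging.ERROR)

    # Placeholder model URI; use the run or registry URI the model was logged to.
    model = mlflow.transformers.load_model("models:/my_transformer_model/1")
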
GKH
by New Contributor II
  • 1409 Views
  • 1 reply
  • 0 kudos

Errors using Dolly Deployed as a REST API

We have deployed Dolly (https://huggingface.co/databricks/dolly-v2-3b) as a REST API endpoint on our infrastructure. The notebook we used to do this is included in the text below my question. The Databricks infra used had the following config - (13.2...

Latest Reply
marcelo2108
Contributor
  • 0 kudos

I had a similar problem when I used HuggingFacePipeline(pipeline=generate_text) with langchain. It worked for me when I tried HuggingFaceHub instead. I used the same dolly-3b model.

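For reference, here is a minimal sketch of the swap described above (local HuggingFacePipeline to the hosted HuggingFaceHub wrapper). It assumes an older langchain release where HuggingFaceHub is importable from langchain.llms (newer versions moved it to langchain_community.llms) and a valid Hugging Face API token.

    from langchain.llms import HuggingFaceHub

    # Call the hosted dolly-v2-3b model through the Hugging Face Inference API
    # instead of wrapping a local transformers pipeline.
    llm = HuggingFaceHub(
        repo_id="databricks/dolly-v2-3b",
        huggingfacehub_api_token="hf_...",  # replace with your token
        model_kwargs={"temperature": 0.7, "max_new_tokens": 100},
    )

    print(llm("What is Databricks Model Serving?"))
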
Amoozegar
by New Contributor II
  • 391 Views
  • 1 reply
  • 0 kudos

Error in Tensorflow training job

I upgraded TensorFlow in a Databricks notebook using the %pip command. Now when running the training job, I get this error: "DNN library initialization failed."

Machine Learning
GPU enabled clusters
Tensorflow
Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Amoozegar,  Check TensorFlow Version: Ensure that the TensorFlow version you upgraded to is compatible with your existing code and dependencies. Sometimes, upgrading TensorFlow can lead to compatibility issues. You might want to verify if the sp...

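A quick sanity check along those lines (not a definitive fix): a pip-upgraded TensorFlow wheel can end up mismatched with the CUDA/cuDNN libraries provided by the GPU cluster's ML runtime, and the snippet below simply confirms what the notebook actually sees after the upgrade.

    import tensorflow as tf

    # Confirm the upgraded version, whether the wheel was built with CUDA
    # support, and whether the GPU is still visible after the %pip upgrade.
    print("TensorFlow version:", tf.__version__)
    print("Built with CUDA:", tf.test.is_built_with_cuda())
    print("Visible GPUs:", tf.config.list_physical_devices("GPU"))
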
yhyhy3
by New Contributor III
  • 482 Views
  • 1 reply
  • 0 kudos

Foundation Model APIs HIPAA compliance

I saw that Foundation Model API  is not HIPAA compliant. Is there a timeline in which we could expect it to be HIPAA compliant? I work for a healthcare company with a BAA with Databricks.

Latest Reply
saikumar246
New Contributor III
  • 0 kudos

Hi @yhyhy3, Foundation Model API's HIPAA certification: AWS ETA March 2024; Azure ETA Aug 2024. HIPAA certification is essentially having a third-party audit report for HIPAA. That is not the date that a HIPAA product offering may/will necessari...

YanivShani
by New Contributor
  • 627 Views
  • 2 replies
  • 0 kudos

inference table not working

Hi, I'm trying to enable inference tables for my llama_2_7b_hf serving endpoint, however I'm getting the following error: "Inference tables are currently not available with accelerated inference." Anyone have an idea on how to overcome this issue? C...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers your question? This...

1 More Reply
BR_DatabricksAI
by Contributor
  • 867 Views
  • 2 replies
  • 1 kudos

Custom deployment of LLM model in Databricks

Can we deploy our own Custom LLM model in Databricks? If anyone has any material or link, please share with me. 

Latest Reply
Kaniz
Community Manager
  • 1 kudos

Hi @BR_DatabricksAI, Yes, you can deploy your own custom Large Language Model (LLM) in Databricks. Here are some key points: Databricks Model Serving: Databricks Model Serving supports the deployment of open-source or your own custom AI models o...

1 More Reply
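
As an illustration of the path the reply sketches (log a custom model with MLflow, register it, then point a Model Serving endpoint at the registered version from the Serving UI or REST API), here is a minimal, hedged sketch. The catalog, schema, and model names are placeholders and the wrapper does no real inference.

    import mlflow
    import mlflow.pyfunc

    class CustomLLMWrapper(mlflow.pyfunc.PythonModel):
        """Placeholder pyfunc wrapper around a custom LLM."""

        def load_context(self, context):
            # Load weights/tokenizer from context.artifacts here.
            pass

        def predict(self, context, model_input):
            # Call the real model here; this sketch just echoes the input.
            return model_input

    # Register in Unity Catalog so Model Serving can reference the version.
    mlflow.set_registry_uri("databricks-uc")

    with mlflow.start_run():
        mlflow.pyfunc.log_model(
            artifact_path="custom_llm",
            python_model=CustomLLMWrapper(),
            registered_model_name="main.default.my_custom_llm",  # placeholder
        )
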
m12
by New Contributor II
  • 4286 Views
  • 3 replies
  • 2 kudos

Resolved! Enabling vector search in the workspace

Hi, I'm testing out the LLM/RAG Databricks demo here: https://notebooks.databricks.com/demos/llm-rag-chatbot/index.html?_gl=1*1nj8hq2*_gcl_au*MTcxOTY0MDY4LjE2OTQ2MzgwNDU.# As part of the demo, I'm trying to create a vector search with the line below. vsc....

Latest Reply
Kumaran
Valued Contributor III
  • 2 kudos

Hi @m12, Thank you for posting your question in the Databricks community. The vector search feature is currently undergoing a private preview. If you wish to participate, kindly complete the form provided below for onboarding. https://docs.google.com...

2 More Replies
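
Once vector search is enabled for the workspace, the demo's vsc object boils down to something like the sketch below; this assumes the databricks-vectorsearch package is installed and uses a placeholder endpoint name.

    # %pip install databricks-vectorsearch
    from databricks.vector_search.client import VectorSearchClient

    vsc = VectorSearchClient()

    # Create a vector search endpoint that indexes can later be attached to.
    vsc.create_endpoint(
        name="dbdemos_vs_endpoint",  # placeholder endpoint name
        endpoint_type="STANDARD",
    )
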
Rajaniesh
by New Contributor III
  • 1261 Views
  • 2 replies
  • 3 kudos

Databricks assistant not enabling

Hi, I have gone through the Databricks Assistant article by Databricks (https://docs.databricks.com/notebooks/notebook-assistant-faq.html). It clearly states that: Q: How do I enable Databricks Assistant? An account administrator must enable Databricks Assis...

Latest Reply
Kumaran
Valued Contributor III
  • 3 kudos

Hi @Rajaniesh, Databricks Assistant is now live. Please check the blog below for more details: More_details

1 More Reply
phdykd
by New Contributor
  • 3380 Views
  • 1 reply
  • 0 kudos

Cannot re-initialize CUDA in forked subprocess.

This is the error I am getting: "RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method". I am using a 13.0 nc12s_v3 cluster. I used this one: "import torch.multiprocessing as...

Latest Reply
Kumaran
Valued Contributor III
  • 0 kudos

Hi @phdykd, Thank you for posting your question in the Databricks community. One approach is to include the start_method="fork" parameter in the spawn function call as follows: mp.spawn(*prev_args, start_method="fork"). Although this will work, it migh...

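For completeness, the error message itself points at the 'spawn' start method; a minimal sketch of launching workers that way with torch.multiprocessing (the train function and process count are placeholders) looks like this:

    import torch
    import torch.multiprocessing as mp

    def train(rank):
        # Each spawned worker initializes CUDA in a fresh process, avoiding
        # the "Cannot re-initialize CUDA in forked subprocess" error.
        device = torch.device(f"cuda:{rank}" if torch.cuda.is_available() else "cpu")
        print(f"worker {rank} running on {device}")

    if __name__ == "__main__":
        # mp.spawn defaults to start_method="spawn".
        mp.spawn(train, nprocs=2)
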
Icen
by New Contributor
  • 360 Views
  • 0 replies
  • 0 kudos

Data+AI summit Expo

It's a great experience here to learn all the fast moving pieces on both open/close source tools to speed up LLM usage in industry. Out of curiosity, any company already started with LLM agent with success? 
