- 9 Views
- 0 replies
- 0 kudos
While registering a model I am getting an AssertionError. The code fails when run as a workflow, but runs fine when the notebook is executed individually. Below is the code: fe = FeatureEngineeringClient() ...
- 262 Views
- 2 replies
- 1 kudos
Hello! I have code that uses an API supplied in the enerbitdso package (repository: https://pypi.org/project/enerbitdso/). I adapted the code to run on Azure Databricks in Python, but although there is a connection with the API, it does ...
Latest Reply
The owner of the package updated it to accept the timeout as a parameter of up to 20 seconds, and a dependent package was updated in Databricks; with the above, the problem was solved.
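To illustrate the shape of that fix: the sketch below shows timeout-limited calling in plain Python. The enerbitdso API itself is not shown; the wrapper name and the stub callable are purely illustrative, the point is simply that the request timeout is now a configurable parameter (up to 20 seconds here).

```python
from concurrent.futures import ThreadPoolExecutor

def call_with_timeout(fetch, timeout_seconds=20):
    """Run `fetch` (a zero-argument callable that hits the API) and
    raise concurrent.futures.TimeoutError if it exceeds the limit."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(fetch).result(timeout=timeout_seconds)

# Illustrative stub standing in for the real API call:
result = call_with_timeout(lambda: {"usages": []}, timeout_seconds=20)
```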
by re • New Contributor II
- 150 Views
- 2 replies
- 0 kudos
When implementing managed Vector Search, what is the preferred way to implement row-based access control? I see that you can use the filter API during a query, so simple filters using a certain column may work, but what if all the security informa...
Latest Reply
Thanks AI for summarizing my question. However, you did not actually answer it.
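In the absence of a full answer, here is a hedged sketch of the simple-filter approach the question mentions, simulated in plain Python rather than against the real Vector Search API (the column and group names are hypothetical): each query carries the caller's group memberships, and a row survives only if its access-control column matches.

```python
# Plain-Python simulation of row-level filtering of search results.
# In managed Vector Search this corresponds to passing a filter on a
# security column at query time; names here are illustrative.
rows = [
    {"id": 1, "text": "public doc",  "allowed_group": "everyone"},
    {"id": 2, "text": "finance doc", "allowed_group": "finance"},
    {"id": 3, "text": "hr doc",      "allowed_group": "hr"},
]

def visible_rows(rows, user_groups):
    """Keep only rows whose access-control column matches one of the
    caller's groups -- the row-level check a query filter would apply."""
    return [r for r in rows if r["allowed_group"] in user_groups]

print([r["id"] for r in visible_rows(rows, {"everyone", "finance"})])  # [1, 2]
```

If the security information cannot be reduced to a column per row, this simple-filter pattern breaks down, which is the open part of the question.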
by Lcsp • New Contributor
- 302 Views
- 1 replies
- 0 kudos
I am getting this error when trying to set up the get-started-with-databricks-for-machine-learning lab. Unity Catalog is enabled. Validating the locally installed datasets: | listing local files...(0 seconds) | validation completed...(0 seconds total) C...
Latest Reply
PL_db • New Contributor III
It looks like you don't have the CREATE CATALOG privilege on the metastore you're trying to create the catalog in:
Privilege types by securable object in Unity Catalog
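If that is the case, a metastore admin can grant the missing privilege; a hedged SQL sketch (the principal is illustrative, replace it with your own user or group):

```sql
-- Run as a metastore admin; replace the principal as appropriate.
GRANT CREATE CATALOG ON METASTORE TO `user@example.com`;
```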
- 143 Views
- 0 replies
- 0 kudos
Hello, I'm having problems trying to run my retraining notebook for a spaCy model. The notebook creates a shell file with the following lines of code: cmd = f'''
awk '{{sub("source = ","source = /dbfs/FileStore/{dbfs_folder}/textcat/categories...
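For anyone reading along: the substitution the truncated awk command performs can also be done directly in Python, avoiding the generated shell file altogether. A minimal sketch, assuming the goal is to rewrite a config line so a relative source path points at DBFS (the folder name and config text are illustrative stand-ins, following the pattern in the question):

```python
# Rewrite a config line so a relative source path points at DBFS.
# The folder name and config contents are illustrative.
dbfs_folder = "my_project"

config_text = "source = textcat_model\n"
patched = config_text.replace(
    "source = ",
    f"source = /dbfs/FileStore/{dbfs_folder}/textcat/",
)
print(patched)  # source = /dbfs/FileStore/my_project/textcat/textcat_model
```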
- 105 Views
- 1 replies
- 0 kudos
Hello Databricks Community, I am currently facing a challenge in configuring a cluster for training machine learning models on a dataset consisting of approximately a billion rows and 40 features. Given the volume of data, I want to ensure that the cl...
Latest Reply
Hi @moh3th1 ,
Machine Selection:
- Memory (RAM): Having sufficient memory is essential for large datasets. Ensure that your machine type has enough RAM to accommodate your data.
- CPU: CPU power impacts data processing speed. Consider CPUs with multiple...
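As a rough starting point for the RAM question, a back-of-the-envelope sketch for the dataset size described above, assuming dense float64 features (real footprints vary with dtypes, caching, and Spark overhead, and the data would be spread across executors rather than held on one machine):

```python
rows = 1_000_000_000      # ~a billion rows
features = 40
bytes_per_value = 8       # float64

total_bytes = rows * features * bytes_per_value
print(total_bytes / 1e9)  # 320.0 -> roughly 320 GB of raw feature data
```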
- 127767 Views
- 60 replies
- 3 kudos
Latest Reply
Hey, I have been logged out and even the password reset email is not coming. How much time does it take to resolve? My account is ak.email86@gmail.com
- 278 Views
- 4 replies
- 0 kudos
I am trying to serve a PySpark model using an endpoint. I was able to load and register the model normally. I could also load that model and perform inference, but while serving the model I am getting the following error: [94fffqts54] ERROR StatusLog...
Latest Reply
Hi @Shreyash, It looks like your code is encountering a java.lang.ClassNotFoundException for the com.johnsnowlabs.nlp.DocumentAssembler class while serving your PySpark model. This error occurs when the required class is not found in the classpath.
...
by amal15 • New Contributor II
- 121 Views
- 1 replies
- 0 kudos
XGBoostEstimator is not a member of package ml.dmlc.xgboost4j.scala.spark. How can I resolve this error?
Latest Reply
Hi @amal15, The error message you’re encountering, “XGBoostEstimator is not a member of package ml.dmlc.xgboost4j.scala.spark,” indicates that the XGBoostEstimator class is not being recognized within the specified package.
Check Dependencie...
- 305 Views
- 1 replies
- 0 kudos
Hello, I am training a SparkXGBRegressor model. It runs without errors when the complexity is low; however, when I increase the max_depth and/or num_parallel_tree parameters, I get an error. I checked the cluster metrics during training and it doesn't l...
Latest Reply
Hi @e6exghu8,
- Ensure that your cluster has sufficient memory to handle the increased complexity (higher max_depth and num_parallel_tree).
- Check the memory configuration for your Spark executors. You might need to allocate more memory to each executor...
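To see why those two parameters bite so quickly, here is a hedged back-of-the-envelope sketch: the node count of a binary tree is bounded by 2^(max_depth+1) - 1, and num_parallel_tree multiplies the number of trees built per boosting round (actual tree sizes depend on pruning and the data, so this is an upper bound, not a prediction):

```python
def max_nodes_per_tree(max_depth):
    """Upper bound on nodes in a binary tree of the given depth."""
    return 2 ** (max_depth + 1) - 1

# Each extra level roughly doubles the bound; num_parallel_tree then
# multiplies the number of such trees kept in memory per round.
for depth in (6, 10, 14):
    print(depth, max_nodes_per_tree(depth))
```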
- 3144 Views
- 3 replies
- 2 kudos
I'm trying to delete rows from a table that have the same date or id as records in another table. I'm using the query below and get the error 'Multi-column In predicates are not supported in the DELETE condition'. delete from cost_model.cm_dispatch_consol...
Latest Reply
I had the same issue. Please check the value returned by the subquery; there must be something wrong with it.
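Since multi-column IN predicates are not supported in a DELETE condition, one common workaround is to rewrite the condition with EXISTS; a hedged sketch (table and column names are illustrative, not the actual ones from the question):

```sql
-- Delete rows that share a date or an id with the other table.
DELETE FROM my_schema.target_table AS t
WHERE EXISTS (
  SELECT 1
  FROM my_schema.source_table AS s
  WHERE s.dispatch_date = t.dispatch_date
     OR s.id = t.id
);
```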
by AChang • New Contributor III
- 1895 Views
- 2 replies
- 1 kudos
I am following along with the notebook from this article. I am attempting to fine-tune the model with a single node and multiple GPUs, so I run everything up to the "Run Local Training" section, but from there I skip to "Run distributed traini...
Latest Reply
Hi AChang, did you eventually resolve the error? I'm also having the same error.
by amal15 • New Contributor II
- 420 Views
- 2 replies
- 1 kudos
How can I import: import com.microsoft.ml.spark.{LightGBMClassifier, LightGBMClassificationModel} and import ml.dmlc.xgboost4j.scala.spark.{XGBoostEstimator, XGBoostClassificationModel} in a Spark & Scala project in Databricks?
Latest Reply
XGBoostEstimator is not a member of package ml.dmlc.xgboost4j.scala.spark. How can I resolve this error? With Maven: ml.dmlc:xgboost4j-spark_2.12:2.0.3
- 249 Views
- 0 replies
- 0 kudos
I have a naive Bayes ML model that takes call attributes and predicts whether the caller is going to abandon the call while on hold waiting to speak to an agent. The model lives in Databricks MLflow, and I have it registered. What I need to do is ex...
by amal15 • New Contributor II
- 90 Views
- 0 replies
- 0 kudos
error: not found: type XGBoostEstimator (Spark & Scala)