
TypeError: 'JavaPackage' object is not callable

Anil_M
New Contributor II

Hi Team,

I am facing the above error while trying to generate BERT embeddings. I specify the model path, and the error is raised while downloading the model.

The Spark version is 3.3.0.

Can anyone help me with this?

8 REPLIES

Anil_M
New Contributor II

I am using AWS databricks and pyspark

feiyun0112
Contributor III

Please share the error details.

I think you need to add a Maven package; see https://community.databricks.com/t5/machine-learning/synapse-ml-typeerror-javapackage-object-is-not-...
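For reference, on Databricks a Maven package is usually attached through the cluster's Libraries UI (Libraries > Install new > Maven). When building a session yourself, the equivalent is the `spark.jars.packages` config. A minimal sketch, assuming the library in question is Spark NLP; the coordinate shown is illustrative and must be matched to your Spark/Scala version:

```python
from pyspark.sql import SparkSession

# Illustrative coordinate; pick the Spark NLP release that matches your
# Spark (3.3.0) and Scala (2.12) versions. On Databricks, installing the
# package via the cluster's Libraries UI is usually preferable.
spark = (
    SparkSession.builder
    .appName("spark-nlp-demo")
    .config("spark.jars.packages", "com.johnsnowlabs.nlp:spark-nlp_2.12:5.1.4")
    .getOrCreate()
)
```

If the jar is not on the cluster's classpath when the session starts, the Python wrappers have no JVM class to bind to, which is exactly what the "'JavaPackage' object is not callable" error reports.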

Anil_M
New Contributor II

Hi @feiyun0112 

I am using AWS Databricks, but the thread above is related to Azure.
Correct me if my understanding is wrong.


@Anil_M wrote:

Hi @feiyun0112

I am using AWS Databricks, but the thread above is related to Azure.
Correct me if my understanding is wrong.


This question has nothing to do with the cloud platform; it is that your code needs the corresponding Maven package.

Please check the error details.

Anil_M
New Contributor II

@feiyun0112 wrote:

This question has nothing to do with the cloud platform; it is that your code needs the corresponding Maven package.

Please check the error details.


 


 

 

Hi @feiyun0112, please find the attached error details below and let me know the root cause of the issue.

Thank you!



Spark NLP - Installation

Kaniz
Community Manager

Hi @Anil_M, the error "TypeError: 'JavaPackage' object is not callable" usually arises when the Python wrapper cannot reach the underlying JVM class, most often because a required dependency is missing. Let's troubleshoot this together.

Firstly, check that all required dependencies are installed. BERT embeddings may need specific libraries or packages, so ensure they have been correctly installed on the cluster.

Secondly, verify that you have compatible versions of Spark and PySpark. As you mentioned, you are on Spark 3.3.0, which should be compatible with BERT embeddings.

 

To ensure successful model retrieval, double-check that your specified model path accurately leads to the stored BERT model. Verify that your Databricks cluster has proper access and authorization to the model files. Additionally, be mindful of library versions and their potential for compatibility conflicts. Validate that PySpark, BERT, and any relevant libraries are all compatible with one another. In the case of custom libraries, be certain that they are properly installed and accessible within your notebook.
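If the library in play is Spark NLP (an assumption based on the attachment title below), a minimal end-to-end sketch looks like the following. The pretrained model name is illustrative; the key point is that the session must be created with the Spark NLP jar on the classpath, e.g. via `sparknlp.start()` or a cluster-installed Maven library, or the Python wrappers resolve to an empty `JavaPackage`:

```python
import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, BertEmbeddings
from pyspark.ml import Pipeline

# sparknlp.start() creates a SparkSession with the matching Spark NLP jar
# attached; without the jar on the classpath, constructing an annotator
# fails with "TypeError: 'JavaPackage' object is not callable".
spark = sparknlp.start()

document = DocumentAssembler().setInputCol("text").setOutputCol("document")
tokenizer = Tokenizer().setInputCols(["document"]).setOutputCol("token")

# Model name is illustrative; any BERT model from the Spark NLP models hub
# (or a local model path) can be used here.
embeddings = (
    BertEmbeddings.pretrained("small_bert_L2_128_dim", "en")
    .setInputCols(["document", "token"])
    .setOutputCol("embeddings")
)

pipeline = Pipeline(stages=[document, tokenizer, embeddings])
df = spark.createDataFrame([("BERT embeddings on Databricks",)], ["text"])
result = pipeline.fit(df).transform(df)
```

This is a sketch under the assumptions above, not a definitive fix; the pretrained-model download also requires outbound network access from the cluster, which is worth verifying separately.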

 

When setting up your cluster, it's important to keep in mind that libraries installed within a notebook are not permanent and will need to be reinstalled at the start of each session or when the notebook is connected to a cluster. Before troubleshooting, ensure that the cluster has not been restarted or detached since installing the necessary libraries. 

 

To possibly solve this issue, users have shared their success with installing deequ-2.0.1-spark-3.2 on their cluster. It could be worth trying out this approach. Keep in mind that troubleshooting specific problems often requires delving into logs and carefully analyzing your code's context. Don't hesitate to provide more details or ask any additional questions. Let's work together to get to the bottom of this! 😊

 

Kaniz
Community Manager

Hey there! Thanks a bunch for being part of our awesome community! 🎉

We love having you around and appreciate all your questions. Take a moment to check out the responses; you'll find some great info. Your input is valuable, so pick the best solution for you. And remember, if you ever need more help, we're here for you!

Keep being awesome! 😊🚀

 
