Get Started Discussions
Start your journey with Databricks by joining discussions on getting started guides, tutorials, and introductory topics. Connect with beginners and experts alike to kickstart your Databricks experience.

Forum Posts

by Ramakrishnan83 (New Contributor III)
  • 15954 Views
  • 6 replies
  • 0 kudos

Renaming the database Name in Databricks

Team, initially our team created the databases with the environment name appended, e.g. cust_dev, cust_qa, cust_prod. I am looking to standardize on a consistent database name across environments. I want to rename to "cust". All of my tables are ...

Latest Reply
by Avvar2022 (Contributor)

You can also use "CASCADE" to drop the schema and its tables as well. It is recursive.

5 More Replies
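For reference, a minimal sketch of the create-clone-drop approach the replies converge on, including the CASCADE drop mentioned above. It assumes Delta tables (DEEP CLONE is Delta-only) and uses the cust/cust_dev names from the post:

```python
# Minimal sketch, assuming Delta tables and the schema names from the post.
# There is no direct RENAME for a schema in the Hive metastore, so the usual
# workaround is: create the new schema, clone each table into it, then drop
# the old schema with CASCADE (which recursively drops the schema and tables).
spark.sql("CREATE SCHEMA IF NOT EXISTS cust")

for t in spark.catalog.listTables("cust_dev"):
    # DEEP CLONE copies both the data and the metadata of a Delta table
    spark.sql(f"CREATE TABLE IF NOT EXISTS cust.{t.name} DEEP CLONE cust_dev.{t.name}")

# Verify the clones before this step -- CASCADE is destructive and recursive
spark.sql("DROP SCHEMA cust_dev CASCADE")
```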
by eheinlein (New Contributor)
  • 712 Views
  • 0 replies
  • 0 kudos

How to confirm a workspace ID via an API token?

Hello! We are integrating with Databricks and we get the API key, workspace ID, and host from our users in order to connect to Databricks. We need to validate the workspace ID because we do need it outside of the context of the API key (with webh...

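One possible approach (a sketch, not an official validation API): Databricks REST responses include an x-databricks-org-id header carrying the workspace ID, so any cheap authenticated call, such as the SCIM Me endpoint, can be used to cross-check the ID the user supplied. Verify that header against the current docs for your cloud before relying on it:

```python
import requests

def fetch_workspace_id(host, token):
    """Sketch of a hypothetical helper: make any authenticated REST call and
    read the workspace (org) ID from the x-databricks-org-id response header."""
    resp = requests.get(
        f"https://{host}/api/2.0/preview/scim/v2/Me",
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()  # a 401/403 here means the token itself is bad
    return resp.headers.get("x-databricks-org-id")

# Usage: compare against the workspace ID the user gave you
# (host, token, and ID values below are placeholders)
claimed_id = "1234567890123456"
if fetch_workspace_id("adb-123.4.azuredatabricks.net", "dapi-...") != claimed_id:
    raise ValueError("workspace ID does not match this token/host")
```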
by df_Jreco (New Contributor II)
  • 782 Views
  • 1 reply
  • 0 kudos

Custom Python package in Notebook task using bundle

Hi mates! In my company, we are moving our pipelines to Databricks bundles; our pipelines use a notebook that receives some parameters. This notebook uses a custom Python package to apply the business logic based on the parameters it receives. The thi...

Get Started Discussions
databricks-bundles
Latest Reply
by df_Jreco (New Contributor II)

Solved by understanding the databricks.yml configuration!

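For anyone landing here with the same question, a hedged databricks.yml fragment showing the usual shape of this pattern: build the custom package as a wheel artifact, attach it to the notebook task as a library, and pass the parameters via base_parameters. All names and paths are illustrative; check the current bundle schema:

```yaml
bundle:
  name: my_pipelines

artifacts:
  my_package:
    type: whl
    path: ./my_package        # folder containing setup.py / pyproject.toml

resources:
  jobs:
    business_logic_job:
      name: business_logic_job
      tasks:
        - task_key: run_notebook
          notebook_task:
            notebook_path: ./notebooks/entrypoint.py
            base_parameters:      # the parameters the notebook receives
              env: dev
          libraries:
            - whl: ./dist/*.whl   # install the built wheel on the cluster
```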
by esi (New Contributor)
  • 7821 Views
  • 2 replies
  • 0 kudos

numpy.ndarray size changed, may indicate binary incompatibility

Hi All, I have installed the following libraries on my cluster (11.3 LTS, which includes Apache Spark 3.3.0 and Scala 2.12): numpy==1.21.4, flair==0.12. On executing `from flair.models import TextClassifier`, I get the following error: "numpy.ndarray size chan...

Latest Reply
by sean_owen (Databricks Employee)

You have changed the numpy version, and presumably that is not compatible with other libraries in the runtime. If flair requires a later numpy, use a later DBR runtime for best results, which already ships later numpy versions.

1 More Reply
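To make that concrete, a hedged notebook sketch of the workaround if you must stay on DBR 11.3: the error typically means a compiled dependency was built against a different numpy C ABI, so reinstall the package after pinning numpy, then restart Python. Versions are the ones from the post:

```python
# Sketch for Databricks notebook cells (run the %pip lines in their own cell).
# --no-cache-dir forces pip to fetch/build wheels that match the pinned numpy
# instead of reusing incompatible cached ones.
%pip install numpy==1.21.4
%pip install --force-reinstall --no-cache-dir flair==0.12

# Restart the Python process so the newly installed binaries are loaded
dbutils.library.restartPython()
```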
by niruban (New Contributor II)
  • 984 Views
  • 1 reply
  • 0 kudos

Databricks Asset Bundle Behaviour for new workflows and existing workflows

Dear Community Members - I am trying to deploy a workflow using DAB. After deploying, if I update the same workflow with a different bundle name, it creates a new workflow instead of updating the existing workflow. Also, when I try to use sa...

Latest Reply
by niruban (New Contributor II)

@nicole_lu_PM: Do you have any suggestions or feedback on the above question? It would be really helpful if we could get some insights.

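For readers hitting the same behavior: this appears to be expected for Databricks Asset Bundles. Deployment state is keyed by the bundle name and target, so changing bundle.name makes the next deploy look like a brand-new bundle and creates new workflows instead of updating the old ones. A hedged fragment (host is a placeholder):

```yaml
bundle:
  name: my_pipeline   # keep this stable -- changing it creates new resources

targets:
  dev:
    workspace:
      host: https://adb-1234567890123456.7.azuredatabricks.net
```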
by karola61 (New Contributor II)
  • 1371 Views
  • 1 reply
  • 0 kudos

org.apache.spark.SparkException: Job aborted due to stage failure:

org.apache.spark.SparkException: Job aborted due to stage failure:

Latest Reply
by rajeshg (New Contributor II)

Along with "Job aborted due to stage failure:", if you also see "slave lost", then it is due to too little memory allocated to the executors (more cores per executor means more memory is required), or the other possibility is that you have used the maximum CPU available in the cluster and the d...

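A hedged illustration of the knobs this reply points at (values are examples, not recommendations). On Databricks these go into the cluster's Spark config at creation time; from a notebook you can only inspect them, since executor memory and cores are fixed once the JVM starts:

```python
# Cluster Spark config (set in the cluster UI, not at runtime) -- examples:
#   spark.executor.cores   2    # fewer cores per executor => more memory per task
#   spark.executor.memory  8g

# From a running notebook you can only inspect the effective settings:
print(spark.conf.get("spark.executor.memory", "not set"))
print(spark.conf.get("spark.executor.cores", "not set"))
```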
by Awoke101 (New Contributor III)
  • 631 Views
  • 0 replies
  • 0 kudos

UC_COMMAND_NOT_SUPPORTED.WITHOUT_RECOMMENDATION in shared access mode?

I'm using a shared access cluster and am getting this error while trying to upload to Qdrant.  #embeddings_df = embeddings_df.limit(5) options = { "qdrant_url": QDRANT_GRPC_URL, "api_key": QDRANT_API_KEY, "collection_name": QDRANT_COLLEC...

Get Started Discussions
qdrant
shared_acess
UC_COMMAND_NOT_SUPPORTED.WITHOUT_RECOMMENDATION
by ganemouni (New Contributor)
  • 1489 Views
  • 1 reply
  • 0 kudos

Facing issue with Databricks Context Limit Exceeded

We have a use case where there may be a chance of 200 jobs executing at once. But a few notebooks are failing with the error "run failed with error message Too many execution contexts are open right now. (Limit set currently to 150)." Can anyone help how...

Latest Reply
by Walter_C (Databricks Employee)

Hello, you can refer to the following KB article for information and best practices regarding the issue you are facing: https://kb.databricks.com/en_US/notebooks/too-many-execution-contexts-are-open-right-now
Best practices: use a job cluster instea...

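To illustrate the first best practice in that reply: the execution-context limit is per cluster, so pointing ~200 concurrent jobs at one shared all-purpose cluster exhausts it, while a job cluster gives each run its own contexts. A hedged job-definition fragment (DAB-style YAML; names and node types are illustrative):

```yaml
resources:
  jobs:
    my_job:
      name: my_job
      tasks:
        - task_key: main
          notebook_task:
            notebook_path: ./notebooks/etl.py
          job_cluster_key: per_run_cluster   # not a shared all-purpose cluster
      job_clusters:
        - job_cluster_key: per_run_cluster
          new_cluster:                       # created per run, own contexts
            spark_version: 14.3.x-scala2.12
            node_type_id: Standard_DS3_v2
            num_workers: 2
```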
