- 322 Views
- 1 reply
- 0 kudos
Error performing a merge inside a streaming foreachBatch using the command: microBatchDF._jdf.sparkSession().sql(self.merge_query)
I'm trying to perform a merge inside a streaming foreachBatch using the command: microBatchDF._jdf.sparkSession().sql(self.merge_query)
Hi @86conventional, For clusters with Databricks Runtime version 10.5 and above, you can access the local Spark session within the foreachBatch method. If you encounter any further issues, feel free to ask for more assistance! For additional referen...
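To make that concrete, here is a minimal sketch of the merge, assuming a recent runtime (Spark 3.3+/DBR 11+) where the micro-batch DataFrame exposes a public .sparkSession property; the target table, join key, and source stream are illustrative:

```python
# A minimal sketch: use the session bound to the micro-batch instead of
# the private microBatchDF._jdf.sparkSession() handle.
def upsert_to_delta(micro_batch_df, batch_id):
    # Expose the micro-batch as a temp view so the MERGE can reference it.
    micro_batch_df.createOrReplaceTempView("updates")
    micro_batch_df.sparkSession.sql("""
        MERGE INTO target AS t          -- "target": illustrative Delta table
        USING updates AS s
        ON t.id = s.id                  -- "id": illustrative join key
        WHEN MATCHED THEN UPDATE SET *
        WHEN NOT MATCHED THEN INSERT *
    """)

(source_stream_df.writeStream           # source_stream_df: your streaming DataFrame
    .foreachBatch(upsert_to_delta)
    .outputMode("update")
    .start())
```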
- 290 Views
- 1 reply
- 0 kudos
Is there any specific error you are receiving when running the init script?
Is there any specific error you are receiving when running the init script? Does the run complete startup, or fail due to the init script?
Hi @86conventional, If the init script fails, check the script’s content for any syntax errors or missing dependencies. Verify that the script is accessible from the Databricks cluster. You can store the script in Azure Blob Storage or a DBFS locati...
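Before digging into cluster logs, it can help to confirm the script is actually reachable and well-formed from a notebook. A small sketch, assuming the script lives on DBFS (the path is hypothetical):

```python
# Sanity-check an init script from a notebook (path is hypothetical).
script_path = "dbfs:/init-scripts/install_deps.sh"

# Raises an exception if the file is missing or unreadable.
content = dbutils.fs.head(script_path, 10_000)
print(content[:500])

# CRLF line endings are a common cause of init scripts failing silently.
if "\r\n" in content:
    print("Warning: CRLF line endings detected; convert the script to LF.")
```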
- 760 Views
- 1 reply
- 0 kudos
Spot label in pool even though the configuration selected is all on-demand
Why is there a spot label in the pool even though the configuration selected is all on-demand? Can someone explain?
Hi @hkadhao, Let me explain. When configuring a pool in Databricks, you have the option to use either all spot instances or all on-demand instances. If you choose the “All Spot” option, the pool will launch clusters with spot instances for all nodes,...
- 2722 Views
- 4 replies
- 2 kudos
Resolved! Setting up Unity Catalog in Azure
Trying to create a metastore that will be connected to an external storage (ADLS) but we don't have the option to create a new metastore in 'Catalog' tab in the UI. Based on some research, we see that we'll have to go into "Manage Account" and then c...
I have been wrestling with this question for days now. I seem to be the only one with this question so I am sure I am doing something wrong. I am trying to create a UC metastore but there is not an option in "Catalog" to create a metastore. This s...
- 365 Views
- 1 reply
- 0 kudos
How to confirm a workspace ID via an API token?
Hello! We are integrating with Databricks and we get the API key, workspace ID, and host from our users in order to connect to Databricks. We need to validate the workspace ID because we do need it outside of the context of the API key (with webh...
Hi @eheinlein, You can obtain the workspace ID from within a Databricks Notebook by running the following command in a Python or Scala cell: spark.conf.get("spark.databricks.clusterUsageTags.clusterOwnerOrgId") This command will return the worksp...
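For validating the ID outside a notebook, one option is to compare the user-supplied value against what the REST API itself reports. A sketch, with the caveat that the x-databricks-org-id response header carrying the workspace (org) ID is an assumption to verify on your deployment:

```python
import requests

def workspace_id_matches(host: str, token: str, claimed_id: str) -> bool:
    # Call any cheap authenticated endpoint; the interesting part is the
    # response header, which (assumption) echoes the workspace/org ID.
    resp = requests.get(
        f"{host}/api/2.0/clusters/list",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.headers.get("x-databricks-org-id") == claimed_id

# Usage (illustrative values):
# workspace_id_matches("https://adb-123.4.azuredatabricks.net", api_token, "1234567890")
```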
- 462 Views
- 1 reply
- 0 kudos
Failed deploying bundle via gitlab - Request failed for POST
I'm encountering an issue in my .gitlab-ci.yml file when attempting to execute databricks bundle deploy -t prod. The error message I receive is: Error: Request failed for POST <path>/state/deploy.lock. Interestingly, when I run the same command locally...
Hi @samsonite, The error message you’re encountering seems related to a lock request. Ensure that the credentials and permissions used for deploying to the production environment are correct. Differences in credentials between local and CI/CD enviro...
- 11832 Views
- 7 replies
- 0 kudos
Renaming a database in Databricks
Team, initially our team created the databases with the environment name appended, e.g. cust_dev, cust_qa, cust_prod. I am looking to standardize on a consistent database name across environments; I want to rename them to "cust". All of my tables are ...
You can also use “CASCADE” to drop the schema and its tables as well; it is recursive.
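Since there is no direct RENAME for a schema, the usual pattern is to create the new schema, recreate the tables in it, and then drop the old schema with CASCADE. A sketch assuming Delta tables and the cust_dev → cust example from the question:

```python
# Create the standardized schema, clone each table into it, then drop the
# old schema recursively. DEEP CLONE copies Delta data and metadata;
# adjust for non-Delta tables.
spark.sql("CREATE SCHEMA IF NOT EXISTS cust")

for row in spark.sql("SHOW TABLES IN cust_dev").collect():
    t = row.tableName
    spark.sql(f"CREATE TABLE IF NOT EXISTS cust.{t} DEEP CLONE cust_dev.{t}")

# CASCADE removes the schema and everything in it; verify the clones first.
spark.sql("DROP SCHEMA cust_dev CASCADE")
```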
- 413 Views
- 1 reply
- 0 kudos
Custom Python package in a Notebook task using a bundle
Hi mates! In my company we are moving our pipelines to Databricks bundles; our pipelines use a notebook that receives some parameters. This notebook uses a custom Python package to apply the business logic based on the parameters it receives. The thi...
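One pattern that works with bundles is to sync the package source along with the notebook and extend sys.path inside the notebook before importing. A sketch, where the package name, relative path, and parameter are all hypothetical:

```python
import os
import sys

# The bundle sync places the repo contents alongside the deployed notebook,
# so the package can be imported via a relative path (path is hypothetical).
sys.path.append(os.path.abspath("../src"))

from my_business_logic import run_pipeline  # hypothetical custom package

# Notebook task parameters arrive as widgets.
env = dbutils.widgets.get("environment")    # hypothetical parameter
run_pipeline(env)
```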
- 431 Views
- 1 reply
- 0 kudos
Data ingestion into DLT from Azure Event Hub for batch processing
I am building my first DLT pipeline and I want to ingest data from Azure Event Hub for batch processing. But I can only find documentation for streaming using Kafka. Can we do batch processing with DLT and Azure Event Hub?
Hi @pranay, Since Databricks Notebooks allow you to run Python code, you can leverage Python libraries to manipulate Excel files. Instead of using pywin32, consider using libraries like pandas or openpyxl to read, modify, and save Excel files. You can...
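On the original Event Hub question: Azure Event Hubs exposes a Kafka-compatible endpoint, so a DLT table can read it with the Kafka source, and running the pipeline in triggered mode processes whatever is available and then stops, which gives batch-style behaviour. A sketch with placeholder namespace, hub, and secret names:

```python
import dlt
from pyspark.sql.functions import col

EH_NAMESPACE = "my-namespace"   # placeholder
EH_NAME = "my-eventhub"         # placeholder
EH_CONN = dbutils.secrets.get("my-scope", "eventhub-connection-string")

@dlt.table(name="eventhub_raw")
def eventhub_raw():
    return (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers",
                f"{EH_NAMESPACE}.servicebus.windows.net:9093")
        .option("subscribe", EH_NAME)
        .option("kafka.security.protocol", "SASL_SSL")
        .option("kafka.sasl.mechanism", "PLAIN")
        .option("kafka.sasl.jaas.config",
                "kafkashaded.org.apache.kafka.common.security.plain.PlainLoginModule "
                f'required username="$ConnectionString" password="{EH_CONN}";')
        .load()
        .select(col("value").cast("string").alias("body"))
    )
```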
- 382 Views
- 1 reply
- 0 kudos
UC_COMMAND_NOT_SUPPORTED.WITHOUT_RECOMMENDATION in shared access mode?
I'm using a shared access cluster and am getting this error while trying to upload to Qdrant. #embeddings_df = embeddings_df.limit(5) options = { "qdrant_url": QDRANT_GRPC_URL, "api_key": QDRANT_API_KEY, "collection_name": QDRANT_COLLEC...
Hi @Awoke101, Shared access clusters in Databricks have certain restrictions due to Unity Catalog limitations. I recommend trying a single-user Unity Catalog cluster or configuring data persistence outside the container. Let me know if you need fur...
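If the single-user cluster route works, the write itself can reuse the options from the question; the connector's format string and the extra option names below are assumptions to check against the Qdrant Spark connector docs for your version:

```python
# Placeholders standing in for the values defined earlier in the question.
QDRANT_GRPC_URL = "https://example.qdrant.io:6334"
QDRANT_API_KEY = "..."
QDRANT_COLLECTION_NAME = "embeddings"

options = {
    "qdrant_url": QDRANT_GRPC_URL,
    "api_key": QDRANT_API_KEY,
    "collection_name": QDRANT_COLLECTION_NAME,
    "embedding_field": "embedding",          # assumed option name
    "schema": embeddings_df.schema.json(),   # assumed: connector wants the schema
}

(embeddings_df.write                         # embeddings_df: from the question
    .format("io.qdrant.spark.Qdrant")        # assumed connector format string
    .options(**options)
    .mode("append")
    .save())
```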
- 315 Views
- 1 reply
- 0 kudos
How does Databricks differ from Snowflake with respect to AI tooling?
How does Databricks differ from Snowflake with respect to AI tooling?
Hi @kellenpink, Databricks and Snowflake are both powerful platforms, but they excel in different areas: Databricks shines in big data processing, machine learning, and AI workloads. It provides an integrated environment for data enginee...
- 297 Views
- 1 reply
- 0 kudos
Hi, how can one go about assessing the value created due to the implementation of DBRX?
Hi, how can one go about assessing the value created due to the implementation of DBRX?
Hi @kellenpink, Assessing the value created by implementing Databricks (DBRX) involves several considerations. Here are some steps you can take: Business Objectives and KPIs: Start by defining clear business objectives for using Databricks. What ...
- 4644 Views
- 2 replies
- 0 kudos
numpy.ndarray size changed, may indicate binary incompatibility
Hi All, I have installed the following libraries on my cluster (11.3 LTS, which includes Apache Spark 3.3.0 and Scala 2.12): numpy==1.21.4, flair==0.12. On executing `from flair.models import TextClassifier`, I get the following error: "numpy.ndarray size chan...
You have changed the numpy version, and presumably it is not compatible with other libraries in the runtime. If flair requires a later numpy, use a later DBR runtime for best results, since those already ship later numpy versions.
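If moving to a newer DBR is not an option, reinstalling a binary-compatible pair of versions in the notebook can also clear the error; the pinned numpy range below is an assumption to check against flair's dependency metadata:

```python
# Cell 1: reinstall numpy and flair together so pip resolves a compatible
# pair (the numpy range is an assumption; check flair 0.12's requirements).
%pip install "numpy>=1.22,<1.24" flair==0.12

# Cell 2: restart Python so the freshly installed numpy is actually loaded.
dbutils.library.restartPython()

# Cell 3: retry the import that previously failed.
from flair.models import TextClassifier
```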
- 506 Views
- 1 reply
- 0 kudos
Databricks Asset Bundle Behaviour for new workflows and existing workflows
Dear Community Members, I am trying to deploy a workflow using DAB. After deploying, if I update the same workflow with a different bundle name, it creates a new workflow instead of updating the existing one. Also, when I try to use sa...
@nicole_lu_PM: Do you have any suggestions or feedback on the above question? It would be really helpful if we could get some insights.
- 620 Views
- 1 reply
- 0 kudos
Databricks asset bundles dependencies
Is anyone aware of a way to include a requirements.txt within the job definition of a databricks asset bundle? Documentation mentions how to have dependencies in workspace files, or Unity Catalog volumes, but I wanted to ask if it is possible to decl...
Hi @pdemeulenaer, When working with Databricks Asset Bundles, you can specify library dependencies for local development in the requirements*.txt file at the root of the bundle project. However, job task library dependencies are declared in your bund...