- 4050 Views
- 3 replies
- 0 kudos
Unable to install pymqi in Azure Databricks
Hi, I am trying to install pymqi via the below command: pip install pymqi. However, I am getting the below error message: Python interpreter will be restarted. Collecting pymqi Using cached pymqi-1.12.10.tar.gz (91 kB) Installing build dependencies: started Inst...
- 0 kudos
I don't think so, because it won't be specific to Databricks - this is all a property of the third-party packages. And there are billions of possible library conflicts. But this is not an example of a package conflict. It's an example of not complet...
- 0 kudos
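One note that may explain the truncated error above: pymqi ships as a source distribution that compiles a C extension against the IBM MQ client libraries, so the build usually fails on clusters where that client is absent. A minimal sketch under that assumption (the /opt/mqm path is a conventional install location, not something taken from the thread):

```python
# Sketch only: pymqi builds a C extension against the IBM MQ client libraries,
# so this assumes the MQ redistributable client is already present on the node
# (commonly under /opt/mqm, e.g. placed there by a cluster init script).
import os
import subprocess
import sys

# Make sure the loader can find the MQ client libraries once the module is built.
os.environ["LD_LIBRARY_PATH"] = "/opt/mqm/lib64:" + os.environ.get("LD_LIBRARY_PATH", "")

# -v keeps the full build log that the truncated "Installing build dependencies..."
# output in the post hides, which is where the real failure will show up.
subprocess.check_call([sys.executable, "-m", "pip", "install", "-v", "pymqi"])
```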
- 5780 Views
- 1 replies
- 1 kudos
Resolved! Configure a job to use one cluster instance for multiple jobs
Hi! I have several tiny jobs that run in parallel and I want them to run on the same cluster: - Tasks of type Python Script: I send the parameters this way to run the PySpark scripts. - Job compute cluster created as (copied JSON from the Databricks Job UI) Ho...
- 1 kudos
Unfortunately, running multiple jobs in parallel using a single job cluster is not supported (yet). New in Databricks is the possibility to create a job that orchestrates multiple jobs. These jobs will, however, still use their own cluster (configurati...
- 1 kudos
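If the tiny jobs can be reshaped into tasks of a single multi-task job, the Jobs 2.1 API does let those tasks share one job cluster via job_clusters and job_cluster_key; whether that fits the setup described above is a separate question. A sketch with made-up names, node type, and script paths:

```python
# Sketch of a Jobs 2.1 payload where two Python-script tasks share one job cluster.
# Names, node type and script paths are illustrative, not from the original post.
import json

job_spec = {
    "name": "tiny-parallel-jobs",
    "job_clusters": [
        {
            "job_cluster_key": "shared_cluster",
            "new_cluster": {
                "spark_version": "13.3.x-scala2.12",
                "node_type_id": "Standard_DS3_v2",
                "num_workers": 1,
            },
        }
    ],
    "tasks": [
        {
            "task_key": "task_a",
            "job_cluster_key": "shared_cluster",
            "spark_python_task": {
                "python_file": "dbfs:/scripts/task_a.py",
                "parameters": ["--env", "prod"],
            },
        },
        {
            "task_key": "task_b",
            "job_cluster_key": "shared_cluster",
            "spark_python_task": {"python_file": "dbfs:/scripts/task_b.py"},
        },
    ],
}

# Use this as the request body for POST /api/2.1/jobs/create.
print(json.dumps(job_spec, indent=2))
```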
- 1349 Views
- 1 replies
- 1 kudos
Is there a way to display the worker types based on Spark version selection using the API?
Is there a solution that allows us to display the worker types or driver types based on the selected Spark version, using an API?
- 1 kudos
Can you clarify what you mean? Worker and driver types are not related to Spark version.
- 1 kudos
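As the reply notes, the two lists are independent of each other; if the goal is simply to populate both dropdowns programmatically, the Clusters API exposes them separately. A sketch (the host and token environment variables are placeholders):

```python
# Sketch: list available Spark versions and node (worker/driver) types via the
# Clusters API. DATABRICKS_HOST / DATABRICKS_TOKEN are placeholders.
import os
import requests

host = os.environ["DATABRICKS_HOST"]  # e.g. https://adb-....azuredatabricks.net
headers = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

spark_versions = requests.get(f"{host}/api/2.0/clusters/spark-versions",
                              headers=headers).json()["versions"]
node_types = requests.get(f"{host}/api/2.0/clusters/list-node-types",
                          headers=headers).json()["node_types"]

print([v["key"] for v in spark_versions][:5])
print([n["node_type_id"] for n in node_types][:5])
```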
- 2933 Views
- 2 replies
- 2 kudos
Resolved! Reduce EBS Default Volumes
By default, Databricks creates two volumes: one with 30 GB and the other with 150 GB. We have a lot of nodes in our pools and therefore a lot of terabytes of volumes, but we are not making any use of them in the jobs. Is there any way to reduce the volumes? ...
- 2 kudos
Yes, EBS volumes are essential, for shuffle spill for example. You are probably using them!
- 2 kudos
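If, after weighing the shuffle-spill point above, smaller or fewer volumes are still wanted, the Clusters API exposes these knobs under aws_attributes when creating clusters or pools. A sketch with purely illustrative values, not a recommendation:

```python
# Sketch of the aws_attributes block that controls the extra EBS volumes
# attached to each node. All values below are illustrative only.
cluster_spec = {
    "cluster_name": "small-ebs-example",   # hypothetical name
    "spark_version": "13.3.x-scala2.12",
    "node_type_id": "m5.xlarge",
    "num_workers": 2,
    "aws_attributes": {
        "ebs_volume_type": "GENERAL_PURPOSE_SSD",
        "ebs_volume_count": 1,   # how many extra volumes per node
        "ebs_volume_size": 64,   # size in GB of each extra volume
    },
}

# Use as the body of POST /api/2.0/clusters/create (or in the pool / job-cluster
# definition), subject to the minimum sizes Databricks enforces.
print(cluster_spec["aws_attributes"])
```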
- 7319 Views
- 1 replies
- 0 kudos
Uninstalling a preinstalled Python package from Databricks
The [Datasets](https://pypi.org/project/datasets/) Python package comes preinstalled on Databricks clusters. I want to uninstall it or completely prevent its installation when I create/start a cluster. I couldn't find any solution on Stack Overflow. And I ...
- 0 kudos
@Retired_mod note that you can't actually uninstall packages in the runtime with pip.
- 0 kudos
- 13256 Views
- 1 replies
- 0 kudos
Databricks cluster launch time
Hi Team, We have an ADF pipeline which runs a set of activities before the Azure Databricks notebooks get called. As and when the notebooks are called, our pipeline launches a new cluster for every job, with job compute as Standard F4 with a sing...
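One lever that is often used for this kind of per-job cluster start-up latency is an instance pool: the job cluster references instance_pool_id and draws pre-warmed VMs instead of provisioning new ones. A sketch of the cluster portion only (the pool id is a placeholder, and the ADF linked service would need to point at the same pool):

```python
# Sketch: job-cluster spec that draws nodes from a pre-created instance pool,
# which typically shortens launch time. The pool id is a placeholder.
job_cluster_spec = {
    "spark_version": "13.3.x-scala2.12",
    "instance_pool_id": "1234-567890-pool-placeholder",
    "num_workers": 1,
}

# The same idea applies to an ADF Databricks linked service: reference an
# existing instance pool instead of an on-demand node type.
print(job_cluster_spec)
```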
- 2973 Views
- 1 replies
- 0 kudos
The job run failed because task dependency types are temporarily disabled
I am trying the recently released conditional tasks (https://docs.databricks.com/en/workflows/jobs/conditional-tasks.html). I have created a workflow where the leaf task depends on multiple tasks and its run_if property is set as AT_LEAST_ONE_SUCCESS...
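The error text itself indicates the preview feature is switched off for the workspace rather than misconfigured; for reference, this is roughly the shape of the run_if configuration the question describes in the Jobs 2.1 API (task names are made up):

```python
# Sketch of a leaf task using run_if with multiple dependencies (Jobs 2.1 API).
# Task names and the notebook path are illustrative.
leaf_task = {
    "task_key": "publish_results",
    "depends_on": [{"task_key": "load_a"}, {"task_key": "load_b"}],
    "run_if": "AT_LEAST_ONE_SUCCESS",  # other documented values include ALL_SUCCESS, ALL_DONE
    "notebook_task": {"notebook_path": "/Workspace/Shared/publish"},
}

print(leaf_task)
```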
- 2807 Views
- 0 replies
- 0 kudos
Change cloud provider from AWS to Google
I registered a Databricks account and selected AWS as the cloud provider. May I know how to change it to Google? Thanks!
- 5225 Views
- 2 replies
- 2 kudos
Resolved! com.databricks.NotebookExecutionException: FAILED
I am running the comparisons but I get an error; I am working from a Databricks notebook. Could someone help me solve the following error: com.databricks.WorkflowException: com.databricks.NotebookExecutionException: FAILED: Notebook not found: /user...
- 2 kudos
Two things come to mind:
1. The notebook resides at a different path than '/users/cuenta_user/user/Tests'.
2. The notebook is not saved as a notebook but rather as an ordinary Python file.
- 2 kudos
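For the first point, it also helps to pass the full, case-sensitive workspace path when invoking the notebook; a sketch using dbutils.notebook.run (the path is a placeholder based on the one in the error):

```python
# Sketch: call a notebook by its full workspace path from another notebook.
# Runs inside a Databricks notebook, where dbutils is predefined.
# Workspace paths are case-sensitive ("/Users/...", not "/users/...").
result = dbutils.notebook.run(
    "/Users/cuenta_user/user/Tests",  # placeholder, adjust to the real location
    600,                              # timeout in seconds
    {},                               # arguments passed to the child notebook
)
print(result)
```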
- 2170 Views
- 0 replies
- 0 kudos
Databricks Assistant HIPAA? Future Cost?
With the Public Preview of Databricks Assistant, I have a few questions. 1) If the Azure tenant is HIPAA compliant, does that compliance also include the Databricks Assistant features? 2) Right now the product is free, but what will the cost be? Will we...
- 3189 Views
- 3 replies
- 1 kudos
Liquid Clustering
Hi Team, Could you please help us understand: 1) Performance benchmarks of liquid clustering compared to Z-ORDER and partitioning. 2) How much cost it incurs/saves compared to Z-ORDER and partitioning. Regards, Phanindra
- 1 kudos
Hi @Phani1, you can find performance-related benchmarking here: https://www.databricks.com/blog/announcing-delta-lake-30-new-universal-format-and-liquid-clustering
- 1 kudos
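For reference alongside the benchmark link, liquid clustering is declared with CLUSTER BY on the Delta table instead of partitioning or Z-ORDER. A minimal sketch, assuming a runtime recent enough to support the feature (table and column names are made up; spark is the ambient SparkSession in a Databricks notebook):

```python
# Sketch: create a Delta table with liquid clustering, then trigger clustering
# via OPTIMIZE. Table and column names are illustrative.
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales_events (
        event_date  DATE,
        customer_id BIGINT,
        amount      DOUBLE
    )
    CLUSTER BY (event_date, customer_id)
""")

# Unlike Z-ORDER, no column list is passed to OPTIMIZE; the clustering keys
# declared above are used.
spark.sql("OPTIMIZE sales_events")
```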
- 1976 Views
- 0 replies
- 0 kudos
Intermittent (cert) failure when connecting to AWS RDS
I've just upgraded a bunch of jobs to the 12.2 LTS runtime and am now getting intermittent failures with the following message: ```java.sql.SQLException: [Amazon](600000) Error setting/closing connection: PKIX path building failed: sun.security.provider.cert...
- 11193 Views
- 0 replies
- 0 kudos
What are the different life_cycle_state values in Databricks for a job cluster?
We are trying to get the cluster life_cycle_state using the API and we are able to get various values as below: RUNNING, PENDING, TERMINATED, INTERNAL_ERROR. Are there any other values apart from the above? It would be a great help.
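The values listed above, in particular INTERNAL_ERROR, match the life_cycle_state field returned by the Jobs runs API rather than the Clusters API. A sketch of reading that field, with the additional documented values noted in the comments (host, token, and run id are placeholders):

```python
# Sketch: read a job run's life_cycle_state via the Jobs API.
# Host, token and run_id are placeholders.
import os
import requests

host = os.environ["DATABRICKS_HOST"]
headers = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

resp = requests.get(f"{host}/api/2.1/jobs/runs/get",
                    headers=headers,
                    params={"run_id": 123456789})  # placeholder run id
state = resp.json()["state"]

# The documented enum for this field also includes TERMINATING and SKIPPED
# alongside the values listed in the post; newer API versions add queue-related
# states as well (check the current API reference for the authoritative list).
print(state["life_cycle_state"], state.get("result_state"))
```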
- 2317 Views
- 1 replies
- 2 kudos
Create a UDF in PySpark
Hi, I need the help of this community; unfortunately, creating UDFs is not my strongest skill set. I need to create a UDF that will join two tables together. The problem is that one table has two id columns: the Name table has id1 and id2, and the Transaction table has ...
- 2 kudos
Hi, I am not sure if I understand your question directly, but let me give it a try: - The constraint is: if id2 in the Name table is populated, then join with id2. So I think you could first make a column called 'id' in which you get id2 if it is popula...
- 2 kudos
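The coalesce idea described above does not actually need a UDF; it can be expressed directly with DataFrame operations. A sketch with made-up DataFrames and column names beyond id1/id2:

```python
# Sketch of the approach described above: build a single join key with coalesce
# (id2 when populated, otherwise id1) instead of writing a UDF.
# DataFrame contents and extra column names are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

name_df = spark.createDataFrame(
    [("Alice", 1, None), ("Bob", 2, 20)],
    ["name", "id1", "id2"],
)
txn_df = spark.createDataFrame(
    [(1, 100.0), (20, 250.0)],
    ["id", "amount"],   # the Transaction table's id column name is assumed here
)

# Use id2 when it is populated, otherwise fall back to id1, then join on it.
name_keyed = name_df.withColumn("join_id", F.coalesce("id2", "id1"))
result = name_keyed.join(txn_df, name_keyed.join_id == txn_df.id, "left")
result.show()
```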
- 9483 Views
- 3 replies
- 1 kudos
Want to disable cell scrollers.
There are two scrollbars visible in my notebook: one for the cell and another for the notebook. How can I disable the cell scrollbar, since I am having a hard time navigating to my code by scrolling the cell every time?
- 1 kudos
Hi @Mahajan, what exactly do you mean by disabling the cell scroll? If at all there is such an option, it basically means you can't scroll the cell at all and the cell view is fixed. This makes the cell redundant, as at any given point of ti...
- 1 kudos