- 4966 Views
- 1 reply
- 0 kudos
Support for managed identity-based authentication in the Python Kafka client
We followed this document https://docs.databricks.com/aws/en/connect/streaming/kafka?language=Python#msk-aad to use the Kafka client to read events from our event hub for a feature. As part of the SFI, the guidance is to move away from client secret and u...
- 0 kudos
Currently, Databricks does not support using Managed Identities directly for Kafka client authentication (e.g., MSK IAM or Event Hubs Kafka endpoint) in Python Structured Streaming connections. However, there is a supported and secure alternative tha...
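For reference, a minimal hedged sketch of the service-principal OAuth pattern described in the linked Databricks doc for an Event Hubs Kafka endpoint; the namespace, tenant, topic and secret-scope names are placeholders, and the exact shaded class paths can differ by Databricks Runtime version:

```python
# Hedged sketch: OAuth (service principal) auth for the Event Hubs Kafka endpoint.
# All names below (namespace, secret scope/keys, topic) are hypothetical placeholders.
namespace = "mynamespace.servicebus.windows.net"
tenant_id = dbutils.secrets.get("kafka", "tenant-id")
client_id = dbutils.secrets.get("kafka", "client-id")
client_secret = dbutils.secrets.get("kafka", "client-secret")

sasl_jaas = (
    "kafkashaded.org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required "
    f'clientId="{client_id}" clientSecret="{client_secret}" '
    f'scope="https://{namespace}/.default" ssl.protocol="SSL";'
)

df = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", f"{namespace}:9093")
    .option("subscribe", "my-topic")
    .option("kafka.security.protocol", "SASL_SSL")
    .option("kafka.sasl.mechanism", "OAUTHBEARER")
    .option("kafka.sasl.jaas.config", sasl_jaas)
    .option("kafka.sasl.oauthbearer.token.endpoint.url",
            f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token")
    # The shaded callback-handler class path below is an assumption; check your DBR version.
    .option("kafka.sasl.login.callback.handler.class",
            "kafkashaded.org.apache.kafka.common.security.oauthbearer.secured.OAuthBearerLoginCallbackHandler")
    .load()
)
```

Keeping the client secret in a secret scope (as above) rather than in notebook code is the usual way to soften the "move away from client secret" concern until managed-identity support lands.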
- 3213 Views
- 2 replies
- 0 kudos
Databricks is not mounting with storage account, giving java.lang.Exception error 480
Hi everyone, I am currently facing an issue in our Test environment where Databricks is not able to mount the storage account. We are using the same mount in our other environments (Dev, Preprod and Prod) and it works fine there witho...
- 0 kudos
This issue in your Test environment, where Databricks fails to mount an Azure Storage account with the error java.lang.Exception: 480, is most likely related to expired credentials or cached authentication tokens, even though the same configuration w...
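If stale tokens are indeed the culprit, the usual remedy is to drop the mount and recreate it with fresh service-principal credentials. A minimal sketch, assuming ABFSS with OAuth; the container, storage account, tenant, secret scope and mount point names are placeholders:

```python
# Hedged sketch: remount an ADLS Gen2 container with fresh OAuth credentials.
# All names (scope, keys, container, account, tenant, mount point) are hypothetical.
configs = {
    "fs.azure.account.auth.type": "OAuth",
    "fs.azure.account.oauth.provider.type":
        "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
    "fs.azure.account.oauth2.client.id": dbutils.secrets.get("adls", "sp-client-id"),
    "fs.azure.account.oauth2.client.secret": dbutils.secrets.get("adls", "sp-client-secret"),
    "fs.azure.account.oauth2.client.endpoint":
        "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
}

mount_point = "/mnt/testdata"
if any(m.mountPoint == mount_point for m in dbutils.fs.mounts()):
    dbutils.fs.unmount(mount_point)  # drop the stale mount and its cached token

dbutils.fs.mount(
    source="abfss://<container>@<storageaccount>.dfs.core.windows.net/",
    mount_point=mount_point,
    extra_configs=configs,
)
```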
- 2465 Views
- 2 replies
- 1 kudos
Cannot import editable installed module in notebook
Hi, I have the following directory structure:
- mypkg/
  - setup.py
  - mypkg/
    - __init__.py
    - module.py
  - scripts/
    - main  # notebook

From the `main` notebook I have a cell that runs %pip install -e /path/to/mypkg. This command appears to succ...
- 1 kudos
Hey @newenglander — always great to meet a fellow New Englander! Could you share a bit more detail about your setup? For example, are you running on classic compute or serverless? And are you working in a customer workspace, or using Databricks Free ...
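While those details come in, a hedged sketch of two common workarounds for this kind of import failure; the /Workspace path and module names are hypothetical placeholders, not taken from the post:

```python
# Hedged sketch: run each step in its own notebook cell.
# Cell 1 (notebook magic): %pip install -e /Workspace/Repos/<user>/mypkg

# Cell 2: restart the Python process so the editable install's metadata is picked up.
dbutils.library.restartPython()

# Cell 3: if the import still fails, add the package root to sys.path directly.
import sys
sys.path.append("/Workspace/Repos/<user>/mypkg")

import mypkg.module  # hypothetical module from the layout described above
```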
- 7869 Views
- 5 replies
- 1 kudos
Spatial Queries
Hi, I'm trying to execute the following code:
%sql
SELECT LSOA21CD,
       ST_X(ST_GeomFromWKB(Geom_Varbinary)) AS STX,
       ST_Y(ST_GeomFromWKB(Geom_Varbinary)) AS STY
FROM ordnance_survey_lsoas_december_2021_population_weighted_centroids
WHERE LSOA21CD ...
- 1 kudos
@Corar You might want to enable that explicitly by setting the 'spark.databricks.geo.st.enabled' configuration to 'true'.
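For reference, a minimal sketch of setting that flag before running the query (the flag and query come from the posts above; whether a session-level setting is enough may depend on your runtime):

```python
# Hedged sketch: enable the ST_* geospatial expressions for this session, then run the query.
spark.conf.set("spark.databricks.geo.st.enabled", "true")

df = spark.sql("""
    SELECT LSOA21CD,
           ST_X(ST_GeomFromWKB(Geom_Varbinary)) AS STX,
           ST_Y(ST_GeomFromWKB(Geom_Varbinary)) AS STY
    FROM ordnance_survey_lsoas_december_2021_population_weighted_centroids
""")
df.display()
```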
- 941 Views
- 6 replies
- 0 kudos
Getting [08S01/500593] Can't connect to database - [Databricks][JDBCDriver](500593) Communication
I am getting the below error when connecting to a Databricks instance using the JDBC driver. ERROR: [08S01/500593] Can't connect to database - [Databricks][JDBCDriver](500593) Communication link failure. Failed to connect to server. Reason: HTTP Response code: 401, ...
- 0 kudos
I am trying to connect to Databricks from Mainframe z/OS using the JDBC driver with the below IBM Java version: java version "11.0.26" 2025-01-21, IBM Semeru Runtime Certified Edition for z/OS 11.0.26.0 (build 11.0.26+4), IBM J9 VM 11.0.26.0 (build z/OS-Release...
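A 401 is an authentication failure rather than a network problem, so the auth portion of the JDBC URL is worth double-checking. A hedged sketch of the usual URL shape for personal-access-token auth, as I understand the driver docs; the host, HTTP path and token are placeholders and exact settings can differ by driver version:

```
jdbc:databricks://<server-hostname>:443;httpPath=<http-path>;ssl=1;AuthMech=3;UID=token;PWD=<personal-access-token>
```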
- 248 Views
- 5 replies
- 1 kudos
Resolved! External MCP representing user data permissions
Hello Community! I am writing to you with a question and hope that you will help me find the right approach. I am building an enterprise AI system and the organization stores the data on Databricks. To access the given data, you have to raise a request...
- 1 kudos
Ignore for now that you have an MCP server. The problem you are trying to solve: 1) an AI agent needs to access data inside Databricks, and 2) the agent needs to operate with the user's permissions. There are multiple paths: 1) directly using OAuth/HTTP https://docs.databric...
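To make the OAuth/HTTP path concrete, a minimal hedged sketch: if the agent holds an OAuth access token issued for the end user, it can call Databricks with that token so queries run under the user's permissions. This assumes the databricks-sdk package; the host, warehouse ID, table and the token-acquisition step are placeholders:

```python
# Hedged sketch: run a SQL statement with a user-scoped OAuth token so that
# Unity Catalog permissions of that user apply. Token acquisition is out of scope here.
from databricks.sdk import WorkspaceClient

user_access_token = "<user-scoped-oauth-token>"            # placeholder
w = WorkspaceClient(host="https://<workspace-host>", token=user_access_token)

resp = w.statement_execution.execute_statement(
    warehouse_id="<warehouse-id>",                          # placeholder
    statement="SELECT * FROM main.sales.orders LIMIT 10",   # hypothetical table
)
print(resp.status.state)
```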
- 1663 Views
- 1 reply
- 1 kudos
CREATE Community_User_Group [IF NOT EXISTS] IN MADRID(SPAIN)
Hi, I would like to get some support in creating a Community User Group in Madrid, Spain. It would be nice to host events/meetings/discussions... Regards, Ángel
- 1 kudos
Hi Ángel, I see your post is from quite some time ago, but I wanted to say that I’d also love to see a Databricks User Group here in Madrid. Although I’m not new to Databricks, I haven’t really taken much advantage of the community so far due to lack o...
- 121 Views
- 1 reply
- 0 kudos
How to Optimize Spark Jobs in Databricks for Large-Scale Geospatial Data Processing?
I’m currently analyzing a large geospatial dataset focused on Michigan county boundaries and map data, and I’m using Apache Spark on Databricks to process and transform millions of records. Even though I’ve optimized basic things like repartitioning, ...
- 0 kudos
I do not have experience with geospatial data on Databricks, but I do know that, for a while now, Sedona can be installed on Databricks. Sedona is built for large-scale geospatial data processing. Sounds like something for you, no? https://sedona.apache....
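If you try Sedona, a minimal hedged sketch of getting it going in a notebook; this assumes the matching Sedona Spark JARs are installed on the cluster and the apache-sedona Python package is available, and the table/column names are made up:

```python
# Hedged sketch: register Apache Sedona's spatial SQL functions in a Databricks notebook.
# Prerequisites (assumed): Sedona JARs on the cluster and `%pip install apache-sedona`.
from sedona.spark import SedonaContext

sedona = SedonaContext.create(spark)  # registers ST_* SQL functions for this session

# Hypothetical table and column names, just to show the ST_* functions in use.
counties = sedona.sql("""
    SELECT county_name,
           ST_Area(ST_GeomFromWKT(wkt_geom)) AS area
    FROM michigan_county_boundaries
""")
counties.show(5)
```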
- 11975 Views
- 16 replies
- 3 kudos
Is it possible to view Databricks cluster metrics using REST API
I am looking for some help on getting Databricks cluster metrics such as memory utilization, CPU utilization, memory swap utilization and free file system using the REST API. I am trying it in Postman using a Databricks token and with my Service Principal bear...
- 3 kudos
Is there any solution for getting CPU and memory metrics for Hive metastore-backed workloads? We are not using UC, so we can't use system tables.
- 12424 Views
- 4 replies
- 0 kudos
Resolved! Unable to add column comment on a View. Any way to update comments on multiple columns in bulk?
I noticed that, unlike "Alter Table", there is no "Alter View" command to add a comment on a column of an existing view. This is a regular view created on tables (not a materialized view). If the underlying table column has a comment then the view inh...
- 0 kudos
Use COMMENT ON. See "COMMENT ON | Databricks on AWS" in the documentation.
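Building on that, a hedged sketch for the "bulk" part of the question: loop over the columns and issue one COMMENT ON COLUMN statement each. This assumes a DBR/SQL warehouse version where COMMENT ON COLUMN is available for views; the view name and comment texts are placeholders:

```python
# Hedged sketch: bulk-apply column comments to a view with COMMENT ON COLUMN.
view_name = "main.reporting.my_view"          # placeholder
comments = {                                   # hypothetical column comments
    "customer_id": "Natural key of the customer",
    "order_total": "Order amount in EUR",
}

for column, text in comments.items():
    escaped = text.replace("'", "\\'")         # escape single quotes for the SQL literal
    spark.sql(f"COMMENT ON COLUMN {view_name}.{column} IS '{escaped}'")
```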
- 5410 Views
- 2 replies
- 0 kudos
Improve query performance of direct query with Databricks
I’m building a dashboard in Power BI’s Pro Workspace, connecting data via Direct Query from Databricks (around 60 million rows from 15 combined tables), using a serverless SQL warehouse (small size and 4 clusters). The problem is that the dashboard is taking to...
- 0 kudos
@viniciuscini have you managed to get it working well for you?
- 795 Views
- 7 replies
- 15 kudos
Unity catalogues - What would you do
If you were creating Unity Catalogs again, what would you do differently based on your past experience?
- 15 kudos
@nayan_wylde no, don't do that, hehe. It was an example of an extreme approach. Usually you use catalogs to separate environments and, in enterprises, to separate divisions like the customer tower, marketing tower, finance tower, etc.
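As a concrete illustration of that split, a minimal hedged sketch; the catalog and group names are made up:

```python
# Hedged sketch: one catalog per environment/division, with a USE CATALOG grant per team.
for env in ("dev", "test", "prod"):
    for division in ("customer", "marketing", "finance"):
        catalog = f"{env}_{division}"
        spark.sql(f"CREATE CATALOG IF NOT EXISTS {catalog}")
        spark.sql(f"GRANT USE CATALOG ON CATALOG {catalog} TO `{division}_analysts`")
```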
- 473 Views
- 3 replies
- 2 kudos
Resolved! How to reduce data loss for Delta Lake on Azure when failing from primary to secondary regions?
Let’s say we have a big data application where data loss is not an option. With GZRS (geo-zone-redundant storage) redundancy we would achieve zero data loss as long as the primary region is alive – the writer waits for acks from two or more Azure availability zo...
- 2 kudos
Databricks is working on improvements and new functionality related to that. For now, the only solution is a DEEP CLONE. You can run it more frequently or implement your own replication based on a change data feed. You could use delta sharing for tha...
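For reference, a minimal sketch of the DEEP CLONE approach run on a schedule; the catalog, schema and table names are placeholders:

```python
# Hedged sketch: periodic DEEP CLONE replication into a secondary-region catalog.
# Schedule this as a job; re-running DEEP CLONE is incremental (only new/changed files copy).
tables = ["orders", "customers", "payments"]   # hypothetical table list

for t in tables:
    spark.sql(f"""
        CREATE OR REPLACE TABLE secondary_catalog.sales.{t}
        DEEP CLONE primary_catalog.sales.{t}
    """)
```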
- 268 Views
- 2 replies
- 1 kudos
Delta comparison architecture using flatMapGroupsWithState in Structured Streaming
I am designing a Structured Streaming job in Azure Databricks (using Scala) which will consume messages from two event hubs; let's call them source and target. I would like your feedback on the below flow, whether it will survive the production load and ...
- 1 kudos
It is hard to understand what the source is and what the target is. Some charts would be useful, as well as information on how long the state is kept. My solution usually is:
- use declarative Lakeflow pipelines (DLT) if possible;
- if not, consider handlin...
- 710 Views
- 6 replies
- 4 kudos
Resolved! Databricks partner Tech Summit FY26 access
I'm trying to access the recordings of Partner Tech Summit FY26, which happened a month back. It says the lobby is closed. Is there any other way I can access the recordings? I'm yet to watch the day 2 sessions.
- 4 kudos
Hi @saurabh18cs, check the link shared by @Advika. Make sure you are logged in using your partner account. Link - https://partner-academy.databricks.com/learn/catalog/view/168SS: