Get Started Discussions
Start your journey with Databricks by joining discussions on getting started guides, tutorials, and introductory topics. Connect with beginners and experts alike to kickstart your Databricks experience.

Forum Posts

Hubert-Dudek
by Esteemed Contributor III
  • 9 Views
  • 0 replies
  • 0 kudos

The purpose of your All-Purpose Cluster

A small, hidden, but useful cluster setting: you can specify that no jobs are allowed on the all-purpose cluster, or, vice versa, that an all-purpose cluster can be used only by jobs. Read more: https://databrickster.medium.com/purpose-for-your-...

(Attached screenshot: no_jobs_cluster.png)
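A minimal sketch of how that restriction can be applied through the Clusters API, assuming a placeholder workspace host, token, and cluster spec (the edit endpoint expects the full cluster definition, so the fields below are illustrative only):

    # Hedged sketch: restrict an all-purpose cluster so jobs cannot run on it.
    # Host, token, and all cluster fields are placeholders.
    import requests

    host = "https://<workspace-host>"
    token = "<pat-or-oauth-token>"

    payload = {
        "cluster_id": "<cluster-id>",
        "cluster_name": "analysis-only",
        "spark_version": "15.4.x-scala2.12",
        "node_type_id": "<node-type>",
        "num_workers": 2,
        # Disallow jobs, keep notebooks allowed; flip the booleans for a jobs-only cluster.
        "workload_type": {"clients": {"jobs": False, "notebooks": True}},
    }

    resp = requests.post(
        f"{host}/api/2.1/clusters/edit",
        headers={"Authorization": f"Bearer {token}"},
        json=payload,
    )
    resp.raise_for_status()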
Kruthika
by New Contributor
  • 4850 Views
  • 1 replies
  • 0 kudos

Support for managed identity based authentication in python kafka client

We followed this document https://docs.databricks.com/aws/en/connect/streaming/kafka?language=Python#msk-aad to use the Kafka client to read events from our event hub for a feature. As part of the SFI, the guidance is to move away from client secret and u...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

Currently, Databricks does not support using Managed Identities directly for Kafka client authentication (e.g., MSK IAM or Event Hubs Kafka endpoint) in Python Structured Streaming connections. However, there is a supported and secure alternative tha...
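The reply above is truncated, so the alternative it refers to is not shown here. For context, a commonly documented pattern for the Event Hubs Kafka endpoint is service-principal OAuth over SASL/OAUTHBEARER; the sketch below uses placeholder names, and the exact shaded class paths and options can differ by Databricks Runtime and Kafka client version:

    # Hedged sketch only: service-principal OAuth for an Event Hubs Kafka endpoint.
    # All names are placeholders; verify class paths and options against your runtime's docs.
    eh_namespace = "<eventhubs-namespace>"
    tenant_id = "<tenant-id>"
    client_id = "<sp-client-id>"
    client_secret = dbutils.secrets.get("kafka-scope", "sp-client-secret")  # hypothetical secret scope

    jaas_config = (
        "kafkashaded.org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required "
        f'clientId="{client_id}" clientSecret="{client_secret}" '
        f'scope="https://{eh_namespace}.servicebus.windows.net/.default";'
    )

    df = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", f"{eh_namespace}.servicebus.windows.net:9093")
        .option("subscribe", "<topic-name>")
        .option("kafka.security.protocol", "SASL_SSL")
        .option("kafka.sasl.mechanism", "OAUTHBEARER")
        .option("kafka.sasl.jaas.config", jaas_config)
        .option("kafka.sasl.oauthbearer.token.endpoint.url",
                f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token")
        # Class path may include ".secured." on older Kafka client versions.
        .option("kafka.sasl.login.callback.handler.class",
                "kafkashaded.org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginCallbackHandler")
        .load()
    )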

Sarathk
by New Contributor
  • 3129 Views
  • 2 replies
  • 0 kudos

Databricks is not mounting with storage account, giving java.lang.Exception error 480

Hi everyone, I am currently facing an issue in our Test environment where Databricks is not able to mount the storage account. We are using the same mount in our other environments (Dev, Preprod, and Prod) and it works fine there witho...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

This issue in your Test environment, where Databricks fails to mount an Azure Storage account with the error java.lang.Exception: 480, is most likely related to expired credentials or cached authentication tokens, even though the same configuration w...
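A minimal sketch of the usual remediation (unmount and remount with current service principal credentials), with placeholder secret scope, tenant, container, and storage account names:

    # Hedged sketch: refresh the mount with current service principal credentials.
    # Secret scope/key names, tenant, container, and storage account are placeholders.
    configs = {
        "fs.azure.account.auth.type": "OAuth",
        "fs.azure.account.oauth.provider.type":
            "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
        "fs.azure.account.oauth2.client.id": dbutils.secrets.get("test-scope", "sp-client-id"),
        "fs.azure.account.oauth2.client.secret": dbutils.secrets.get("test-scope", "sp-client-secret"),
        "fs.azure.account.oauth2.client.endpoint":
            "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
    }

    mount_point = "/mnt/test-data"
    if any(m.mountPoint == mount_point for m in dbutils.fs.mounts()):
        dbutils.fs.unmount(mount_point)  # drop the stale mount and its cached token
    dbutils.fs.mount(
        source="abfss://<container>@<storage-account>.dfs.core.windows.net/",
        mount_point=mount_point,
        extra_configs=configs,
    )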

1 More Replies
newenglander
by New Contributor II
  • 2336 Views
  • 2 replies
  • 1 kudos

Cannot import editable installed module in notebook

Hi, I have the following directory structure:
- mypkg/
  - setup.py
  - mypkg/
    - __init__.py
    - module.py
  - scripts/
    - main  # notebook
From the `main` notebook I have a cell that runs: %pip install -e /path/to/mypkg. This command appears to succ...
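Not an answer from the thread, just a commonly suggested workaround sketch, assuming classic compute and the placeholder path from the post:

    # Option 1 (hypothetical workaround): after running `%pip install -e /path/to/mypkg`
    # in its own cell, restart the Python process so the editable install is picked up.
    dbutils.library.restartPython()

    # Option 2 (independent fallback): skip the editable install and put the project
    # root (the folder that contains the inner mypkg/ package) on sys.path directly.
    import sys
    sys.path.insert(0, "/path/to/mypkg")  # placeholder path from the post
    from mypkg import module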

Latest Reply
Louis_Frolio
Databricks Employee
  • 1 kudos

Hey @newenglander — always great to meet a fellow New Englander! Could you share a bit more detail about your setup? For example, are you running on classic compute or serverless? And are you working in a customer workspace, or using Databricks Free ...

1 More Replies
GMB
by New Contributor II
  • 7786 Views
  • 5 replies
  • 1 kudos

Spatial Queries

Hi, I'm trying to execute the following code:
%sql
SELECT LSOA21CD,
       ST_X(ST_GeomFromWKB(Geom_Varbinary)) AS STX,
       ST_Y(ST_GeomFromWKB(Geom_Varbinary)) AS STY
FROM ordnance_survey_lsoas_december_2021_population_weighted_centroids
WHERE LSOA21CD ...

Latest Reply
ivan-kurchenko
New Contributor II
  • 1 kudos

@Corar You might want to enable that explicitly by setting the 'spark.databricks.geo.st.enabled' configuration to 'true'.
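A minimal sketch of applying that setting at the session level; whether it must instead go into the cluster's Spark config can depend on your runtime:

    # Session-level sketch; the setting name comes from the reply above.
    spark.conf.set("spark.databricks.geo.st.enabled", "true")

    spark.sql("""
        SELECT ST_X(ST_GeomFromWKB(Geom_Varbinary)) AS STX,
               ST_Y(ST_GeomFromWKB(Geom_Varbinary)) AS STY
        FROM ordnance_survey_lsoas_december_2021_population_weighted_centroids
    """).show()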

4 More Replies
Saubhik
by New Contributor III
  • 856 Views
  • 6 replies
  • 0 kudos

Getting [08S01/500593] Can't connect to database - [Databricks][JDBCDriver](500593) Communication

I am getting the below error when connecting to a Databricks instance using the JDBC driver. ERROR: [08S01/500593] Can't connect to database - [Databricks][JDBCDriver](500593) Communication link failure. Failed to connect to server. Reason: HTTP Response code: 401, ...

Latest Reply
Saubhik
New Contributor III
  • 0 kudos

I am trying to connect to Databricks from Mainframe z/OS using the JDBC driver, with the below IBM Java version:
java version "11.0.26" 2025-01-21
IBM Semeru Runtime Certified Edition for z/OS 11.0.26.0 (build 11.0.26+4)
IBM J9 VM 11.0.26.0 (build z/OS-Release...

5 More Replies
maikel
by New Contributor
  • 149 Views
  • 5 replies
  • 0 kudos

External MCP representing user data permissions

Hello Community! I am writing to you with a question and hope that you will help me find the right approach. I am building an AI enterprise system, and the organization stores its data on Databricks. To access the given data, you have to raise a request...

Latest Reply
smithsonian
New Contributor
  • 0 kudos

Ignore for now that you have an MCP server. The problem you are trying to solve:
1) An AI agent needs to access data inside Databricks.
2) The agent needs to operate with the user's permissions.
There are multiple paths:
1) Directly using OAuth/HTTP: https://docs.databric...

4 More Replies
__angel__
by New Contributor III
  • 1621 Views
  • 1 replies
  • 1 kudos

CREATE Community_User_Group [IF NOT EXISTS] IN MADRID(SPAIN)

Hi, I would like to get some support in creating a Community User Group in Madrid, Spain. It would be nice to host events/meetings/discussions... Regards, Ángel

Latest Reply
anastasia_lc
New Contributor II
  • 1 kudos

Hi Ángel, I see your post is from quite some time ago, but I wanted to say that I'd also love to see a Databricks User Group here in Madrid. Although I'm not new to Databricks, I haven't really taken much advantage of the community so far due to lack o...

kristym
by New Contributor
  • 84 Views
  • 1 replies
  • 0 kudos

How to Optimize Spark Jobs in Databricks for Large-Scale Geospatial Data Processing?

I'm currently analyzing a large geospatial dataset focused on Michigan county boundaries and map data, and I'm using Apache Spark on Databricks to process and transform millions of records. Even though I've optimized basic things like repartitioning, ...

Latest Reply
-werners-
Esteemed Contributor III
  • 0 kudos

I do not have experience with geospatial data on Databricks, but I do know that, for a while now, Sedona has been installable on Databricks. Sedona is built for large-scale geospatial data processing. Sounds like something for you, no? https://sedona.apache....
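For illustration, a minimal sketch of what Sedona on Databricks can look like, assuming the apache-sedona Python package and matching Sedona jars are already installed on the cluster (the API differs slightly between Sedona versions):

    # Hedged sketch: register Sedona's spatial SQL functions and run a simple query.
    from sedona.spark import SedonaContext

    sedona = SedonaContext.create(spark)  # registers ST_* functions for this session
    sedona.sql("""
        SELECT ST_Contains(
                 ST_GeomFromWKT('POLYGON((0 0, 0 10, 10 10, 10 0, 0 0))'),
                 ST_Point(5.0, 5.0)
               ) AS inside
    """).show()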

Prashanthkumar
by New Contributor III
  • 11780 Views
  • 16 replies
  • 3 kudos

Is it possible to view Databricks cluster metrics using REST API

I am looking for some help getting Databricks cluster metrics such as memory utilization, CPU utilization, memory swap utilization, and free file system space using the REST API. I am trying it in Postman using a Databricks token and with my Service Principal bear...

Latest Reply
ajayIG
New Contributor II
  • 3 kudos

Has any solution been found to get CPU and memory metrics for Hive-metastore-backed workloads? We are not using UC, so we can't use system tables.

15 More Replies
prasad_vaze
by New Contributor III
  • 12248 Views
  • 4 replies
  • 0 kudos

Resolved! Unable to add column comment on a View. Any way to update comments on multiple columns in bulk?

I noticed that, unlike "Alter Table", there is no "Alter View" command to add a comment on a column in an existing view. This is a regular view created on tables (not a materialized view). If the underlying table column has a comment, then the view inh...

Get Started Discussions
Data Governance
Unity Catalog
Latest Reply
Pinei
New Contributor II
  • 0 kudos

Use COMMENT ON: COMMENT ON | Databricks on AWS
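A small sketch of scripting that for the bulk case from the question; the view and column names are placeholders, and COMMENT ON COLUMN support depends on your runtime/warehouse version:

    # Hedged sketch: bulk-apply column comments on a view with COMMENT ON.
    comments = {
        "customer_id": "Natural key from the source system",
        "order_ts": "Order timestamp in UTC",
    }
    for col, text in comments.items():
        spark.sql(
            f"COMMENT ON COLUMN my_catalog.my_schema.my_view.{col} IS '{text}'"
        )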

3 More Replies
viniciuscini
by New Contributor
  • 5331 Views
  • 2 replies
  • 0 kudos

Improve query performance of direct query with Databricks

I'm building a dashboard in Power BI's Pro workspace, connecting data via DirectQuery from Databricks (around 60 million rows from 15 combined tables), using a serverless SQL warehouse (small size and 4 clusters). The problem is that the dashboard is taking to...

Latest Reply
ArekKemp
New Contributor II
  • 0 kudos

@viniciuscini have you managed to get it working well for you?

1 More Replies
Rezakorehi
by New Contributor II
  • 716 Views
  • 7 replies
  • 15 kudos

Unity catalogues - What would you do

If you were creating Unity Catalogs again, what would you do differently based on your past experience?

Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 15 kudos

@nayan_wylde no, don't do that, hehe. It was an example of an extreme approach. Usually, use catalogs to separate environments and, in enterprises, to separate divisions like the customer tower, marketing tower, finance tower, etc.

6 More Replies
YuriS
by New Contributor II
  • 429 Views
  • 3 replies
  • 2 kudos

Resolved! How to reduce data loss for Delta Lake on Azure when failing over from primary to secondary regions?

Let's say we have a big data application where data loss is not an option. With GZRS (geo-zone-redundant storage) redundancy, we would achieve zero data loss if the primary region is alive – the writer waits for acks from two or more Azure availability zo...

Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 2 kudos

Databricks is working on improvements and new functionality related to that. For now, the only solution is a DEEP CLONE. You can run it more frequently or implement your own replication based on a change data feed. You could use delta sharing for tha...
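For reference, a minimal sketch of the DEEP CLONE approach with placeholder catalog/schema/table names, run on whatever schedule matches your recovery point objective:

    # Hedged sketch: periodically refresh a secondary-region replica via DEEP CLONE.
    spark.sql("""
        CREATE OR REPLACE TABLE dr_catalog.sales.orders_replica
        DEEP CLONE prod_catalog.sales.orders
    """)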

2 More Replies
