Community Platform Discussions
Connect with fellow community members to discuss general topics related to the Databricks platform, industry trends, and best practices. Share experiences, ask questions, and foster collaboration within the community.

Forum Posts

himoshi
by New Contributor II
  • 562 Views
  • 0 replies
  • 0 kudos

Clarification on overwriting in Unity Catalog

Hello, while reviewing Unity Catalog to better understand its limitations, I came across the following statement: "Overwrite mode for DataFrame write operations into Unity Catalog is supported only for Delta tables, not for other file formats." The user...

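The documented behavior is straightforward to demonstrate. A minimal sketch, assuming a Unity Catalog-enabled notebook where `spark` is predefined and a hypothetical `main.demo` catalog and schema: overwriting a managed table works because managed Unity Catalog tables are Delta by default, while the same overwrite with a non-Delta format is what the quoted limitation rules out.

    # Hedged sketch; "main.demo.overwrite_target" is a hypothetical table name.
    df = spark.range(10).withColumnRenamed("id", "value")  # toy DataFrame

    # Supported: overwrite into a Unity Catalog managed table (Delta by default).
    df.write.mode("overwrite").saveAsTable("main.demo.overwrite_target")

    # Not supported per the quoted statement: overwrite with a non-Delta format.
    # df.write.format("parquet").mode("overwrite").saveAsTable("main.demo.overwrite_target")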
Cloud_Architect
by New Contributor III
  • 457 Views
  • 1 reply
  • 0 kudos

Need help calculating the cost benefits of switching from interactive to job cluster

I need help calculating the cost benefits of switching from interactive to job cluster. Can you help me get some formulas on how to calculate the cost differences in Databricks?

Latest Reply
jacovangelder
Honored Contributor
  • 0 kudos

Assuming you're on Azure (otherwise use the AWS/GCP equivalent), did you try the Azure cost calculator? https://azure.microsoft.com/en-us/pricing/details/databricks/ A question to ask yourself to get more specific: do you have an idea how many DBUs you...

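For a back-of-the-envelope comparison, the usual formula is: cost = (DBU rate x DBUs per node-hour x nodes x hours) + (VM price x nodes x hours), with jobs compute billed at a lower DBU rate than all-purpose (interactive) compute. A hedged sketch with entirely hypothetical rates (check the Databricks and cloud pricing pages for the real numbers for your SKU and VM type):

    # All rates below are hypothetical placeholders, not real prices.
    ALL_PURPOSE_DBU_RATE = 0.55   # $/DBU, all-purpose (interactive) compute
    JOBS_DBU_RATE = 0.30          # $/DBU, jobs compute
    VM_RATE = 1.00                # $/hour per node, cloud VM price
    DBU_PER_NODE_HOUR = 2.0       # DBU consumption of the chosen VM type
    NODES = 4
    HOURS = 100                   # monthly runtime

    def monthly_cost(dbu_rate):
        dbu_cost = dbu_rate * DBU_PER_NODE_HOUR * NODES * HOURS
        vm_cost = VM_RATE * NODES * HOURS  # VM cost is the same either way
        return dbu_cost + vm_cost

    interactive = monthly_cost(ALL_PURPOSE_DBU_RATE)
    job = monthly_cost(JOBS_DBU_RATE)
    print(f"interactive ~= ${interactive:,.2f}, job ~= ${job:,.2f}, "
          f"savings ~= ${interactive - job:,.2f}")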
MadCowTM
by New Contributor II
  • 1817 Views
  • 1 reply
  • 2 kudos

Resolved! get_json_object and json path filtering

I have the following string [{"key":"abc","value":{"string_value":"abc123"}},{"key":"def","value":{"int_value":123}},{"key":"ghi","value":{"string_value":"ghi456"}}] and from that string I need to extract key.value.string_value for the key with the value equ...

Latest Reply
brickster_2018
Databricks Employee
  • 2 kudos

Can you try with the below code snippet: WITH exploded_json AS ( SELECT explode(from_json( '[{"key":"abc","value":{"string_value":"abc123"}},{"key":"def","value":{"int_value":123}},{"key":"ghi","value":{"string_value":"ghi456"}}]', 'array<s...

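A complete, runnable PySpark version of the same idea (hedged sketch: the JSON string, schema, and the filter key "abc" come from the question; the DataFrame setup is illustrative):

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()
    raw = ('[{"key":"abc","value":{"string_value":"abc123"}},'
           '{"key":"def","value":{"int_value":123}},'
           '{"key":"ghi","value":{"string_value":"ghi456"}}]')
    schema = "array<struct<key:string,value:struct<string_value:string,int_value:int>>>"

    result = (spark.createDataFrame([(raw,)], ["raw"])
              # Parse the JSON array and turn each element into its own row.
              .select(F.explode(F.from_json("raw", schema)).alias("kv"))
              # Keep only the entry whose key is "abc".
              .filter(F.col("kv.key") == "abc")
              .select(F.col("kv.value.string_value").alias("string_value")))
    result.show()  # -> abc123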
unity_Catalog
by New Contributor III
  • 647 Views
  • 0 replies
  • 0 kudos

UCX installation error

I am getting the below error while installing UCX, but the installation is done in the workspace. I have admin privileges on the workspace. The error suggests checking the token or the workspace URL, but they are provided correctly. Then why is the below error sho...

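One quick way to rule out a bad token or URL before re-running the installer is to hit the workspace with the same credentials through the Databricks SDK for Python. A hedged sketch (host and token are placeholders; install with `pip install databricks-sdk`):

    from databricks.sdk import WorkspaceClient

    # Placeholders: use the same host and token the UCX installer is picking up.
    w = WorkspaceClient(host="https://<workspace-url>", token="<personal-access-token>")
    # Succeeds and prints your user name if the URL/token pair is valid;
    # raises an authentication error otherwise.
    print(w.current_user.me().user_name)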
Rapha
by New Contributor II
  • 1204 Views
  • 4 replies
  • 0 kudos

Error when cloning repository

Hi all, cloning a DevOps repo worked like a charm every time, but now I get a weird error that I do not understand. What container is meant here, and why would I need one? Thanks and regards, Raphael

[screenshot attachment: Rapha_0-1717503863381.png]
Latest Reply
Rapha
New Contributor II
  • 0 kudos

@Yeshwanth I am sorry, I thought I replied to you on Friday. I cannot upload a .har file; I get the error message: "The file type (.har) is not supported. Valid file types are: jpg, gif, png, pdf."

3 More Replies
jamesdavids
by New Contributor
  • 1079 Views
  • 1 reply
  • 0 kudos

Spark Excel Library Insufficient Privileges

Hi, we have a shared access mode cluster in which we have installed a Maven library for reading Excel files into a Spark DataFrame. When using an account with admin rights everything works fine; however, when we run it as a standard user we always get `o...

[screenshot attachment: jamesdavids_0-1717747232287.png]
Latest Reply
Walter_C
Databricks Employee
  • 0 kudos

Hello James, this issue seems to be related to shared cluster limitations. As per the docs: "Libraries used as JDBC drivers or custom Spark data sources on Unity Catalog-enabled shared compute require ANY FILE permissions." Moving to a single user cluster m...

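If staying on shared access mode is a requirement, the grant the docs refer to is the legacy ANY FILE privilege, which a workspace admin can issue to the affected user or group. A hedged sketch (the principal is hypothetical):

    # Run as a workspace admin; grants the legacy ANY FILE privilege the docs cite.
    spark.sql("GRANT SELECT ON ANY FILE TO `standard.user@example.com`")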
JameDavi_51481
by New Contributor III
  • 827 Views
  • 1 reply
  • 0 kudos

How to escape column comments when adding them programmatically

I would like to add comments to all of our columns, programmatically. The only way I can find to do this is through SQL DDL, e.g. `alter table sometable alter column somecolumn comment 'some comment string'`. However, I want to read this comment stri...

Latest Reply
brockb
Databricks Employee
  • 0 kudos

Hi @JameDavi_51481, I do not believe this is supported currently; please see this line in the Parameter Marker documentation: "You must not reference a parameter marker in a DDL statement..." Ref: https://docs.databricks.com/en/sql/language-manual/sql...

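Since parameter markers are off the table for DDL, the practical workaround is plain string interpolation with manual escaping of the comment literal. A hedged sketch (the helper name is mine; it assumes table and column names come from a trusted source, since only the comment text is escaped):

    def set_column_comment(spark, table: str, column: str, comment: str) -> None:
        # Escape backslashes, then single quotes (doubled), so the comment is a
        # safe Spark SQL string literal.
        escaped = comment.replace("\\", "\\\\").replace("'", "''")
        spark.sql(f"ALTER TABLE {table} ALTER COLUMN {column} COMMENT '{escaped}'")

    set_column_comment(spark, "sometable", "somecolumn", "lots o' quotes, commas")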
amit-agarwal4
by New Contributor II
  • 1072 Views
  • 2 replies
  • 0 kudos

Resolved! how to clone a table using Column and row level mask

I am working on a use case to clone a table which has column- and row-level masking implemented. Can I clone the table? Based on the documentation it is not possible. What other alternatives do I have?

Latest Reply
raphaelblg
Databricks Employee
  • 0 kudos

@amit-agarwal4 At the moment, I'm not aware of any other way than disabling the RLS.

1 More Replies
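A hedged sketch of that workaround (table, column, and function names are all hypothetical): temporarily drop the row filter and column mask, deep-clone the table, then reapply the policies, on the clone too if it needs the same protection.

    # Hypothetical names throughout; requires privileges on the source table.
    spark.sql("ALTER TABLE main.sales.orders DROP ROW FILTER")
    spark.sql("ALTER TABLE main.sales.orders ALTER COLUMN email DROP MASK")

    # Deep clone copies data and metadata to an independent table.
    spark.sql("CREATE TABLE main.sales.orders_clone DEEP CLONE main.sales.orders")

    # Reapply the policies on the source (and on the clone as needed).
    spark.sql("ALTER TABLE main.sales.orders "
              "SET ROW FILTER main.sales.region_filter ON (region)")
    spark.sql("ALTER TABLE main.sales.orders "
              "ALTER COLUMN email SET MASK main.sales.mask_email")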
tejas8196
by New Contributor II
  • 1251 Views
  • 2 replies
  • 0 kudos

DAB not updating zone_id when redeployed

Hey folks, facing an issue with zone_id not getting overridden when redeploying the DAB template to the Databricks workspace. The Databricks job is already deployed and has the "ap-south-1a" zone_id. I wanted to make it "auto", so I have made the changes to th...

[screenshot attachment: Screenshot 2024-06-05 at 12.40.57 AM.png]
Latest Reply
davicd2658
New Contributor II
  • 0 kudos

Hello, thanks for the info. I will try to figure it out.

1 More Replies
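For reference, a hedged sketch of the databricks.yml fragment involved (job and cluster key names are hypothetical); after editing it, the bundle must be redeployed with `databricks bundle deploy` for the job cluster spec in the workspace to pick up the change:

    resources:
      jobs:
        my_job:                      # hypothetical job name
          job_clusters:
            - job_cluster_key: main  # hypothetical key
              new_cluster:
                aws_attributes:
                  zone_id: auto      # was "ap-south-1a"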
JamesY
by New Contributor III
  • 2035 Views
  • 6 replies
  • 0 kudos

Resolved! Retrieve error column/row when writing to sqlmi

Databricks notebook, Scala, .read()/.write(), source data CSV. Got this error when trying to write data to SQL MI. I understand the error indicates that one of the source column values exceeded the length of the database column. But the message is...

Community Platform Discussions
Databricks
Scala
SqlMi
Latest Reply
brockb
Databricks Employee
  • 0 kudos

Thanks JamesY, I'm not familiar with the limitations of the SQL Server `nvarchar` data type, but is there a way that we can filter out the rows that will fail using Spark, such as: spark.read.format("csv").option("header", "true").load("/path/to/csvs")...

5 More Replies
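Expanding that idea into a hedged sketch (the column name and the nvarchar length are hypothetical; note that Spark's length() counts characters while SQL Server's nvarchar(n) counts UTF-16 code units, so the two can differ for characters outside the Basic Multilingual Plane):

    from pyspark.sql import functions as F

    MAX_LEN = 50  # hypothetical: target column is nvarchar(50)
    df = spark.read.format("csv").option("header", "true").load("/path/to/csvs")

    # Rows that would overflow the target column, for inspection/quarantine.
    too_long = df.filter(F.length("description") > MAX_LEN)
    too_long.show()

    # Rows that fit; write these to SQL MI.
    ok = df.filter(F.length("description") <= MAX_LEN)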
