- 1580 Views
- 6 replies
- 0 kudos
Hi, I originally made a Customer Academy account by accident with my company email (my company is a Databricks partner). Then I made an account using my personal email and listed my company email as the partner email for the Partner Academy account. That account ...
Latest Reply
Hi @Maria_fed Thanks again, I have assigned your case to my colleague and you should hear from them soon.
Regards,
Akshay
- 613 Views
- 2 replies
- 1 kudos
Hi, following is the code I am using to ingest the data incrementally (weekly): val ssdf = spark.readStream.schema(schema).format("cloudFiles").option("cloudFiles.format", "parquet").load(sourceUrl).filter(criteriaFilter) val transformedDf = ssdf.tran...
Latest Reply
Danny: Is another process mutating or deleting the incoming files?
by Phani1 • Valued Contributor
- 506 Views
- 1 replies
- 0 kudos
Could you please share best practices for implementing RBAC, security, and privacy controls in Databricks?
Latest Reply
Hi, could you please check https://docs.databricks.com/en/lakehouse-architecture/security-compliance-and-privacy/best-practices.html and see if this helps? Also, please tag @Debayan in your next comment, which will notify me. Thanks!
- 415 Views
- 1 replies
- 0 kudos
Hello everyone, I have built this script to collect Ganglia metrics, but the size of the Ganglia stderr and stdout is 0; it doesn't work. I have put this script in the Workspace because, after the Databricks migration, all init scripts should be placed in the Workspace...
Latest Reply
Hi, is there any error you are getting? Also, please tag @Debayan in your next comment, which will notify me. Thanks!
- 465 Views
- 2 replies
- 0 kudos
I've been working on obtaining DDL at the schema level in the Hive metastore within GCP-hosted Databricks. I've implemented Python code that generates SQL files in the dbfs/temp directory. However, when running the code, I'm encountering a "file path n...
Latest Reply
Hi, a code snippet along with the full error message may help determine the issue; also, considering the above points may work as a fix.
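A frequent cause of a "file path not found" error when writing files from Python on Databricks is that the parent directory does not exist, or that a `dbfs:/` URI is used where a local `/dbfs/...` path is expected. A minimal sketch of the first fix (the function name and paths are hypothetical, not from the original post):

```python
import os

def write_sql_file(path: str, ddl: str) -> str:
    """Write DDL text to `path`, creating missing parent directories first."""
    parent = os.path.dirname(path)
    if parent:
        # open() does not create intermediate directories; a missing parent
        # surfaces as a "file path not found"-style error.
        os.makedirs(parent, exist_ok=True)
    with open(path, "w") as f:
        f.write(ddl)
    return path
```

On Databricks, files written through the local filesystem API typically land under `/dbfs/...` rather than `dbfs:/...` (an assumption worth verifying against your workspace's DBFS mount).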
- 657 Views
- 2 replies
- 0 kudos
Hi everyone, I am unable to see the permission button in SQL warehouse to provide access to other users. I have admin rights, and our Databricks subscription is Premium.
Latest Reply
Hi, could you please provide a screenshot of the SQL warehouse? Also, you can go through: https://docs.databricks.com/en/security/auth-authz/access-control/sql-endpoint-acl.html. Also, please tag @Debayan with your next comment, which will notify me. Th...
- 333 Views
- 1 replies
- 1 kudos
Hi, does anyone know how I can monitor the cost of SQL Serverless? I'm using Databricks on Azure, and I'm not sure where to find the cost generated by compute resources hosted on Databricks.
Latest Reply
Hi, you can calculate the pricing at https://www.databricks.com/product/pricing/databricks-sql and also see https://azure.microsoft.com/en-in/pricing/details/databricks/#:~:text=Sign%20in%20to%20the%20Azure,asked%20questions%20about%20Azure%20pricing. For A...
- 310 Views
- 1 replies
- 0 kudos
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::<s3-bucket-name>" ] }, { "Effect": "Allow", "Action": [ "s3:Pu...
Latest Reply
Hi @Cryptocurentcyc, The version in the given JSON is "Privilege Model version 1.0". The statement in the JSON is about upgrading to Privilege Model version 1.0 to take advantage of privilege inheritance and new features. It also highlights the diffe...
by Bagger • New Contributor II
- 1366 Views
- 2 replies
- 0 kudos
Hi, we need to monitor Databricks jobs, and we have made a setup where we are able to get the Prometheus metrics; however, we are lacking an overview of which metrics refer to what. Namely, we need to monitor the following: failed jobs: is a job failed; tabl...
Latest Reply
Hi @Bagger, to monitor the metrics you specified, you can use a combination of Databricks features and Prometheus:
1. **Failed Jobs:** You can monitor failed jobs using Databricks' built-in job monitoring capabilities. The status of each job run, inc...
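For the failed-jobs point, one approach (sketched here under the assumption that you poll the Jobs API 2.1 `runs/list` endpoint, whose finished runs carry a `state.result_state` field) is to count terminal failures and then expose that number through your Prometheus exporter:

```python
def count_failed_runs(runs: list) -> int:
    # A run has failed when its terminal result_state is "FAILED";
    # runs still in progress have no result_state yet and are skipped.
    return sum(
        1 for run in runs
        if run.get("state", {}).get("result_state") == "FAILED"
    )

# Illustrative payload shaped like a Jobs API 2.1 runs/list response
# (not real API output).
sample_runs = [
    {"run_id": 101, "state": {"result_state": "SUCCESS"}},
    {"run_id": 102, "state": {"result_state": "FAILED"}},
    {"run_id": 103, "state": {"life_cycle_state": "RUNNING"}},
]
```

The resulting count could then be set on a Prometheus gauge via the client library of your choice; verify the field names against the Jobs API version your workspace exposes.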
- 1025 Views
- 1 replies
- 0 kudos
Hi, I am using the latest version of PySpark and I am trying to connect to a remote cluster with runtime 13.3. My doubts are: - Do I need Databricks Unity Catalog enabled? - My cluster is already on a Shared access mode policy, so what other configur...
Latest Reply
Hi, is your workspace already Unity Catalog enabled? Also, did you go through the considerations for enabling a workspace for Unity Catalog? https://docs.databricks.com/en/data-governance/unity-catalog/enable-workspaces.html#considerations-before-yo...
- 3663 Views
- 6 replies
- 1 kudos
Hi Team, getting the below error while creating a table with a primary key: "Table constraints are only supported in Unity Catalog." Table script: CREATE TABLE persons(first_name STRING NOT NULL, last_name STRING NOT NULL, nickname STRING, CONSTRAINT persons_...
Latest Reply
Hi, this needs further investigation; could you please raise a support case with Databricks?
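As the error message indicates, PRIMARY KEY and other table constraints require the table to live in Unity Catalog rather than hive_metastore. A sketch of the same DDL against a three-level Unity Catalog name (the catalog and schema names below are hypothetical):

```sql
-- Works only on a Unity Catalog table (catalog.schema.table), not in
-- hive_metastore; the PRIMARY KEY constraint is informational, not enforced.
CREATE TABLE main.default.persons (
  first_name STRING NOT NULL,
  last_name  STRING NOT NULL,
  nickname   STRING,
  CONSTRAINT persons_pk PRIMARY KEY (first_name, last_name)
);
```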
- 718 Views
- 1 replies
- 0 kudos
I tried to start a cluster that I had started 7 times before, and it gave me this error: Cloud provider is undergoing a transient resource throttling. This is retryable. 1 out of 2 pods scheduled. Failed to launch cluster in kubernetes in 1800 seconds...
Latest Reply
Hi, the error "GCE out of resources" typically means that Google Compute Engine is out of resources, i.e., out of nodes (it can be a quota issue, or node issues in that particular GCP region). Could you please raise a Google support case on thi...
- 503 Views
- 1 replies
- 0 kudos
I tried to start a cluster that I had started 7 times before, and it gave me this error: Cloud provider is undergoing a transient resource throttling. This is retryable. 1 out of 2 pods scheduled. Failed to launch cluster in kubernetes in 1800 seconds...
Latest Reply
Hi, the error "GCE out of resources" typically means that Google Compute Engine is out of resources, i.e., out of nodes (it can be a quota issue, or node issues in that particular GCP region). Could you please raise a Google support case on thi...
- 1100 Views
- 1 replies
- 1 kudos
Does the new 'Run If' feature, which allows you to run tasks conditionally, lack an 'ALWAYS' option, in order to execute the task both when the dependencies succeed and when they fail?
Latest Reply
You can choose the 'All Done' option to run the task in both scenarios.
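In the Jobs API, this option maps to the task-level `run_if` field; a sketch of a task that runs regardless of whether its dependency succeeded or failed (the task keys below are hypothetical):

```json
{
  "tasks": [
    { "task_key": "upstream" },
    {
      "task_key": "cleanup",
      "depends_on": [ { "task_key": "upstream" } ],
      "run_if": "ALL_DONE"
    }
  ]
}
```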
by kp12 • New Contributor II
- 2409 Views
- 3 replies
- 1 kudos
Hello, I'm following the instructions in this article to connect to ADLS Gen2 using an Azure service principal. I can access the service principal's app ID and secret via a Databricks Key Vault-backed secret scope. However, this doesn't work for directory-id, and I...
Latest Reply
Hi @Kaniz, thanks for the prompt reply. As per the document, the syntax is the text highlighted in red below for accessing keys from a secret scope in the Spark config. I used the same for the app ID too, and that works. But if I use the same syntax for the tenant...
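For reference, the usual Spark config shape for service-principal access to ADLS Gen2 is sketched below. Note that a `{{secrets/<scope>/<key>}}` reference is substituted only when it is the entire config value, so the tenant (directory) ID cannot be interpolated into the middle of the OAuth endpoint URL; one workaround is to store the full endpoint URL itself as a secret. The scope and key names here are hypothetical:

```
fs.azure.account.auth.type.<storage-account>.dfs.core.windows.net OAuth
fs.azure.account.oauth.provider.type.<storage-account>.dfs.core.windows.net org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider
fs.azure.account.oauth2.client.id.<storage-account>.dfs.core.windows.net {{secrets/kv-scope/sp-app-id}}
fs.azure.account.oauth2.client.secret.<storage-account>.dfs.core.windows.net {{secrets/kv-scope/sp-secret}}
fs.azure.account.oauth2.client.endpoint.<storage-account>.dfs.core.windows.net {{secrets/kv-scope/sp-token-endpoint}}
```

Here `sp-token-endpoint` would hold the full `https://login.microsoftonline.com/<directory-id>/oauth2/token` URL as a single secret value.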