Community Discussions

Forum Posts

Avvar2022
by New Contributor III
  • 1124 Views
  • 4 replies
  • 1 kudos

Unity Catalog enabled workspace - is there any way to disable workflow/job creation for certain users?

Currently, in a Unity Catalog enabled workspace, users with "Workspace access" can create workflows/jobs; there is no access control available to restrict users from creating jobs/workflows. Use case: in production there is no need for users, data enginee...

Latest Reply
c3
New Contributor II
  • 1 kudos

We have the "allow unrestricted cluster creation" box deselected for all groups and still have users creating jobs in production, so we are looking for a way to disable this. I cannot believe this isn't an option. Did anyone find a solution for this?

3 More Replies
DataYoga
by New Contributor
  • 1375 Views
  • 2 replies
  • 0 kudos

Informatica ETLs

I'm delving into the challenges of ETL transformations, particularly moving from traditional platforms like Informatica to Databricks. Given the complexity of legacy ETLs, I'm curious about the approaches others have taken to integrate these with Dat...

Latest Reply
owene
New Contributor II
  • 0 kudos

@Kaniz Do you know if Informatica Cloud Modernization can convert mappings into Delta Live Tables? Do we have to use Informatica Cloud for this, or can we use it as a one-time migration and maintain the artifacts in Databricks? Alternatively, we are l...

1 More Reply
Nandhini_Kumar
by New Contributor III
  • 429 Views
  • 2 replies
  • 0 kudos

How to identify worker and driver instances in the AWS console for a Databricks cluster?

For AWS Databricks, I have configured 1 worker and 1 driver node with the same node type. In the AWS console, all details are the same for the two instances; only the instance IDs differ. How can I identify which instance ID is for the worker and which one is for the d...

Latest Reply
Nandhini_Kumar
New Contributor III
  • 0 kudos

Hi @Kaniz, thanks for your response. No instance pool ID was set when configuring the cluster, so how can I differentiate them? Could you suggest an alternative way to find the driver and worker instance IDs in the AWS console?

1 More Reply
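
One hedged way to tell the two instances apart (an assumption, not an official Databricks feature): Spark reports the driver's private IP, which you can match against the "Private IPv4 address" column in the EC2 console; the remaining instance is the worker.

```python
# Minimal sketch: print the driver's private IP from a notebook, then match
# it against the instances' private IPs in the AWS EC2 console.
# `spark` is predefined in Databricks notebooks; getOrCreate() covers plain PySpark.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
driver_ip = spark.conf.get("spark.driver.host")  # the driver's private IP
print(f"Driver private IP: {driver_ip}")
```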
Ha2001
by New Contributor
  • 821 Views
  • 2 replies
  • 1 kudos

Databricks Repos API Limitations

Hi, I have started using Databricks recently, and I'm not able to find the right solution in the documentation. I have linked multiple repos in my Databricks workspace in the Repos folder, and I want to update the repos from the remote Azure DevOps reposit...

azure devops
Databricks
REST API
Latest Reply
Ayushi_Suthar
Honored Contributor
  • 1 kudos

Hi @Ha2001, good day! The Databricks API has a limit of 10 requests per second for the combined /repos/* requests in a workspace. You can check the documentation below for the API limits: https://docs.databricks.com/en/resources/limits.html#:~:text=Git%20fold...

1 More Reply
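
For anyone scripting updates across many linked repos, here is a hedged sketch of staying under that limit; the host, token, repo IDs, and branch below are placeholders, not values from this thread.

```python
# Hedged sketch: pull several Databricks repos to a branch's latest commit
# while staying under the 10 requests/sec combined limit for /repos/*.
import time
import requests

HOST = "https://<your-workspace>.azuredatabricks.net"  # placeholder
TOKEN = "<personal-access-token>"                      # placeholder
REPO_IDS = [101, 102, 103]                             # placeholder repo IDs

for repo_id in REPO_IDS:
    for attempt in range(3):
        resp = requests.patch(
            f"{HOST}/api/2.0/repos/{repo_id}",
            headers={"Authorization": f"Bearer {TOKEN}"},
            json={"branch": "main"},   # pull the latest commit on this branch
        )
        if resp.status_code != 429:    # 429 = rate limited
            break
        time.sleep(2 ** attempt)       # back off, then retry
    resp.raise_for_status()
    time.sleep(0.2)                    # ~5 requests/sec, safely under the limit
```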
jcozar
by Contributor
  • 1718 Views
  • 4 replies
  • 1 kudos

Resolved! Spark streaming query stops after code exception in notebook since 14.3

Hi! I am experiencing something that I cannot find in the documentation: in Databricks, using Databricks Runtime 13.x, when I start a streaming query (using the .start method), it creates a new query, and while it is running I can execute other code in...

Latest Reply
Lakshay
Esteemed Contributor
  • 1 kudos

You can use the help portal: https://help.databricks.com/s/

3 More Replies
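
For context, a minimal sketch of the pattern the question describes on DBR 13.x; the rate source and memory sink are stand-ins, not the poster's code.

```python
# Minimal sketch: .start() returns immediately and the query runs in the
# background, so subsequent notebook cells can still execute.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # predefined in Databricks notebooks

stream_df = spark.readStream.format("rate").load()  # toy streaming source
query = (
    stream_df.writeStream
    .format("memory")        # in-memory sink, for testing only
    .queryName("rate_sink")
    .start()                 # non-blocking; returns a StreamingQuery
)

# ...other code can run here while the stream is active...
query.stop()
```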
MohsenJ
by New Contributor III
  • 335 Views
  • 1 reply
  • 1 kudos

Resolved! How is model drift calculated when the baseline table has no timestamp column?

I'm trying to understand how Databricks computes model drift when a baseline table is available. What I understood from the documentation is that Databricks processes both the primary and baseline tables according to the specified granularities in th...

Latest Reply
Kaniz
Community Manager
  • 1 kudos

Hi @MohsenJ, Let’s delve into how Databricks handles model drift calculation when the baseline table lacks a timestamp column. Baseline Table without Timestamp: When your baseline table doesn’t have a timestamp column, Databricks employs best-eff...

AniP
by New Contributor II
  • 191 Views
  • 1 reply
  • 0 kudos

Unable to create workspace using quickstart method

Hi, I created my first workspace successfully using the AWS Quick Start method. Now I want to create one more workspace, but the Quick Start method is not asking for the IAM role and bucket name, and the CloudFormation stack is failing.

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @AniP, it seems like you're encountering some issues while creating another workspace using the AWS Quick Start method. Let's troubleshoot this step by step: IAM Role: By default, an AWS CloudFormation stack runs with the permissions of th...

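
To pinpoint which resource is failing, one hedged option is to dump the stack's FAILED events; the stack name and region below are placeholders, and this assumes boto3 plus AWS credentials with CloudFormation read access.

```python
# Hedged sketch: list the FAILED events of the Quick Start CloudFormation
# stack to find the root cause (missing IAM role, bucket name, etc.).
import boto3

cf = boto3.client("cloudformation", region_name="us-east-1")  # assumed region
events = cf.describe_stack_events(
    StackName="databricks-workspace-stack"  # placeholder stack name
)
for event in events["StackEvents"]:
    if "FAILED" in event.get("ResourceStatus", ""):
        print(event["LogicalResourceId"], "->", event.get("ResourceStatusReason"))
```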
GlennStrycker2
by New Contributor II
  • 274 Views
  • 1 reply
  • 0 kudos

Why so many different domains and accounts?

I've lost count of how many different domains and accounts Databricks requires me to use for their services. Every domain requires its own account username, password, etc., and nothing is synced. I can't even keep track of which email addre...

Latest Reply
GlennStrycker2
New Contributor II
  • 0 kudos

Plus: customer-academy.databricks.com, accounts.cloud.databricks.com, databricks.my.site.com

HiraNisar
by New Contributor
  • 217 Views
  • 1 reply
  • 2 kudos

AutoML in production

I have a workflow in Databricks with an AutoML pipeline in it. I want to deploy that pipeline to production using the shared cluster, but since AutoML is not compatible with shared clusters, what can be the workaround? (Is it ...

Latest Reply
Kaniz
Community Manager
  • 2 kudos

Hi @HiraNisar, deploying an AutoML pipeline in production while utilizing a shared cluster can be a bit tricky, but there are some workarounds you can consider: Dedicated Cluster for AutoML: Create a dedicated single-user cluster specifically for...

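
One way to express that dedicated-cluster workaround is a Jobs API job-cluster spec like the hedged sketch below; the runtime version, node type, and worker count are placeholder assumptions, and the key points are the ML runtime plus single-user access mode.

```python
# Hedged sketch: give only the AutoML task its own single-user job cluster,
# while the rest of the workflow keeps using the shared cluster.
automl_job_cluster = {
    "job_cluster_key": "automl_cluster",
    "new_cluster": {
        "spark_version": "14.3.x-cpu-ml-scala2.12",  # ML runtime (assumed version)
        "node_type_id": "i3.xlarge",                 # placeholder instance type
        "num_workers": 1,                            # placeholder size
        "data_security_mode": "SINGLE_USER",         # AutoML-compatible access mode
    },
}
```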
databricks0601
by New Contributor II
  • 249 Views
  • 1 reply
  • 0 kudos

Databricks certification Voucher code

Does anyone have a voucher code they do not intend to use and are willing to share? I recently lost my job, so times are hard. I really want to put my learning to use and take the exam. Thank you.

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @databricks0601! Thank you for reaching out to the community. We understand that times are tough, and we want to support you in any way we can. To expedite your request for a voucher code, please list your concerns on our ticketing porta...

yatharth
by New Contributor III
  • 157 Views
  • 1 reply
  • 0 kudos

Unable to build LZO-codec

Hi Community, I am trying to create the LZO codec in my DBFS using https://docs.databricks.com/en/_extras/notebooks/source/init-lzo-compressed-files.html, but I am facing the error: Cloning into 'hadoop-lzo'... The JAVA_HOME environment variable is not defined c...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @yatharth, It appears that you’re encountering an issue related to the LZO codec while working with Databricks and Hadoop. Let’s address this step by step: JAVA_HOME Environment Variable: The error message indicates that the JAVA_HOME environ...

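
A hedged sketch of the usual fix, run from a notebook: point JAVA_HOME at the cluster's JDK before building. The JDK path is an assumption for Databricks runtimes (verify with %sh ls /usr/lib/jvm), and this assumes the upstream repo's Maven build.

```python
# Hedged sketch: export JAVA_HOME so the hadoop-lzo build can find the JDK.
# Paths and build commands are assumptions; check your cluster and the repo.
import os
import subprocess

os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"  # assumed JDK path

subprocess.run(
    ["git", "clone", "https://github.com/twitter/hadoop-lzo.git"], check=True
)
subprocess.run(
    ["mvn", "clean", "package", "-DskipTests"],
    cwd="hadoop-lzo", check=True, env=os.environ,
)
```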
Leon_K
by New Contributor
  • 343 Views
  • 1 reply
  • 0 kudos

How to add a value to the Comment column in the Databricks Catalog view for a foreign table (Azure SQL)

Hi. I'm struggling to add a description by script to the Comment column within the Catalog view in Databricks, particularly for foreign/external tables sourced from Azure SQL. I have no issue doing that for Delta tables, or for information schema columns f...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Leon_K, Adding comments to columns within the catalog view in Databricks can be quite useful for documentation and understanding your data. While it’s straightforward for Delta tables, handling foreign/external tables sourced from Azure SQL requi...

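
For reference, the statements that work for Delta tables look like the hedged sketch below (all names are placeholders); whether they persist for foreign tables backed by Azure SQL is exactly what this thread is asking.

```python
# Hedged sketch: table- and column-level comments via SQL, issued from Python.
# Known to work for Unity Catalog Delta tables; behavior for foreign tables
# sourced from Azure SQL is the open question here.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # predefined in Databricks notebooks

spark.sql(
    "COMMENT ON TABLE my_catalog.my_schema.my_table "
    "IS 'Orders replicated from Azure SQL'"
)
spark.sql(
    "ALTER TABLE my_catalog.my_schema.my_table "
    "ALTER COLUMN order_id COMMENT 'Primary key'"
)
```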
Pbr
by New Contributor
  • 360 Views
  • 1 reply
  • 0 kudos

How to save a catalog table as a Spark or pandas DataFrame?

Hello, I have a table in my catalog, and I want to have it as a pandas or Spark DataFrame. I was using this code to do that before, but I don't know what happened recently that the code is not working anymore: from pyspark.sql import SparkSession spark = S...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Pbr, To work around this, you can create a temporary view using SQL in a separate cell (e.g., a %%sql cell) and then reference that view from your Python or Scala code. Here’s how you can achieve this: First, create a temporary view for your tab...

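
A minimal sketch of both routes, with a placeholder three-level table name: the temp-view pattern from the reply, and the direct spark.table call.

```python
# Minimal sketch: read a catalog table as a Spark DataFrame, then convert
# to pandas. The table name is a placeholder.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # predefined in Databricks notebooks

# Temp-view pattern from the reply:
spark.sql(
    "CREATE OR REPLACE TEMP VIEW my_view AS "
    "SELECT * FROM my_catalog.my_schema.my_table"
)
sdf = spark.table("my_view")                        # Spark DataFrame

# Direct route:
sdf = spark.table("my_catalog.my_schema.my_table")  # Spark DataFrame
pdf = sdf.toPandas()                                # pandas DataFrame (collects to driver)
```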
Kinger
by New Contributor
  • 940 Views
  • 2 replies
  • 0 kudos

Associating a Git Credential with a Service Principal using Terraform Provider (AWS)

I am attempting to create a Databricks Repo in a workspace via Terraform. I would like the Repo and the associated Git Credential to be associated with a Service Principal. In my initial run, the Terraform provider is associated with the user defined ...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Kinger, the Databricks Terraform provider does not support creating a Git credential associated with a Service Principal (SP) and associating the SP with the Repo creation. However, you can create a Service Principal and associate it with a Git ...

1 More Reply
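
A hedged sketch of that workaround via the REST API rather than Terraform: authenticate as the service principal so the Git credential is created under its identity. The host, token, username, and PAT are placeholders.

```python
# Hedged sketch: create a Git credential owned by the service principal by
# calling the Git Credentials API with a token issued for that SP.
import requests

HOST = "https://<workspace>.cloud.databricks.com"  # placeholder
SP_TOKEN = "<token-for-the-service-principal>"     # placeholder

resp = requests.post(
    f"{HOST}/api/2.0/git-credentials",
    headers={"Authorization": f"Bearer {SP_TOKEN}"},
    json={
        "git_provider": "azureDevOpsServices",          # adjust to your Git provider
        "git_username": "<git-username>",               # placeholder
        "personal_access_token": "<azure-devops-pat>",  # placeholder
    },
)
resp.raise_for_status()
print(resp.json())
```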
augustsc
by New Contributor II
  • 808 Views
  • 6 replies
  • 0 kudos

Running dbt in a Databricks task, the dbt_output from the Databricks Jobs API is empty

I'm running a scheduled workflow with a dbt task on Azure Databricks. We want to export the dbt output from the dbt task to a storage container for our Slim CI setup and data observability. The issue is that the Databricks API (/api/2.1/jobs/runs/ge...

Latest Reply
d_strahl
New Contributor II
  • 0 kudos

We're having the same issue: I get the output from a task but not the dbt_output. We're running 13.3 LTS and dbt 1.7.11.

5 More Replies
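
For anyone debugging the same thing, a hedged sketch of the call follows; host, token, and run ID are placeholders. Note that /api/2.1/jobs/runs/get-output takes a task run ID, not the parent multi-task job run ID, which is a common reason the output comes back empty.

```python
# Hedged sketch: fetch dbt_output for a dbt *task* run. Find the task run_id
# in the "tasks" array returned by /api/2.1/jobs/runs/get for the job run.
import requests

HOST = "https://<workspace>.azuredatabricks.net"  # placeholder
TOKEN = "<personal-access-token>"                 # placeholder

resp = requests.get(
    f"{HOST}/api/2.1/jobs/runs/get-output",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"run_id": 123456},                    # placeholder task run id
)
resp.raise_for_status()
print(resp.json().get("dbt_output"))
```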