Community Discussions
Forum Posts

arkiboys
by Contributor
  • 466 Views
  • 3 replies
  • 1 kudos

Resolved! reading mount points

Hello, previously I was able to run the following command in Databricks to see a list of the mount points, but the system no longer seems to accept it and I get the error below. Any thoughts on how to get a list of the mount points? Thank you d...

Latest Reply
Kaniz
Community Manager
  • 1 kudos

Hi @arkiboys, To retrieve a list of mount points in Azure Databricks, you can use the following methods: Using Databricks Utilities (dbutils): In a Python Notebook, execute the command dbutils.fs.mounts(). This will display all the mount points w...
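A minimal sketch of the approach described in the reply, assuming the code runs on a Databricks runtime where `dbutils` is provided; the formatting helper itself is plain Python and works anywhere:

```python
# Sketch: list mount points via dbutils.fs.mounts() and print them in a
# readable form. format_mounts accepts either MountInfo-style objects
# (with .mountPoint/.source attributes) or plain (mount_point, source)
# pairs, so it can be exercised outside Databricks as well.
def format_mounts(mounts):
    lines = []
    for m in mounts:
        point = getattr(m, "mountPoint", None) or m[0]
        source = getattr(m, "source", None) or m[1]
        lines.append(f"{point} -> {source}")
    return lines

# In a Databricks Python notebook:
# for line in format_mounts(dbutils.fs.mounts()):
#     print(line)
```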

2 More Replies
Avvar2022
by New Contributor III
  • 839 Views
  • 4 replies
  • 1 kudos

Unity catalog enabled workspace -Is there any way to disable workflow/job creation for certain users

Currently, in a Unity Catalog-enabled workspace, users with "Workspace access" can create workflows/jobs; there is no access control available to restrict users from creating them. Use case: in production there is no need for users, data enginee...

Latest Reply
c3
New Contributor II
  • 1 kudos

We have the "allow unrestricted cluster creation" box deselected for all groups and have users creating jobs in production so we are looking for a way to disable this.  I cannot believe this isn't an option. Did anyone find a solution for this?

3 More Replies
DataYoga
by New Contributor
  • 1305 Views
  • 2 replies
  • 0 kudos

Informatica ETLs

I'm delving into the challenges of ETL transformations, particularly moving from traditional platforms like Informatica to Databricks. Given the complexity of legacy ETLs, I'm curious about the approaches others have taken to integrate these with Dat...

Latest Reply
owene
New Contributor II
  • 0 kudos

@Kaniz Do you know if Informatica Cloud Modernization can convert mappings into Delta Live Tables? Do we have to keep using Informatica Cloud for this, or can we use it as a one-time migration and maintain the artifacts in Databricks? Alternatively, we are l...

1 More Replies
dhanshri
by New Contributor
  • 204 Views
  • 1 reply
  • 0 kudos

Tracking File Arrivals in Nested Folders Using Databricks File Arrival Trigger

Hi Team, I'm currently exploring a file arrival trigger with Databricks, but my data is organized into nested folders representing various sources. For instance:

source1
  |-- file1
      |-- file.csv
  |-- file2
      |-- file.csv

My goal is to dete...

Labels: Azure Databricks, Databricks
Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @dhanshri, You can use file arrival triggers to automatically trigger a run of your Databricks job when new files arrive in your specified storage location. This feature is particularly useful when data arrives on an irregular schedule, making sch...
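Since file arrival triggers watch a storage location, one way to handle the nested layout from the question is to point the trigger at the top-level folder and resolve the per-source files when the job runs. A hedged sketch — the `list_fn` callback is an illustrative abstraction, not a Databricks API; on Databricks it could wrap `dbutils.fs.ls`:

```python
# Sketch: recursively collect files with a given suffix under nested
# source folders. list_fn(path) must return (child_path, is_dir) pairs;
# adapting dbutils.fs.ls output to that shape is left to the caller.
def find_files(root, list_fn, suffix=".csv"):
    found = []
    for path, is_dir in list_fn(root):
        if is_dir:
            found.extend(find_files(path, list_fn, suffix))
        elif path.endswith(suffix):
            found.append(path)
    return found
```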

Nandhini_Kumar
by New Contributor II
  • 338 Views
  • 2 replies
  • 0 kudos

How to identify Worker and Driver instance in AWS console for databricks instance?

For AWS Databricks, I have configured one worker and one driver node with the same node type. In the AWS console, all details are the same for the two instances; only the instance IDs differ. How can I identify which instance ID is the worker and which one is the d...

Latest Reply
Nandhini_Kumar
New Contributor II
  • 0 kudos

Hi @Kaniz, thanks for your response. There is no instance pool ID configured on the cluster, so how will I be able to differentiate them? Could you give an alternative way of finding the driver instance ID and worker instance ID in the AWS console?

1 More Replies
Ha2001
by New Contributor
  • 725 Views
  • 2 replies
  • 1 kudos

Databricks Repos API Limitations

Hi, I have started using Databricks recently, and I'm not able to find the right solution in the documentation. I have linked multiple repos in the Repos folder of my Databricks workspace, and I wanted to update the repos with the remote Azure DevOps reposit...

Labels: azure devops, Databricks, REST API
Latest Reply
Ayushi_Suthar
Honored Contributor
  • 1 kudos

Hi @Ha2001, good day! The Databricks API has a limit of 10 requests per second for the combined /repos/* requests in the workspace. You can check the documentation for the API limit: https://docs.databricks.com/en/resources/limits.html#:~:text=Git%20fold...
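Given that limit, bulk repo updates are best wrapped in retry logic. A sketch assuming the API signals rate limiting with HTTP 429; the `call_fn` callback stands in for the actual Repos API request (e.g. a PATCH to `/api/2.0/repos/{repo_id}`) and is an illustrative abstraction:

```python
import time

# Sketch: retry a rate-limited API call with exponential backoff.
# call_fn() should perform one request and return (status_code, body).
def call_with_backoff(call_fn, max_retries=5, base_delay=0.5):
    status, body = call_fn()
    for attempt in range(max_retries):
        if status != 429:                        # not rate limited: done
            return status, body
        time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
        status, body = call_fn()
    return status, body
```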

1 More Replies
jcozar
by Contributor
  • 1497 Views
  • 4 replies
  • 1 kudos

Resolved! Spark streaming query stops after code exception in notebook since 14.3

Hi! I am experiencing something that I cannot find in the documentation: in Databricks, using Databricks Runtime 13.x, when I start a streaming query (using the .start method), it creates a new query, and while it is running I can execute other code in...

Latest Reply
Lakshay
Esteemed Contributor
  • 1 kudos

You can use the help portal: https://help.databricks.com/s/

3 More Replies
MohsenJ
by New Contributor III
  • 244 Views
  • 1 reply
  • 1 kudos

Resolved! How is model drift calculated when the baseline table has no timestamp column?

I'm trying to understand how Databricks computes model drift when the baseline table is available. What I understood from the documentation is that Databricks processes both the primary and the baseline tables according to the specified granularities in th...

Latest Reply
Kaniz
Community Manager
  • 1 kudos

Hi @MohsenJ, Let’s delve into how Databricks handles model drift calculation when the baseline table lacks a timestamp column. Baseline Table without Timestamp: When your baseline table doesn’t have a timestamp column, Databricks employs best-eff...

AniP
by New Contributor II
  • 150 Views
  • 1 reply
  • 0 kudos

Unable to create workspace using quickstart method

Hi, I created my first workspace successfully using the AWS Quickstart method. Now I want to create one more workspace, but the Quickstart method is not asking for an IAM role and bucket name, and the CloudFormation stack is failing.

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @AniP, It seems like you’re encountering some issues while creating another workspace using the AWS Quick Start method. Let’s troubleshoot this step by step: IAM Role: By default, an Amazon CloudFormation stack runs with the permissions of th...

GlennStrycker2
by New Contributor II
  • 181 Views
  • 1 reply
  • 0 kudos

Why so many different domains and accounts?

I've lost count of how many different domains and accounts Databricks is requiring for me to use their services.  Every domain is requiring its own account username, password, etc., and nothing is synced.  I can't even keep track of which email addre...

Latest Reply
GlennStrycker2
New Contributor II
  • 0 kudos

Plus:
  • customer-academy.databricks.com
  • accounts.cloud.databricks.com
  • databricks.my.site.com

HiraNisar
by New Contributor
  • 142 Views
  • 1 reply
  • 2 kudos

AutoML in production

I have a workflow in Databricks with an AutoML pipeline in it. I want to deploy that pipeline in production, but I want to use the shared cluster there. Since AutoML is not compatible with shared clusters, what can be the workaround? (Is it ...

Latest Reply
Kaniz
Community Manager
  • 2 kudos

Hi @HiraNisar ,Deploying an AutoML pipeline in production while utilizing a shared cluster can be a bit tricky, but there are some workarounds you can consider: Dedicated Cluster for AutoML: Create a dedicated single-user cluster specifically for...

databricks0601
by New Contributor II
  • 148 Views
  • 1 reply
  • 0 kudos

Databricks certification Voucher code

Does anyone have a voucher code they do not intend to use and are willing to share? I recently lost my job, so times are hard. I really want to put my learning to use and take the exam. Thank you.

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @databricks0601, thank you for reaching out to the community. We understand that times are tough, and we want to support you in any way we can. To expedite your request for a voucher code, please list your concerns on our ticketing porta...

yatharth
by New Contributor III
  • 97 Views
  • 1 reply
  • 0 kudos

Unable to build LZO-codec

Hi Community, I am trying to create the LZO codec in my DBFS using https://docs.databricks.com/en/_extras/notebooks/source/init-lzo-compressed-files.html but I am facing this error: Cloning into 'hadoop-lzo'... The JAVA_HOME environment variable is not defined c...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @yatharth, It appears that you’re encountering an issue related to the LZO codec while working with Databricks and Hadoop. Let’s address this step by step: JAVA_HOME Environment Variable: The error message indicates that the JAVA_HOME environ...
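One common way to resolve that error is to derive JAVA_HOME from the `java` binary already on the cluster's PATH before the build runs. A sketch under the assumption that `java` lives at `$JAVA_HOME/bin/java`, which is the usual JDK layout:

```python
import os
import shutil

# Sketch: given the path to a java binary (.../jdk/bin/java), walk up
# two directories to get the JDK root for use as JAVA_HOME.
def derive_java_home(java_path):
    real = os.path.realpath(java_path)   # resolve symlinks where the path exists
    return os.path.dirname(os.path.dirname(real))

# Before cloning/building hadoop-lzo in the notebook:
# java = shutil.which("java")
# if java and "JAVA_HOME" not in os.environ:
#     os.environ["JAVA_HOME"] = derive_java_home(java)
```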

Leon_K
by New Contributor
  • 162 Views
  • 1 reply
  • 0 kudos

How to Add value to Comment Column in Databricks Catalog View for Foreign table (Azure SQL)

Hi. I'm struggling to add a description by script to the comment column within the catalog view in Databricks, particularly for foreign/external tables sourced from Azure SQL. I have no issue doing this for Delta tables, nor for information schema columns f...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Leon_K, Adding comments to columns within the catalog view in Databricks can be quite useful for documentation and understanding your data. While it’s straightforward for Delta tables, handling foreign/external tables sourced from Azure SQL requi...
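For tables that do accept comments, the statement form is standard Databricks SQL; whether a given foreign table accepts it depends on the connection, as discussed above. A small sketch that builds the statement — the table and column names in the usage line are placeholders:

```python
# Sketch: build an ALTER TABLE ... ALTER COLUMN ... COMMENT statement.
# Single quotes in the comment are doubled, the usual SQL escaping rule.
def comment_statement(table, column, comment):
    escaped = comment.replace("'", "''")
    return f"ALTER TABLE {table} ALTER COLUMN {column} COMMENT '{escaped}'"

# In a notebook:
# spark.sql(comment_statement("main.sales.orders", "id", "Primary key"))
```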

Pbr
by New Contributor
  • 239 Views
  • 1 reply
  • 0 kudos

How to save a catalog table as a spark or pandas dataframe?

Hello, I have a table in my catalog, and I want to have it as a pandas or Spark DataFrame. I was using this code to do that before, but I don't know what happened recently that the code is not working anymore: from pyspark.sql import SparkSession spark = S...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Pbr, To work around this, you can create a temporary view using SQL in a separate cell (e.g., a %sql cell) and then reference that view from your Python or Scala code. Here's how you can achieve this: First, create a temporary view for your tab...
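Independently of that workaround, reading a catalog table usually works with `spark.table` followed by `toPandas()`. A sketch — the three-level table name is a placeholder, and `spark` is the session a Databricks notebook provides:

```python
# Sketch: load a Unity Catalog table as a Spark DataFrame and convert it
# to pandas. The helper that assembles the three-level name is plain
# Python; the spark calls require a Databricks (or local Spark) session.
def qualified_name(catalog, schema, table):
    return f"{catalog}.{schema}.{table}"

# In a notebook:
# sdf = spark.table(qualified_name("main", "default", "my_table"))  # Spark DataFrame
# pdf = sdf.toPandas()                                              # pandas DataFrame
```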
