Community Discussions

Forum Posts

Carpender
by New Contributor II
  • 501 Views
  • 2 replies
  • 1 kudos

PowerBI Tips

Does anyone have any tips for using Power BI on top of Databricks? Any best practices you know of, or roadblocks you have run into that should be avoided? Thanks.

Latest Reply
artsheiko
Valued Contributor III
  • 1 kudos

Hey, use Partner Connect to establish a connection to Power BI. Consider using Databricks SQL Serverless warehouses for the best user experience and performance (see Intelligent Workload Management, aka auto-scaling and query queuing, remote result cache, ...

1 More Replies
Databricks_info
by New Contributor II
  • 611 Views
  • 4 replies
  • 0 kudos

Concurrent Update to Delta - Throws error

Team, I get a ConcurrentAppendException ("Files were added to the root of the table by a concurrent update") when trying to update a table that executes via jobs with a ForEach activity in ADF. I tried with Databricks Runtime 14.x and set the delete vect...

Latest Reply
artsheiko
Valued Contributor III
  • 0 kudos

Hey, this issue happens whenever two or more jobs try to write to the same partition of a table. This exception is often thrown during concurrent DELETE, UPDATE, or MERGE operations. While the concurrent operations may be physically updating differe...
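A common mitigation while making the jobs write to disjoint partitions is a retry-with-backoff wrapper around the write. A minimal sketch in plain Python; `ConflictError` is a placeholder stand-in for Delta's `ConcurrentAppendException`, and `operation` stands for your own update call:

```python
import random
import time

class ConflictError(Exception):
    """Placeholder stand-in for delta.exceptions.ConcurrentAppendException."""

def retry_on_conflict(operation, max_retries=5, base_delay=1.0):
    """Retry `operation` when it raises ConflictError, backing off between tries."""
    for attempt in range(max_retries):
        try:
            return operation()
        except ConflictError:
            if attempt == max_retries - 1:
                raise  # exhausted retries; surface the conflict
            # Exponential backoff with jitter so concurrent writers desynchronize.
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))
```

Retrying only papers over the symptom; the underlying fix is still to scope each job's MERGE/UPDATE predicate to its own partition.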

3 More Replies
RahulChaubey
by New Contributor III
  • 317 Views
  • 1 reply
  • 1 kudos

Resolved! Can we get SQL Serverless warehouses monitoring data using APIs or logs ?

I am looking for a possible way to get the autoscaling history data for SQL Serverless warehouses using APIs or logs. I want something like what we see in the monitoring UI.

[Attachment: Screenshot 2024-04-15 at 6.52.29 PM.png]
Latest Reply
artsheiko
Valued Contributor III
  • 1 kudos

Hi Rahul, you need to perform two actions: (1) enable the system tables schema named "compute" (see the how-to page; it's quite possible you'll find other schemas useful too); (2) explore the system.compute.warehouse_events table. Hope this helps. B...
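For reference, a sketch of the kind of query you might run against that system table once the schema is enabled. Column names follow the documentation at the time of writing and should be verified in your workspace; the warehouse id is a placeholder:

```python
# Sketch of a scaling-history query against system.compute.warehouse_events.
WAREHOUSE_EVENTS_QUERY = """
SELECT event_time,
       warehouse_id,
       event_type,      -- e.g. SCALED_UP / SCALED_DOWN
       cluster_count    -- cluster count after the event
FROM system.compute.warehouse_events
WHERE warehouse_id = '<your-warehouse-id>'
ORDER BY event_time DESC
"""

# In a notebook or SQL editor you would then run something like:
# df = spark.sql(WAREHOUSE_EVENTS_QUERY)
```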

kankotan
by New Contributor II
  • 800 Views
  • 1 reply
  • 3 kudos

Regional Group Request for Istanbul

Hello, I kindly request the formation of a regional group for Istanbul/Turkey. I would appreciate your assistance in this matter. Thank you, Can

Tags: Istanbul, Request, Turkey
Latest Reply
Sujitha
Community Manager
  • 3 kudos

@kankotan Happy to help set it up for you. I have dropped an email for more information! 

NhanNguyen
by Contributor II
  • 376 Views
  • 2 replies
  • 0 kudos

Method public void org.apache.spark.sql.internal.CatalogImpl.clearCache() is not whitelisted on class

I'm executing a notebook and it failed with this error. Sometimes, when I execute certain functions in Spark, they also fail with the error "this class is not whitelisted". Could anyone help me check on this? Thanks for your help!

[Attachment: Screenshot 2024-04-03 at 21.49.46.png]
Latest Reply
NhanNguyen
Contributor II
  • 0 kudos

Thanks for your feedback. My cluster was actually a shared cluster; after I changed to a single-user cluster, I was able to run that method.

1 More Replies
jx1226
by New Contributor II
  • 2001 Views
  • 2 replies
  • 0 kudos

How to access storage with private endpoint

We know that Databricks with VNet injection (our own VNet) allows us to connect to Blob Storage/ADLS Gen2 over private endpoints and peering. This is what we typically do. We have a client who created Databricks with EnableNoPublicIP=No (secure clust...

Latest Reply
IvoDB
New Contributor II
  • 0 kudos

Hey @jx1226, were you able to solve this at the customer? I am currently struggling with the same issues here.

1 More Replies
nikhilmb
by New Contributor II
  • 577 Views
  • 4 replies
  • 0 kudos

Error ingesting zip files: ExecutorLostFailure Reason: Command exited with code 50

Hi, we are trying to ingest zip files into Azure Databricks Delta Lake using the COPY INTO command. There are 100+ zip files with an average size of ~300 MB each. Cluster configuration: 1 driver (56 GB, 16 cores), 2-8 workers (32 GB, 8 cores each). Autoscaling enab...
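Since COPY INTO reads data files directly, zip archives generally need to be expanded before ingestion. A minimal stdlib sketch of the unzip step (in practice you would write the extracted files back to cloud storage or a volume and point COPY INTO at the extracted CSV/JSON/Parquet):

```python
import io
import zipfile

def extract_zip_bytes(zip_bytes):
    """Return {member_name: content_bytes} for every file in a zip archive."""
    out = {}
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for name in zf.namelist():
            out[name] = zf.read(name)
    return out
```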

Latest Reply
nikhilmb
New Contributor II
  • 0 kudos

Although we were able to copy the zip files onto the DB volume, we were not able to share them with any system outside of the Databricks environment. It seems Delta Sharing does not support sharing files that are on UC volumes.

3 More Replies
MohsenJ
by New Contributor III
  • 552 Views
  • 1 reply
  • 0 kudos

The inference table doesn't get updated

I set up a model serving endpoint and created a monitoring dashboard to monitor its performance. The problem is my inference table doesn't get updated by the model serving endpoint. To test the endpoint I use the following code: import random import time ...
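The post's test code is truncated, so as a hedged sketch of the kind of request body a scoring loop would send (feature names are placeholders; the real call also needs the endpoint URL, a PAT token, and an HTTP client), here is just the payload-building step:

```python
import json
import random

def build_scoring_payload(n_records, feature_names):
    """Assemble a dataframe_records-style scoring payload with random test features."""
    records = [
        {name: random.random() for name in feature_names}
        for _ in range(n_records)
    ]
    return json.dumps({"dataframe_records": records})
```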

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @MohsenJ,  The log shows several reconfiguration errors related to the logger configuration. These errors are likely due to missing or incorrect configuration settings. Here are some steps to troubleshoot: Check Log Configuration: Verify that the...

Chrispy
by New Contributor
  • 726 Views
  • 1 reply
  • 0 kudos

Spark Handling White Space as NULL

I have a very strange thing happening. I'm importing a CSV file, and nulls and blanks are being interpreted correctly. What is strange is that a column that regularly has a single-space character value is having the single space converted to null. I'...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Chrispy,  Handling Empty Cells as NULLs: When importing data from a CSV file, you want to treat empty cells as NULL values. This is a common requirement, especially when dealing with databases. Let’s explore a couple of approaches to achieve ...
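One plausible cause (an assumption, not confirmed in the thread): the reader trims whitespace and then maps empty strings to null, so a single-space value becomes NULL. A plain-Python sketch of that two-step pipeline; in Spark the corresponding knobs would be CSV options such as ignoreLeadingWhiteSpace, ignoreTrailingWhiteSpace, and nullValue:

```python
def parse_cell(raw, trim_whitespace=True, null_value=""):
    """Mimic a CSV reader that optionally trims, then maps a sentinel to None."""
    value = raw.strip() if trim_whitespace else raw
    return None if value == null_value else value
```

With trimming on, a single-space cell collapses to the empty string and is then nulled; disabling the trim step preserves it.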

Nandhini_Kumar
by New Contributor III
  • 527 Views
  • 1 reply
  • 0 kudos

How is the scale-up process done in a Databricks cluster?

For my AWS Databricks cluster, I configured shared compute with 1 min worker node and 3 max worker nodes. Initially, only one worker node and one driver node instance are created in the AWS console. Is there any rule set by Databricks for scaling up the next ...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Nandhini_Kumar, Cluster Configuration: when you create a Databricks cluster, you have several options for compute configuration. These choices impact performance, cost, and scalability. Two primary types of compute are available: All-purp...

Azsdc
by New Contributor
  • 334 Views
  • 1 reply
  • 0 kudos

Usage of if else condition for data check

Hi, in a particular Workflows job, I am trying to add some data checks between tasks by using an if/else statement. I used the following statement in a notebook to pass a parameter into the if/else condition to check the logic: {"job_id": XXXXX,"notebook_params": ...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Azsdc, In Databricks Jobs, you can use conditional logic to control task execution. Let’s break down how you can achieve this: Using Parameters in If/Else Conditions: To define a parameter for use in an If/Else condition within a job, follow...
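For reference, the request body the question quotes has the shape of a Jobs "run now" call with notebook parameters. A sketch that just assembles that JSON (the job id and parameter names are placeholders, and the value you branch on in the If/Else task would come from these notebook_params):

```python
import json

def build_run_now_body(job_id, notebook_params):
    """Assemble a run-now style request body with notebook parameters (sketch only)."""
    return json.dumps({"job_id": job_id, "notebook_params": notebook_params})

# e.g. build_run_now_body(12345, {"data_check": "true"})
```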

sharma_kamal
by New Contributor III
  • 1663 Views
  • 3 replies
  • 4 kudos

Resolved! Unable to read available file after downloading

After downloading a file using `wget`, I'm attempting to read it with spark.read.json. I am getting the error: PATH_NOT_FOUND - Path does not exist: dbfs:/tmp/data.json. SQLSTATE: 42K03 (File <command-3327713714752929>, line 2). I have checked that the file does exist...
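A common cause of this error (hedged; the accepted code from the thread isn't shown here): `wget` saves to the driver's local filesystem, while `spark.read` resolves bare paths against DBFS, so the local file must be addressed with the `file:` scheme. A tiny helper sketch:

```python
def as_local_uri(path):
    """Prefix a driver-local path with the file: scheme so Spark reads the
    local filesystem instead of DBFS. Leaves already-schemed paths alone."""
    if path.startswith(("file:", "dbfs:")):
        return path
    return "file:" + path

# e.g. spark.read.json(as_local_uri("/tmp/data.json"))
```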

Tags: dbfs, json, ls, wget
Latest Reply
Ayushi_Suthar
Honored Contributor
  • 4 kudos

Hi @sharma_kamal, good day! Could you please try the code suggested by @ThomazRossito? It will help you. Also, please refer to the following document on working with files on Databricks: https://docs.databricks.com/en/files/index.html Please l...

2 More Replies
arkiboys
by Contributor
  • 1177 Views
  • 3 replies
  • 1 kudos

Resolved! reading mount points

Hello, previously I was able to run the following command in Databricks to see a list of the mount points, but it seems the system does not accept this anymore, as I get the following error. Any thoughts on how to get a list of the mount points? Thank you. d...

Latest Reply
Kaniz
Community Manager
  • 1 kudos

Hi @arkiboys, to retrieve a list of mount points in Azure Databricks, you can use the following methods. Using Databricks Utilities (dbutils): in a Python notebook, execute the command dbutils.fs.mounts(). This will display all the mount points w...
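The dbutils step only runs inside a notebook, but its result is just a list of mount entries. A small helper sketch that formats objects shaped like the ones dbutils.fs.mounts() returns (the attribute names mountPoint/source are assumed from the docs; MountInfo here is a stand-in):

```python
from collections import namedtuple

# Stand-in for the MountInfo objects returned by dbutils.fs.mounts().
MountInfo = namedtuple("MountInfo", ["mountPoint", "source"])

def format_mounts(mounts):
    """Render mount entries as 'mountPoint -> source' lines."""
    return [f"{m.mountPoint} -> {m.source}" for m in mounts]

# In a notebook: print("\n".join(format_mounts(dbutils.fs.mounts())))
```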

2 More Replies
Avvar2022
by New Contributor III
  • 1235 Views
  • 4 replies
  • 1 kudos

Unity Catalog enabled workspace - Is there any way to disable workflow/job creation for certain users?

Currently, in a Unity Catalog enabled workspace, users with "Workspace access" can create workflows/jobs; there is no access control available to restrict users from creating jobs/workflows. Use case: in production there is no need for users, data enginee...

Latest Reply
c3
New Contributor II
  • 1 kudos

We have the "allow unrestricted cluster creation" box deselected for all groups and have users creating jobs in production so we are looking for a way to disable this.  I cannot believe this isn't an option. Did anyone find a solution for this?

3 More Replies