- 838 Views
- 3 replies
- 1 kudos
In the middle of the exam I got suspended, reportedly due to my eye movement. I had the test on the left part of my monitor and the PDF (which was provided as a testing aid for this exam) on the right side. I was just moving my eyes left and right as I was using the PD...
- 5456 Views
- 11 replies
- 0 kudos
Hi all, I am facing an issue of data getting missed. I am reading the data from Azure Event Hub and, after flattening the JSON data, I am storing it in a Parquet file and then using another Databricks notebook to perform the merge operations on my Delta ...
Latest Reply
- In Event Hub, you can preview the stream using Azure Analytics, so please first check whether all records are there.
- In Databricks, save the data directly to the bronze Delta table without performing any aggregation, just 1 to 1, and...
10 More Replies
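Following the reply's advice to land events 1:1 in bronze first, the flattening can then happen downstream. A minimal sketch of the flattening step itself, in plain Python with hypothetical field names (in a real pipeline this logic would run over the bronze table, e.g. inside a PySpark transformation):

```python
import json

def flatten(record, parent_key="", sep="_"):
    """Recursively flatten a nested dict into a single level,
    joining nested keys with `sep`."""
    items = {}
    for key, value in record.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            items.update(flatten(value, new_key, sep))
        else:
            items[new_key] = value
    return items

# Synthetic Event Hub-style payload (hypothetical shape).
event = json.loads('{"body": {"id": 1, "meta": {"source": "hub"}}}')
flat = flatten(event)  # {"body_id": 1, "body_meta_source": "hub"}
```

Keeping the raw message in bronze means a bug in `flatten` never loses data — you can always re-derive the silver table from bronze.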
- 832 Views
- 1 replies
- 1 kudos
Hi everyone, I'm currently in the process of migrating to Unity Catalog. I have several Azure Databricks Workspaces, one for each phase of the development lifecycle (development, test, acceptance, and production). In accordance with the best practices (ht...
Latest Reply
You could also store the environment name in a config file, e.g. in the Databricks filestore. These config files can also be managed by CI/CD; to be honest, that has been my preferred way of working lately.
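A minimal sketch of that approach: a per-workspace JSON config file (the path and key names here are hypothetical; on Databricks the file could live under `/dbfs/FileStore/` and be deployed by CI/CD):

```python
import json
from pathlib import Path

def load_environment(config_path):
    """Read the environment name from a small JSON config file,
    e.g. {"environment": "test"} deployed per workspace by CI/CD."""
    cfg = json.loads(Path(config_path).read_text())
    return cfg["environment"]
```

A notebook can then branch on `load_environment(...)` to pick catalogs or paths without hardcoding the workspace it runs in.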
- 3169 Views
- 5 replies
- 4 kudos
I am new to real-time scenarios and I need to create Spark Structured Streaming jobs in Databricks. I am trying to apply rule-based validations from backend configurations to each incoming JSON message. I need to do the following actions on th...
Latest Reply
Were you able to find a solution? If yes, could you please share it?
4 More Replies
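For anyone landing on this thread, a minimal sketch of rule-based validation on a single JSON message. The rule format and field names are assumptions; in a streaming job this function could be applied per message (e.g. inside `foreachBatch` or a UDF):

```python
# Hypothetical backend-configured rules: each names a field, a check, and a parameter.
RULES = [
    {"field": "temperature", "check": "max", "value": 100},
    {"field": "device_id", "check": "required", "value": None},
]

def validate(message, rules):
    """Return a list of rule violations for one parsed JSON message."""
    errors = []
    for rule in rules:
        field, check, value = rule["field"], rule["check"], rule["value"]
        if check == "required" and field not in message:
            errors.append(f"missing field: {field}")
        elif check == "max" and message.get(field, 0) > value:
            errors.append(f"{field} above {value}")
    return errors
```

Messages with a non-empty error list can then be routed to a quarantine table while clean ones continue downstream.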
- 4871 Views
- 9 replies
- 8 kudos
I wanted to ask this question yesterday in the Q&A session with Mohan Mathews, but didn't get around to it (@Kaniz Fatma​ do you know his handle here so I can tag him?). We (and most development teams) have two environments: UAT/DEV and PROD. For those that d...
Latest Reply
Hi @Oliver Angelil​, hope all is well! Just wanted to check in: were you able to resolve your issue, and if so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Tha...
8 More Replies
- 629 Views
- 2 replies
- 2 kudos
Hello Community members, I am looking for options to redirect exceptions raised within a Databricks notebook's exception block to ServiceNow. Is there a way the connection can be made directly from the notebook? Looking for suggestions. ...
Latest Reply
Thank you for the solution, I will definitely try this and share it with the community if it works.
1 More Replies
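For reference, ServiceNow exposes a Table API over REST, so a notebook's `except` block can post an incident directly. A sketch that only builds the request (instance name, credentials, and field choices are placeholders; authentication is omitted):

```python
import json
from urllib import request

def build_incident_request(instance, short_description, description):
    """Build a POST request against ServiceNow's Table API to create
    an incident record. Auth headers would be added before sending."""
    url = f"https://{instance}.service-now.com/api/now/table/incident"
    payload = {
        "short_description": short_description,
        "description": description,
    }
    return request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

In the notebook, this would be called inside the `except` block with the caught exception's message as the description, then sent with `urllib.request.urlopen` (or `requests`) using basic auth or an OAuth token.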
- 1258 Views
- 4 replies
- 4 kudos
When defining the databricks_job resource in Terraform, we are trying to enable the job queueing flag for the job. However, in the Terraform provider docs, we are not able to find any config related to queueing. Is there a different method to configure...
Latest Reply
Hi @adivandhya, it’s a private preview feature - you need to work with your account SA for that.
3 More Replies
- 4991 Views
- 3 replies
- 2 kudos
Hi community, I get an AnalysisException when executing the following code in a notebook using a personal compute cluster. It seems to be a permission issue, but I am logged in with my admin account. Any help would be appreciated. USE CATALOG catalog;
...
Latest Reply
I was having the same issue because I was trying to set the location with the absolute path, just like you did. I solved it by creating an external location, then copying its URL and using that as the path in the LOCATION option.
2 More Replies
- 1903 Views
- 3 replies
- 3 kudos
Dear Databricks Community, I am performing three consecutive 'append' writes to a Delta table, where the first append creates the table. Each append consists of two rows, which are ordered by column 'id' (see example in the attached screenshot). Whe...
Latest Reply
Thanks a lot @Lakshay and @Tharun-Kumar for your valued contributions!
2 More Replies
- 1019 Views
- 2 replies
- 1 kudos
The Databricks on AWS docs claim that 30G + 150G EBS drives are mounted to every node by default. But if I use an instance type like r5d.2xlarge, it already has a local disk, so I want to avoid mounting the 150G EBS drive to it. Is there a way to do it? We ...
Latest Reply
Hi @ivanychev, based on the provided information, if you want to avoid mounting the 150G EBS drive to a node with a local disk, you can set ebs_volume_count to 0 in the Clusters API when creating the cluster. Another option could be manually det...
1 More Replies
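A sketch of what that looks like in a Clusters API create payload (the cluster name is a placeholder; `aws_attributes.ebs_volume_count` is the field the reply refers to):

```python
# Cluster spec that skips the extra EBS volume on instance types that
# already have local NVMe storage (e.g. r5d.2xlarge).
cluster_spec = {
    "cluster_name": "no-extra-ebs",       # hypothetical name
    "node_type_id": "r5d.2xlarge",
    "aws_attributes": {
        "ebs_volume_count": 0,            # do not attach additional EBS volumes
    },
}
```

This dict would be sent as the JSON body of a `POST /api/2.0/clusters/create` call (or the equivalent in the Databricks SDK/Terraform provider).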
by bearys • New Contributor II
- 1446 Views
- 2 replies
- 2 kudos
I have a large Delta table partitioned by an identifier column that I have now discovered has blank spaces in some of the identifiers, e.g. one partition can be defined by "Identifier=first identifier". Most partitions do not have these blank space...
Latest Reply
Hi @bearys, The error message suggests an illegal character in the path at a specific index.
The error is pointing to a blank space in the path "dbfs:/mnt/container/table_name/Identifier=first identifier/part-01347-8a9a157b-6d0d-75dd-b1b7-2aed12e057...
1 More Replies
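The "illegal character" points at the unencoded space in the partition directory. A quick way to see the URI-safe form of such a path segment, using only the standard library:

```python
from urllib.parse import quote

# Partition directory with a space in the identifier value, as in the
# thread's example. Quoting encodes the space as %20 while keeping the
# '=' and '/' characters that are structural in partition paths.
partition_dir = "Identifier=first identifier"
safe = quote(partition_dir, safe="=/")  # "Identifier=first%20identifier"
```

Whether the reading layer expects the raw or the percent-encoded form depends on the API being used, so checking both spellings against the storage listing is a reasonable first diagnostic.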
- 782 Views
- 2 replies
- 3 kudos
Hello Team, we frequently have Databricks job failures with the following message; any help would be appreciated: Job aborted due to stage failure. Relative path in absolute URI
Latest Reply
@DB_PROD_Molina One of the reasons this error shows up is a file path or name containing special characters. If that is the case, could you rename your file to remove the special characters?
1 More Replies
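A small helper along the lines of the reply's suggestion, shown as a sketch (which characters actually break the URI parser depends on the storage layer, so the character class here is an assumption):

```python
import re

def sanitize_filename(name):
    """Replace characters that commonly break URI parsing (spaces,
    colons, etc.) with underscores, keeping word characters, dots,
    and hyphens."""
    return re.sub(r"[^\w.\-]", "_", name)
```

Applying this at write time (before files land in the lake) avoids having to rename objects in place later.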
- 663 Views
- 0 replies
- 0 kudos
If you have AWS CloudWatch subscribed to write logs out to AWS Kinesis, the Kinesis records are base64 encoded and the CloudWatch logs inside them are GZIP compressed. The challenge we faced was how to address that in PySpark to be able to read the data. We were ...
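The two-step decode the post describes can be sketched in plain Python; in PySpark the same logic would typically sit inside a UDF applied to the Kinesis record's data column. The payload shape below is a synthetic CloudWatch-style example, not the exact schema:

```python
import base64
import gzip
import json

def decode_cloudwatch_record(record_b64):
    """Base64-decode a Kinesis record, gunzip it, and parse the JSON
    CloudWatch Logs payload inside."""
    raw = base64.b64decode(record_b64)
    return json.loads(gzip.decompress(raw))

# Round-trip demonstration with a synthetic payload.
payload = {"logGroup": "demo", "logEvents": [{"message": "hello"}]}
encoded = base64.b64encode(gzip.compress(json.dumps(payload).encode())).decode()
decoded = decode_cloudwatch_record(encoded)
```

Note that some Kinesis readers hand you raw bytes rather than base64 text, in which case only the gunzip + JSON-parse steps apply.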
by DaniW • New Contributor III
- 2435 Views
- 3 replies
- 3 kudos
Hello, if I run this code: %sql CREATE OR REPLACE VIEW esprosilver.xxx.encuestas_talleres AS SELECT * FROM CSV.`abfss://landing@esproanalyticscenterdl.dfs.core.windows.net/oracle-dwh/encuestas_talleres/encuestas_talleres.csv` It creates the view in unit...
Latest Reply by DaniW (New Contributor III)
I forgot to mention that the CSV delimiter is ';'.
2 More Replies
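Since the reply flags the ';' delimiter, it is worth checking how the file actually parses; a quick local sanity check in plain Python (the sample row is synthetic). In Databricks SQL, the equivalent would be passing the delimiter option to a reader such as `read_files`, since the bare ``CSV.`path` `` form assumes comma-separated defaults:

```python
import csv
import io

# Synthetic sample mimicking a semicolon-delimited file.
sample = "id;nombre;puntuacion\n1;taller_a;5\n"
rows = list(csv.reader(io.StringIO(sample), delimiter=";"))
# rows[0] is the header; rows[1:] are data rows split on ';'.
```

If the same sample were read with the default comma delimiter, each line would come back as a single unsplit column, which matches the symptom of a view built without the delimiter option.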
- 6138 Views
- 5 replies
- 3 kudos
Connecting to Databricks using OpenJDK 17, I got the exception below. Are there any plans to fix the driver for OpenJDK 17? java.sql.SQLException: [Databricks][DatabricksJDBCDriver](500540) Error caught in BackgroundFetcher. Foreground thread ID: 44. Ba...
Latest Reply
I still see the above error with Databricks JDBC driver 2.6.33. Is anyone aware of a fix, either in the driver or in Java?
4 More Replies