- 759 Views
- 1 replies
- 0 kudos
Hi All, I am getting this error for some jobs. Can you please let me know what the reason could be? Run result unavailable: job failed with error message Unexpected failure while waiting for the cl...
Latest Reply
This is an issue at the cloud level, so add retries to the job; it does not happen on every cluster start - it may fail once but start after a retry. Also, raise a Databricks support ticket; they can provide a permanent solution.
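The retry suggestion above can be configured directly on the job itself. A minimal sketch, assuming the Jobs 2.1 API task settings; the task and notebook names are placeholders, not from the original post:

```python
# Sketch of automatic-retry settings for a Databricks job task (Jobs 2.1 API
# payload). Names are hypothetical; adjust counts/intervals to your workload.
task_settings = {
    "task_key": "nightly_etl",                       # hypothetical task name
    "notebook_task": {"notebook_path": "/Jobs/nightly_etl"},
    "max_retries": 3,                                # retry up to 3 times on failure
    "min_retry_interval_millis": 60000,              # wait 1 minute between attempts
    "retry_on_timeout": True,                        # also retry if the run times out
}
```

With settings like these, a transient cluster-start failure is absorbed by the retries instead of failing the whole job run.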
- 959 Views
- 1 replies
- 0 kudos
What is the difference between ADLS mounted on Databricks and DBFS? Does mounting ADLS on Databricks give any performance benefit? Does the mounted ADLS still behave as object storage, or does it become simple storage?
Latest Reply
DBFS is just an abstraction on cloud storage. By default, when you create a workspace, you get an instance of DBFS - the so-called DBFS Root. In addition, you can mount extra storage accounts under the /mnt folder. Data written to mount point paths (/mnt) is...
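The mounting mentioned above can be sketched as follows, assuming an ADLS Gen2 account accessed with a service principal; the account, container, and credential placeholders are hypothetical. dbutils only exists inside a Databricks notebook, so the mount call is shown commented:

```python
# Minimal sketch of mounting an ADLS Gen2 container under /mnt.
# Storage account / container names are hypothetical placeholders.
storage_account = "mystorageacct"
container = "data"

configs = {
    "fs.azure.account.auth.type": "OAuth",
    "fs.azure.account.oauth.provider.type":
        "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
    "fs.azure.account.oauth2.client.id": "<application-id>",
    "fs.azure.account.oauth2.client.secret": "<secret-from-scope>",
    "fs.azure.account.oauth2.client.endpoint":
        "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
}

source = f"abfss://{container}@{storage_account}.dfs.core.windows.net/"

# Inside a Databricks notebook:
# dbutils.fs.mount(source=source,
#                  mount_point=f"/mnt/{container}",
#                  extra_configs=configs)
```

Once mounted, the path /mnt/data behaves like any DBFS path, but the bytes live in (and keep the semantics of) the underlying object store.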
- 7050 Views
- 4 replies
- 0 kudos
How do I pass arguments and variables to a Databricks Python activity from Azure Data Factory?
Latest Reply
Try importing argv from sys. Then, if you have added the parameter correctly in Data Factory, you can read it in your Python script as argv[1] (index 0 is the file path).
3 More Replies
- 11863 Views
- 5 replies
- 0 kudos
I'm working with a Databricks notebook backed by a Spark cluster and am having trouble connecting to Azure Blob Storage. I used this link and tried the section "Access Azure Blob Storage Directly - Set up an account access key". I get no errors here: s...
Latest Reply
I have been facing the same problem over and over. I am now trying to follow what's written here (https://docs.databricks.com/data/data-sources/azure/azure-storage.html#access-azure-blob-storage-directly), but I always get "shaded.databricks.org.apache...
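For reference, the account-key approach from the linked docs looks roughly like this; the storage account and container names are placeholders, and in a real notebook the key should come from a secret scope rather than a literal. spark is only defined inside Databricks, so those calls are shown commented:

```python
# Sketch of direct Blob Storage access via an account key.
# "mystorageacct" / "mycontainer" are hypothetical placeholders.
storage_account = "mystorageacct"
conf_key = f"fs.azure.account.key.{storage_account}.blob.core.windows.net"

# Inside a Databricks notebook:
# spark.conf.set(conf_key,
#                dbutils.secrets.get(scope="my-scope", key="storage-key"))
# df = spark.read.parquet(
#     f"wasbs://mycontainer@{storage_account}.blob.core.windows.net/path")
```

A "shaded.databricks.org.apache..." stack trace on read usually means the request reached the storage driver but authentication or the container path is wrong, so double-checking the conf key spelling and the key value is a reasonable first step.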
4 More Replies
- 20959 Views
- 6 replies
- 4 kudos
I am trying to retrieve a secret from Azure Key Vault as follows:
sqlPassword = dbutils.secrets.get(scope = "Admin", key = "SqlPassword")
The scope has been created correctly, but I receive the following error message:
com.databricks.common.clie...
Latest Reply
Sometimes turning it off and on again is underrated, so I gave up finding the problem, deleted the scope and re-created it - it worked a breeze! Mine seems to have been something silly; I was able to set up my vault but got the same issue when trying to ...
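Re-creating the scope can also be done programmatically. A hedged sketch using the Secrets REST API endpoint for Key Vault-backed scopes; the subscription, vault, and workspace values are placeholders, and the HTTP call needs a personal access token so it is shown commented:

```python
import json

# Sketch: re-create an Azure Key Vault-backed secret scope via the
# Secrets REST API (POST /api/2.0/secrets/scopes/create).
payload = {
    "scope": "Admin",
    "scope_backend_type": "AZURE_KEYVAULT",
    "backend_azure_keyvault": {
        "resource_id": "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
                       "Microsoft.KeyVault/vaults/<vault-name>",
        "dns_name": "https://<vault-name>.vault.azure.net/",
    },
}
body = json.dumps(payload)

# import urllib.request
# req = urllib.request.Request(
#     "https://<workspace-url>/api/2.0/secrets/scopes/create",
#     data=body.encode(),
#     headers={"Authorization": "Bearer <token>",
#              "Content-Type": "application/json"})
# urllib.request.urlopen(req)
```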
5 More Replies
- 9058 Views
- 3 replies
- 1 kudos
I have no idea where to start.
Latest Reply
Thanks for your help @leedabee. I will go through the second option; the first one is not applicable in my case.
2 More Replies
- 5941 Views
- 5 replies
- 0 kudos
I have connected my S3 bucket from Databricks using the following command:
import urllib
import urllib.parse
ACCESS_KEY = "Test"
SECRET_KEY = "Test"
ENCODED_SECRET_KEY = urllib.parse.quote(SECRET_KEY, "")
AWS_BUCKET_NAME = "Test"
MOUNT_NAME = "...
Latest Reply
Hi @akj2784, please go through the Databricks documentation on working with files in S3: https://docs.databricks.com/spark/latest/data-sources/aws/amazon-s3.html#mount-s3-buckets-with-dbfs
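The mount pattern from those docs, continuing the snippet in the question, can be sketched like this; the keys, bucket, and mount name are placeholders, and dbutils exists only inside a Databricks notebook, so the mount call is commented:

```python
import urllib.parse

# Sketch of mounting an S3 bucket with DBFS. All values are placeholders.
ACCESS_KEY = "<access-key>"
SECRET_KEY = "<secret/key>"   # may contain "/", so it must be URL-encoded
ENCODED_SECRET_KEY = urllib.parse.quote(SECRET_KEY, safe="")
AWS_BUCKET_NAME = "<bucket-name>"
MOUNT_NAME = "mybucket"

source = f"s3a://{ACCESS_KEY}:{ENCODED_SECRET_KEY}@{AWS_BUCKET_NAME}"

# Inside a Databricks notebook:
# dbutils.fs.mount(source, f"/mnt/{MOUNT_NAME}")
# display(dbutils.fs.ls(f"/mnt/{MOUNT_NAME}"))
```

The URL-encoding step is the part most often missed: an unencoded "/" in the secret key breaks the s3a URL.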
4 More Replies
by Yogi • New Contributor III
- 6942 Views
- 15 replies
- 0 kudos
Hi,
Can anyone help me with Databricks and Azure Functions?
I'm trying to pass Databricks JSON output to an Azure Function body in an ADF job. Is that possible?
If yes, how?
If not, what is an alternative way to do the same?
Latest Reply
You can now pass values back to ADF from a notebook. @Yogi There is a size limit, though, so if you are passing a dataset larger than 2 MB, rather write it to storage and consume it directly with Azure Functions. You can pass the file path/refe...
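Passing a value back to ADF works by exiting the notebook with a string payload, which ADF surfaces in the activity output. A small sketch; the result fields are hypothetical, and dbutils exists only inside Databricks, so the call is commented:

```python
import json

# Sketch: return a small JSON payload from a notebook to ADF.
# Keep it under the ~2 MB limit mentioned above; for anything larger,
# write to storage and return only the path.
result = {"status": "ok", "rows_written": 1234, "output_path": "/mnt/out/run1"}
payload = json.dumps(result)

# Inside a Databricks notebook:
# dbutils.notebook.exit(payload)
# In ADF, read it back with: @activity('Notebook1').output.runOutput
```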
14 More Replies
- 8655 Views
- 12 replies
- 0 kudos
Hi, I have files hosted on an Azure Data Lake Store which I can connect to from Azure Databricks, configured as per the instructions here. I can read JSON files fine; however, I get the following error when I try to read an Avro file. spark.read.format("c...
Latest Reply
Taras's answer is correct. Because spark-avro is based on the RDD APIs, the properties must be set in the hadoopConfiguration options.
Please note these docs for configuration using the RDD API: https://docs.azuredatabricks.net/spark/latest/data-sou...
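Setting the properties on the Hadoop configuration, as the reply describes, can be sketched like this; the ADLS Gen1 OAuth property names follow the linked docs, the credential values are placeholders, and spark exists only inside Databricks, so those calls are commented:

```python
# Sketch: put ADLS Gen1 OAuth credentials on the Hadoop configuration so
# RDD-based readers like spark-avro can see them (spark.conf.set alone
# only covers the DataFrame API). All credential values are placeholders.
adls_conf = {
    "dfs.adls.oauth2.access.token.provider.type": "ClientCredential",
    "dfs.adls.oauth2.client.id": "<application-id>",
    "dfs.adls.oauth2.credential": "<secret-from-scope>",
    "dfs.adls.oauth2.refresh.url":
        "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
}

# Inside a Databricks notebook:
# hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration()
# for key, value in adls_conf.items():
#     hadoop_conf.set(key, value)
```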
11 More Replies
- 10718 Views
- 2 replies
- 0 kudos
I am manipulating some data using Azure Databricks. The data is in an Azure Data Lake Storage Gen1 account. I mounted the data into DBFS, but now, after transforming it, I would like to write it back to my data lake.
To mount the dat...
Latest Reply
I am new to Azure Databricks and am trying to write a DataFrame to a mounted ADLS location. But in the below command
dfGPS.write.mode("overwrite").option("header", "true").csv("/mnt/<mount-name>")
1 More Replies