- 7599 Views
- 4 replies
- 1 kudos
Since I want to run a git_source as a notebook_task inside a Databricks Job, I read that it's possible to forward a set of parameters to the notebook_task (and, by extension, to the git_source) via the `base_parameters` field of the REST API. But, on my gi...
Latest Reply
The way I was able to fix this was by installing `databricks-connect` as a pip library in my local dev environment. It emulates the whole Databricks `dbutils` package, even if it doesn't actually work locally. But since I just needed to develop to have the func...
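For reference, a minimal sketch of how a notebook_task reads a value sent through `base_parameters` (the parameter name `env` is just an illustration; each key in base_parameters arrives as a widget of the same name):

# Inside the notebook the job runs: each base_parameters key becomes a widget.
dbutils.widgets.text("env", "dev")   # the default is used in interactive runs
env = dbutils.widgets.get("env")     # in a job run this returns the job-supplied value
print("Running against environment:", env)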
3 More Replies
- 444 Views
- 0 replies
- 0 kudos
Hi, I would like to log the notebook id programmatically in R. Is there any R command I can leverage to grab the notebook id? I tried the command below in Python and grabbed it without any issues, and I'm looking for similar f...
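For what it's worth, the Python approach usually looks something like this sketch (hedged: whether it matches the poster's exact command is unknown, and field names in the context JSON can vary across runtimes):

import json

# Pull the notebook context as JSON and read the id from its tags map.
ctx = json.loads(
    dbutils.notebook.entry_point.getDbutils().notebook().getContext().toJson()
)
notebook_id = ctx["tags"].get("notebookId")
print(notebook_id)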
- 922 Views
- 2 replies
- 1 kudos
We would like to be able to get the run_id in a job run, and we have the unfortunate restriction that we cannot use dbutils. Is there a way to get it in Python? I know the Job ID can be retrieved from the environment variables.
Latest Reply
Hi, please refer to the following thread: https://community.databricks.com/s/question/0D58Y00008pbkj9SAA/how-to-get-the-job-id-and-run-id-and-save-into-a-database
Hope this helps
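One dbutils-free route is the documented task parameter variables: wire `{{run_id}}` into the task's parameters and read it like any other argument. A minimal sketch for a Python script task (the `--run-id` flag name is just an illustration):

import argparse

# The job definition passes: "parameters": ["--run-id", "{{run_id}}"]
parser = argparse.ArgumentParser()
parser.add_argument("--run-id", dest="run_id")
args = parser.parse_args()
print("Current run_id:", args.run_id)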
1 More Replies
by
Sunny
• New Contributor III
- 4805 Views
- 8 replies
- 4 kudos
I need to retrieve the job id and run id of the job from a jar file in Scala. When I try to compile the code below in IntelliJ, the error below is shown.
import com.databricks.dbutils_v1.DBUtilsHolder.dbutils
object MainSNL {
  @throws(classOf[Exception])
  de...
Latest Reply
Maybe it's worth going through the Task Parameter variables section of the doc below: https://docs.databricks.com/data-engineering/jobs/jobs.html#task-parameter-variables
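As a hedged sketch of how that applies to a jar task: `{{job_id}}` and `{{run_id}}` are substituted at launch time, so they arrive in `main(args)` as plain strings. Host, token, cluster id, and jar path below are placeholders:

import requests

host = "https://<your-workspace>.cloud.databricks.com"  # placeholder
token = "<personal-access-token>"                       # placeholder

payload = {
    "name": "example-jar-job",
    "tasks": [{
        "task_key": "main",
        "existing_cluster_id": "<cluster-id>",              # placeholder
        "libraries": [{"jar": "dbfs:/jars/main-snl.jar"}],  # placeholder path
        "spark_jar_task": {
            "main_class_name": "MainSNL",
            # Substituted at run time with the concrete ids:
            "parameters": ["{{job_id}}", "{{run_id}}"],
        },
    }],
}
resp = requests.post(f"{host}/api/2.1/jobs/create",
                     headers={"Authorization": f"Bearer {token}"},
                     json=payload)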
7 More Replies
- 4171 Views
- 10 replies
- 10 kudos
I have a folder which contains multiple delta tables and some parquet tables. I want to move that folder to another path. When I use dbutils.fs.mv(), it takes an absurd amount of time.
Latest Reply
Hi @Anmol Deep, did you try to follow @Hubert Dudek's suggestion? Did it help you resolve your problem?
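If a single recursive mv stays slow, one workaround people try is moving the top-level entries in parallel from the driver. A minimal sketch (paths are placeholders; since mv on object storage is copy-then-delete, the gain depends on file counts):

from concurrent.futures import ThreadPoolExecutor

src = "dbfs:/mnt/source/my_folder"   # placeholder
dst = "dbfs:/mnt/target/my_folder"   # placeholder

def move_entry(info):
    # recurse=True lets each top-level entry be a whole table directory
    dbutils.fs.mv(info.path, dst + "/" + info.name, recurse=True)

with ThreadPoolExecutor(max_workers=16) as pool:
    list(pool.map(move_entry, dbutils.fs.ls(src)))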
9 More Replies
- 4782 Views
- 3 replies
- 3 kudos
Hi, I'm using db in SageMaker to connect EC2 to S3. Following other examples, I get 'AttributeError: module 'dbutils' has no attribute 'fs''... I guess I'm missing an import?
Latest Reply
Atanu
Esteemed Contributor
Agree with @Werner Stinckens. You may also try importing dbutils - @ben Hamilton
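If databricks-connect is installed, the documented way to get a dbutils handle outside a notebook is roughly this sketch:

from pyspark.sql import SparkSession
from pyspark.dbutils import DBUtils

# Outside a Databricks notebook there is no implicit dbutils object;
# construct one from the Spark session instead.
spark = SparkSession.builder.getOrCreate()
dbutils = DBUtils(spark)
print(dbutils.fs.ls("/"))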
2 More Replies
- 2257 Views
- 7 replies
- 1 kudos
Hello, I want to run some notebooks from notebook "A". Regardless of the contents of the notebook, it runs for a long time (20 seconds). It is a constant value and I do not know why it takes so long. I tried running a simple notebook with one input p...
Latest Reply
Okay, I am not able to share the same session between both notebooks (parent and child). So my solution is to use %run ./notebook_name. I put all the code into functions and now I can use them. Example:
# Child notebook
def do_something(param1, param2):
    ...
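The parent side is then just the sketch below (note that %run must be the only code in its cell; do_something comes from the child above, and the argument values are placeholders):

%run ./notebook_name

# In a following cell, the child's functions are available directly:
result = do_something("param1_value", "param2_value")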
6 More Replies
- 7523 Views
- 2 replies
- 4 kudos
I have a notebook with a parameter defined as dbutils.widgets.multiselect("my_param", "ALL", ["ALL", "A", "B", "C"]) and I would like to pass this parameter when calling the notebook via dbutils.notebook.run(). However, I tried passing it as a pyth...
Latest Reply
You are right, this actually works fine. I just realized I had two multiselect parameters in my tests, and only changing one of them still resulted in the same error message for the second one. I ended up writing a function that parses whatever comes in...
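For reference, a minimal sketch of the round trip (notebook name and values are placeholders): a multiselect value travels as a comma-separated string in both directions.

# Parent: pass the selection as one comma-joined string
dbutils.notebook.run("child_notebook", 60, {"my_param": ",".join(["A", "B"])})

# Child: dbutils.widgets.get on a multiselect also returns "A,B"
raw = dbutils.widgets.get("my_param")
values = [v for v in raw.split(",") if v]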
1 More Replies
- 2402 Views
- 3 replies
- 1 kudos
I use databricks-connect to connect PyCharm to a Databricks cluster remotely, but when I try to get dbutils.widget it throws an error.
cluster conf:
spark.databricks.service.server.enabled true
spark.databricks.hive.metastore.glueCatalog.enabled true
...
Latest Reply
This is normal behavior. databricks-connect does not support the whole dbutils class: https://docs.databricks.com/dev-tools/databricks-connect.html#access-dbutils
Widgets are not on the list.
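A common workaround is to guard widget lookups so the same notebook runs both in the workspace and under databricks-connect; a hedged sketch:

def get_param(name, default):
    # dbutils.widgets is not supported by databricks-connect, so fall
    # back to a local default when the lookup fails.
    try:
        return dbutils.widgets.get(name)
    except Exception:
        return default

my_param = get_param("my_param", "local-default")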
2 More Replies
by
Roy
• New Contributor II
- 39768 Views
- 5 replies
- 0 kudos
I am using Python notebooks as part of a concurrently running workflow with Databricks Runtime 6.1.
Within the notebooks I am using try/except blocks to return an error message to the main concurrent notebook if a section of code fails. However, I h...
Latest Reply
You can add a fake except around the notebook.exit inside the try block:
try:
    dbutils.notebook.run(somenotebook, 600)  # somenotebook: path of the child notebook
    try:
        dbutils.notebook.exit("done")
    except Exception as e:
        print("Notebook exited")
except Exception:
    print("Main exception")
4 More Replies
- 1257 Views
- 1 replies
- 0 kudos
Is there an easy way I can save the plots generated by the display() cmd?
Latest Reply
Plots generated via the display() command are automatically saved under /FileStore/plots. See the documentation for more info: https://docs.databricks.com/data/filestore.html#filestore. However, perhaps an easier approach to save/revisit plots is to u...
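If the plot is produced with matplotlib, you can also write it straight to DBFS yourself (a sketch; the path is a placeholder, and /dbfs is the local-file view of DBFS):

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [4, 2, 5])

# Files under /FileStore can be downloaded from the workspace UI.
fig.savefig("/dbfs/FileStore/plots/my_plot.png")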
- 12264 Views
- 8 replies
- 0 kudos
Is there a way to tell dbutils.fs.mount not to throw an error if the mount is already mounted?
And vice versa, for unmount not to throw an error if it is already unmounted?
I am trying to run my notebook as a job and it has an init section that...
Latest Reply
If you use Scala to mount a Gen2 data lake, you could try something like this:
// Gather relevant keys
var ServicePrincipalID = ""
var ServicePrincipalKey = ""
var DirectoryID = ""
// Create configurations for our connection
var configs = Map (...
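For the original idempotency question, a hedged Python sketch that checks dbutils.fs.mounts() before mounting or unmounting (source and configs as in the Scala example above):

def mount_if_needed(source, mount_point, extra_configs):
    # dbutils.fs.mount raises if the target is already mounted, so skip it.
    if any(m.mountPoint == mount_point for m in dbutils.fs.mounts()):
        return
    dbutils.fs.mount(source=source, mount_point=mount_point,
                     extra_configs=extra_configs)

def unmount_if_needed(mount_point):
    if any(m.mountPoint == mount_point for m in dbutils.fs.mounts()):
        dbutils.fs.unmount(mount_point)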
7 More Replies
- 4505 Views
- 1 replies
- 0 kudos
I have read access to an S3 bucket in an AWS account that is not mine. For more than a year I've had a job successfully reading from that bucket using dbutils.fs.mount(...) and sqlContext.read.json(...). Recently the job started failing with the exc...
Latest Reply
@andersource
Looks like the bucket is in us-east-1 but you've configured your Amazon S3 client with us-west-2. Can you try configuring the client to use us-east-1 instead?
I hope it will work for you. Thank you
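A hedged sketch of that fix: point the S3A connector at the bucket's home region. fs.s3a.endpoint is a standard hadoop-aws setting, but whether it takes effect at runtime depends on when the filesystem client was first initialized, and the bucket path below is a placeholder.

# Point the S3A client at us-east-1 before touching the bucket.
spark._jsc.hadoopConfiguration().set("fs.s3a.endpoint",
                                     "s3.us-east-1.amazonaws.com")
df = sqlContext.read.json("s3a://example-bucket/path/")  # placeholder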
- 5645 Views
- 1 replies
- 0 kudos
I am facing a file-not-found exception when I try to move a file with * in DBFS. Both the source and destination directories are in DBFS. I have the source file named "test_sample.csv" available in a dbfs directory and I am using the command li...
Latest Reply
@bkr, you can reference the file name using dbutils and then pass this to the move command. Here's an example for this in Scala:
val fileNm = dbutils.fs.ls("/usr/krishna/sample").map(_.name).filter(r => r.startsWith("test"))(0)
val fileLoc = "dbfs:/...
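A Python version of the same idea, since dbutils.fs.mv does not expand * wildcards itself (the destination path is a placeholder):

# List the directory, filter on the prefix, and move each match.
for f in dbutils.fs.ls("/usr/krishna/sample"):
    if f.name.startswith("test"):
        dbutils.fs.mv(f.path, "dbfs:/user/krishna/target/" + f.name)  # placeholder dest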