Detailed logs for R process
We have a user notebook in R that reliably crashes the driver. Are detailed logs from the R process stored somewhere on drivers/workers?
How are index columns handled in Koalas? What about multi-level indices?
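For context: Koalas attaches an index to every DataFrame (a generated default index if you don't set one), and you can promote columns to the index much as in pandas. A minimal sketch, assuming the databricks.koalas package is available; the column names are made up for illustration:

    import databricks.koalas as ks

    # Small Koalas DataFrame (illustrative column names)
    kdf = ks.DataFrame({"region": ["us", "us", "eu"],
                        "store": [1, 2, 1],
                        "sales": [100, 200, 150]})

    # A single index column, as in pandas
    kdf_single = kdf.set_index("region")

    # A multi-level index: pass a list of columns
    kdf_multi = kdf.set_index(["region", "store"])
    print(kdf_multi.index.names)  # ['region', 'store']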
I know that I can run DESCRIBE DETAIL on a table to get details for the current version of a Delta table. If I want to get these same details for a previous version, how can I do that?
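One approach, sketched below: DESCRIBE DETAIL only reports the current state, but you can combine DESCRIBE HISTORY with Delta time travel to inspect a previous version. The table name and version number are placeholders:

    # List all versions of the table with timestamps and operations
    spark.sql("DESCRIBE HISTORY my_db.my_table").show(truncate=False)

    # Query the table as it existed at a version taken from the history output
    old_df = spark.sql("SELECT * FROM my_db.my_table VERSION AS OF 5")
    old_df.printSchema()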
I have a function within a module in my Git repo. I want to import it into my Databricks notebook. How can I do that?
Databricks Repos allows you to sync your work in Databricks with a remote Git repository. This makes it easier to implement development best practices. Databricks supports integrations with GitHub, Bitbucket, and GitLab. Using Repos you can bring you...
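As a rough sketch, importing a function from a module in a repo usually looks like the following; the repo path, module name, and function name are all hypothetical and should be adjusted to your workspace:

    import sys

    # Make the repo checkout importable (path is hypothetical)
    sys.path.append("/Workspace/Repos/<user>/<repo>")

    # Assumes my_module.py at the repo root defines my_function
    from my_module import my_function

    result = my_function()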
I know I can disable Databricks PAT tokens from being used, but what about AAD tokens?
Does anyone know how to debug notebook code using IntelliJ, or is there any other tool for it? For example, debugging on a Spark cluster using:

    export SPARK_SUBMIT_OPTS=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005

Are there any similar sett...
I have a dataframe with a lot of columns (20 or so) and 8 rows. Part of the output is cut off; I can scroll to the right to see the rest of the columns, but I was wondering if it is possible to somehow "zoom out" of the table so I can se...
How to pass arguments and variables to a Databricks Python activity from Azure Data Factory?
Try importing argv from sys. Then, if you have the parameter added correctly in Data Factory, you can read it in your Python script as argv[1] (index 0 is the file path).
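A minimal sketch of that pattern; the script name and parameter are illustrative:

    # my_script.py, run as a Databricks Python activity from Azure Data Factory
    from sys import argv

    # argv[0] is the script's file path; ADF parameters start at argv[1]
    input_path = argv[1]
    print(f"Received parameter: {input_path}")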
What is a Databricks database? A Databricks database is a collection of tables. A Databricks table is a collection of structured data. You can cache, filter, and perform any operations supported by Apache Spark DataFrames on Databricks tables. You can q...
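For example, a table can be read as a DataFrame and worked with using the usual Spark operations; the table and column names below are placeholders:

    # Read a table into a DataFrame (names are placeholders)
    df = spark.table("my_database.my_table")

    # Cache it and apply ordinary DataFrame operations
    df.cache()
    df.filter(df["amount"] > 100).select("id", "amount").show()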
You can implement custom algorithms for GraphFrames using either the Scala/Java or Python APIs. GraphFrames provides some structures to simplify writing graph algorithms; the three primary options are as follows, with the best option first: Pregel: This i...
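As an illustration of these DataFrame-based building blocks, here is a small aggregateMessages sketch in Python; the graph data is made up, and it assumes the graphframes package is attached to the cluster:

    from graphframes import GraphFrame
    from graphframes.lib import AggregateMessages as AM
    from pyspark.sql import functions as F

    # Toy graph: vertices need an "id" column, edges need "src" and "dst"
    vertices = spark.createDataFrame(
        [("a", 34), ("b", 36), ("c", 30)], ["id", "age"])
    edges = spark.createDataFrame(
        [("a", "b"), ("b", "c"), ("c", "a")], ["src", "dst"])
    g = GraphFrame(vertices, edges)

    # Each edge sends the source vertex's age to its destination;
    # each vertex then sums the ages it received from its in-neighbors
    agg = g.aggregateMessages(
        F.sum(AM.msg).alias("summed_ages"),
        sendToDst=AM.src["age"])
    agg.show()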
I'm trying to run multiple spark submits on a Databricks job cluster but can't figure out how. Any tips?
I am trying to read with the following syntax:

    val df = spark.read
      .format("jdbc")
      .option("url", "<url>")
      .option("query", "SELECT * FROM oracle_test_table")
      .option("user", "<user>")
      .option("password", "<password>")
      .option("driver", "oracle...
https://kb.databricks.com/data-sources/query-option-not-work-oracle.html#problem-apache-spark-jdbc-datasource-query-option-doesnt-work-for-oracle-database
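Per that KB article, the query option can fail against Oracle. One commonly suggested workaround is to pass the query as a parenthesized subquery through the dbtable option instead, sketched here in Python; the URL, credentials, and table name are placeholders:

    # Workaround sketch: wrap the query as a subquery via "dbtable"
    df = (spark.read
          .format("jdbc")
          .option("url", "jdbc:oracle:thin:@//<host>:<port>/<service>")
          .option("dbtable", "(SELECT * FROM oracle_test_table) t")
          .option("user", "<user>")
          .option("password", "<password>")
          .option("driver", "oracle.jdbc.driver.OracleDriver")
          .load())
    df.show()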
Databricks starts to charge for DBUs once the virtual machine is up and the Spark context is initialized, which may include a portion of startup costs, but not all. Init scripts are loaded before the Spark context is initialized, which therefore wou...
Databricks pricing question: do I consume more DBUs when I attach more notebooks to the same cluster?
Hey PJ, the short answer is no: attaching more notebooks does not increase the price of the cluster, which is based solely on compute power. Attaching more notebooks to the cluster is a value-add of the platform. If you're interested, you can find some ...
Things I want admins to know!