VPC status is BROKEN
Hi All, when I check cloud resources, the VPC status is BROKEN. However, the cluster is running without any problems. What is the BROKEN state? And how can I get it healthy? Regards.
Hey all, we're trying to analyze the data in a 23 GB JSON file. We're using the basic starter cluster: one node, 2 CPUs x 8 GB. We can read the JSON file into a Spark DataFrame and print out the schema, but if we try and do any operations that won't c...
Hi @Jay Allen, you can refer to the cluster sizing doc.
Hi there. I can get Databricks cost (DBUs) from usage_log. But how do I get AWS cost information? I want to show both Databricks and AWS cost in my Databricks SQL Dashboard.
Hi @Kaniz Fatma @Prabakar Ammeappin, sorry for the late reply. Every answer was helpful for me! My problem has been solved. Thanks!
How do I configure plot options through the `display` function as code (not through the interactive UI)? I'm asking because when a notebook is scheduled to run as a Databricks job, there is no way to configure the plot type interactively.
I want to create a personal access token for a service principal so that I can use it in the databricks-connect configure command in an automated build. I followed the instructions from here: https://docs.data...
@Vikas B https://docs.databricks.com/dev-tools/api/latest/scim/scim-sp.html#scim-api-20-serviceprincipals. Let me know if this helps.
I am facing an issue while accessing a Python DataFrame in the Scala shell, and vice versa. I am getting a "variable not defined" error.
The context is not shared between Scala and Python, so you won't be able to access the same variables directly. However, you can use createOrReplaceTempView to create a temporary view of your DataFrame and read it in the other language with read_df = s...
I need to create a dataset that is dependent on multiple streaming datasets. However, when I attempt to create the new single stream I am getting an error: "Append output mode not supported when there are streaming aggregations on streaming DataFrame..."
Hi Kaniz/Jose, I was able to resolve the issue. I used 'union all' to avoid aggregation on the stream and have it continue to write to the table in append mode. This issue can be closed.
I need to update most of the settings that are visible on the Admin Console UI by using Terraform. In another post in this forum I saw that I can use `custom_config` in a `databricks_workspace_conf` resource to achieve that but the options seem limit...
Ok, looks like I can inspect the network traffic and see which flags are sent to the endpoint. I tried that and it worked.
In my current company, we have a Hadoop cluster in which we extensively use conda environments and conda-packs. What are the requirements for Databricks to work with this setup?
Unable to create Delta tables in AWS Glue catalog. The project requires that we integrate with the AWS Glue catalog. We would like to be able to create tables in Delta format in the Glue catalog. To test this functionality, we did the following: Created th...
When I attempt to save my username and token for Github I receive a “Failed to Save. Try again.” message. I’ve used Azure DevOps with another DB workspace and never had an issue saving my PAT. I’ve tried using both my GitHub username and email wi...
Quick update that I’ve now attempted to save my PAT for Github using two different computers and browser types (Safari and Chrome) and all have given the same “Failed to save. Please try again” message. Thankfully I can still clone from public repo...
I am trying to write data from databricks to an S3 bucket but when I submit the code, it runs and runs and does not make any progress. I am not getting any errors and the logs don't seem to recognize I've submitted anything. The cluster also looks un...
Can you please check the driver's log4j output to see what is happening?
The problem: I've observed erratic behavior when I add a comment containing a trailing escape character (\) to a CREATE TABLE statement. For example, this query returns data (though it shouldn't): CREATE TABLE example_table SELECT 1 -- This comment has ...
@Graham Carman we're tracking this as a defect / issue on our side. For now, please don't include the escape character in comments.
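Until the defect is fixed, one possible workaround is to strip trailing backslashes from single-line comments before submitting a query. A sketch, assuming queries are built as strings in Python (the function name and regex are my own, not a Databricks API):

```python
import re


def strip_trailing_comment_escape(sql: str) -> str:
    """Remove backslashes at the end of `--` comments.

    A trailing backslash can make the comment "swallow" the following
    line, changing the meaning of the statement.
    """
    return re.sub(r"(--[^\n]*?)\\+(\n|$)", r"\1\2", sql)
```

Backslashes in the middle of a comment are left alone; only runs of them immediately before a newline (or end of string) are removed.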
What is the best way to parallelize fbprophet?
I am using managed Databricks on GCP. I have 11 TB of data with 5B rows. The data from the source is not partitioned. I'm having trouble loading the data into a DataFrame and doing further data processing. I have tried a couple of executor configurations; none of t...