Data Engineering

Forum Posts

Avinash_Narala (New Contributor III)
  • 1072 Views
  • 3 replies
  • 0 kudos

Bootstrap Timeout during cluster start

Hi, when I start a cluster, I get the error below: Bootstrap Timeout: [id: InstanceId(i-05bbcfbb30027ce2c), status: INSTANCE_INITIALIZING, workerEnvId: WorkerEnvId(workerenv-2247916891060257-01b40fb4-3eb1-4a26-99b4-30d6aa0bfe83), lastStatusChangeTime:...

Latest Reply: dhtubong (New Contributor II)
  • 0 kudos

Hello - if you're using DB Community Edition and having a Bootstrap Timeout issue, the resolution below may help. Error: Bootstrap Timeout: Node daemon ping timeout in 780000 ms for instance i-00f21ee2d3ca61424 @ 10.172.245.1. Please check network conne...

Dick1960 (New Contributor II)
  • 1209 Views
  • 3 replies
  • 2 kudos

How to know the domain of my Databricks workspace

Hi, I'm trying to open a support case and it asks me for my domain. In the browser I have: https://adb-27xxxx4341636xxx.5.azuredatabricks.net. Can you help me?

Latest Reply: Tharun-Kumar (Honored Contributor II)
  • 2 kudos

@Dick1960 The numeric value in the workspace URL is the domain name. In your case, it would be 27xxxx4341636xxx.
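
A small sketch of pulling that value out of the URL (the digits below are placeholders for the masked value):

from urllib.parse import urlparse

url = "https://adb-2712345434163699.5.azuredatabricks.net"  # placeholder digits
host = urlparse(url).netloc
workspace_id = host.split(".")[0].removeprefix("adb-")  # Python 3.9+
print(workspace_id)  # -> 2712345434163699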

Brad (Contributor)
  • 325 Views
  • 2 replies
  • 0 kudos

WAL for structured streaming

Hi, I cannot find a deep dive on this in the latest links. My understanding so far: previously, SS (Structured Streaming) copied and cached the data in the WAL. After a certain version, with "retrieve less", SS doesn't copy the data to the WAL any more and only stores ...

Latest Reply: Kaniz (Community Manager)
  • 0 kudos

Your understanding is partially correct. Let’s delve into the details of Structured Streaming in Apache Spark. Write-Ahead Log (WAL): In the past, Structured Streaming used to copy and cache data in the Write-Ahead Log (WAL). The WAL served as a r...
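
For orientation, a minimal sketch of where that log lives in current Structured Streaming (paths are placeholders; `spark` is the ambient Databricks session):

# The offset write-ahead log sits under the checkpoint location (its
# 'offsets/' subdirectory), recording planned offsets per batch so the
# query can replay after a restart.
df = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

query = (df.writeStream
           .format("delta")
           .option("checkpointLocation", "/tmp/checkpoints/rate_demo")
           .start("/tmp/tables/rate_demo"))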

lilo_z (New Contributor III)
  • 698 Views
  • 3 replies
  • 0 kudos

Resolved! Databricks Asset Bundles - job-specific "run_as" user/service_principal

Was wondering if this was possible, since a use case came up in my team. Would it be possible to use a different service principal for a single job than what is specified for that target environment? For example: bundle: name: hello-bundle resource...

Latest Reply: lilo_z (New Contributor III)
  • 0 kudos

Found a working solution, posting it here for anyone else hitting the same issue - the trick was to redefine "resources" under the target you want to make an exception for: bundle: name: hello_bundle include: - resources/*.yml targets: dev: w...
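
For reference, a hedged sketch of that shape (bundle, job, and principal names are placeholders; the per-job override lives under targets.<target>.resources):

# databricks.yml - names and IDs are placeholders.
bundle:
  name: hello_bundle

include:
  - resources/*.yml

targets:
  dev:
    resources:
      jobs:
        hello_job:
          run_as:
            service_principal_name: "00000000-0000-0000-0000-000000000000"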

dbx-user7354 (New Contributor III)
  • 766 Views
  • 3 replies
  • 3 kudos

Create a Job via SDK with a JobSettings Object

Hey, I want to create a Job via the Python SDK with a JobSettings object. import os import time from databricks.sdk import WorkspaceClient from databricks.sdk.service import jobs from databricks.sdk.service.jobs import JobSettings w = WorkspaceClien...

Latest Reply: nenetto (New Contributor II)
  • 3 kudos

I just faced the same problem. The issue is that when you do JobSettings.as_dict(), the settings are parsed to a dict where all the values are also parsed recursively. When you pass the parameters as **params, the create method again tries to parse...
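
A minimal sketch of that workaround (cluster ID and notebook path are placeholders; unpacking __dict__ keeps nested values as SDK dataclasses, but verify against your SDK version):

from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

w = WorkspaceClient()

settings = jobs.JobSettings(
    name="jobsettings-example",
    tasks=[
        jobs.Task(
            task_key="main",
            existing_cluster_id="1234-567890-abcde123",  # placeholder
            notebook_task=jobs.NotebookTask(notebook_path="/Workspace/Users/me@example.com/demo"),
        )
    ],
)

# Unpack the dataclass fields directly instead of settings.as_dict():
# as_dict() recursively converts nested objects to plain dicts, which
# jobs.create() would then try to serialize a second time and fail.
job = w.jobs.create(**settings.__dict__)
print(job.job_id)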

noname123 (New Contributor III)
  • 589 Views
  • 2 replies
  • 0 kudos

Resolved! Delta table version protocol

I do: df.write.format("delta").mode("append").partitionBy("timestamp").option("mergeSchema", "true").save(destination). If the table doesn't exist, it creates a new table with "minReaderVersion": 3, "minWriterVersion": 7. Yesterday it was creating the table with "min...

Latest Reply: noname123 (New Contributor III)
  • 0 kudos

Thanks for the help. The issue was caused by the "Auto-Enable Deletion Vectors" setting.
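
For anyone else hitting this, a sketch of pinning the older protocol for new tables (the conf name assumes Databricks Runtime; df and destination are the objects from the question):

# Deletion vectors require minReaderVersion 3 / minWriterVersion 7, so
# disabling the session default keeps newly created tables on the older
# protocol.
spark.conf.set("spark.databricks.delta.properties.defaults.enableDeletionVectors", "false")

(df.write.format("delta")
   .mode("append")
   .partitionBy("timestamp")
   .option("mergeSchema", "true")
   .save(destination))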

nihar_ghude (New Contributor II)
  • 701 Views
  • 2 replies
  • 0 kudos

OSError: [Errno 107] Transport endpoint is not connected

Hi, I am facing this error when performing a write operation in foreach() on a DataFrame. The piece of code was working fine for over 3 months but started failing last week. To give some context, I have a DataFrame extract_df which contains 2 colum...

Labels: Data Engineering, ADLS, azure, python, spark
Latest Reply: Kaniz (Community Manager)
  • 0 kudos

Hi @nihar_ghude, Instead of using foreach(), consider using foreachBatch(). This method allows you to apply custom logic on the output of each micro-batch, which can help address parallelism issues. Unlike foreach(), which operates on individual rows...
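
A minimal sketch of the foreachBatch() shape (assumes the source is read as a stream; source, sink, and checkpoint paths are placeholders):

stream_df = spark.readStream.format("delta").load("/mnt/source/extracts")

def write_batch(batch_df, batch_id):
    # Called once per micro-batch; batch_df supports the full DataFrame API,
    # so the write itself is distributed across the cluster.
    batch_df.write.format("delta").mode("append").save("/mnt/target/extracts")

(stream_df.writeStream
    .foreachBatch(write_batch)
    .option("checkpointLocation", "/mnt/checkpoints/extracts")
    .start())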

oussValrho (New Contributor)
  • 663 Views
  • 1 reply
  • 0 kudos

Cannot resolve due to data type mismatch: incompatible types ("STRING" and "ARRAY<STRING>")

Hey, I have had this error for a while: Cannot resolve "(needed_skill_id = needed_skill_id)" due to data type mismatch: the left and right operands of the binary operator have incompatible types ("STRING" and "ARRAY<STRING>"). SQLSTATE: 42K09; and these ...

Latest Reply: Kaniz (Community Manager)
  • 0 kudos

Hi @oussValrho, The error message you’re encountering indicates a data type mismatch in your SQL query. Specifically, it states that the left and right operands of the binary operator have incompatible types: a STRING and an ARRAY<STRING>. Let’s bre...
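
A sketch of that fix using array_contains() (the toy DataFrame and column names are illustrative, assuming one side is really an ARRAY<STRING>):

from pyspark.sql import functions as F

# '=' requires both operands to have the same type; array_contains() is the
# membership test for a string against an ARRAY<STRING> column.
df = spark.createDataFrame(
    [("python", ["python", "sql"])],
    ["needed_skill_id", "needed_skill_ids"],
)
matched = df.filter(F.array_contains(F.col("needed_skill_ids"), F.col("needed_skill_id")))
matched.show()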

Lightyagami (New Contributor)
  • 1881 Views
  • 1 reply
  • 0 kudos

Save workbook with macros

Hi, is there any way to save a workbook in Databricks without losing the macros?

Labels: Data Engineering, Databricks, pyspark
Latest Reply: Kaniz (Community Manager)
  • 0 kudos

Hi @Lightyagami, When working with Databricks and dealing with macros, there are a few approaches you can consider to save a workbook without losing the macros: Export to Excel with Macros Enabled: You can generate an Excel file directly from PyS...
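
One concrete sketch of that first approach using openpyxl (the paths are placeholders, and the workbook must be reachable from the driver, e.g. under /dbfs or a Volume):

from openpyxl import load_workbook

# keep_vba=True preserves the VBA project so the macros survive the
# round-trip; keep the .xlsm extension when saving.
wb = load_workbook("/dbfs/tmp/report.xlsm", keep_vba=True)
wb.active["A1"] = "updated from Databricks"  # illustrative edit
wb.save("/dbfs/tmp/report_out.xlsm")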

philipkd (New Contributor III)
  • 413 Views
  • 1 reply
  • 0 kudos

Cannot get past Query Data tutorial for Azure Databricks

I created a new workspace on Azure Databricks, and I can't get past this first step in the tutorial: DROP TABLE IF EXISTS diamonds; CREATE TABLE diamonds USING CSV OPTIONS (path "/databricks-datasets/Rdatasets/data-001/csv/ggplot2/diamonds.csv", hea...

Latest Reply: Kaniz (Community Manager)
  • 0 kudos

Hi @philipkd, It appears you’ve encountered an issue while creating a table in Azure Databricks using the Unity Catalog. Let’s address this step by step: URI Format: The error message indicates that the URI for your CSV file is missing a cloud f...
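
For example, a hedged sketch with a fully qualified URI (the storage account and container are placeholders, and an external location covering the path is assumed to exist in Unity Catalog):

-- Placeholder abfss URI; adjust to a path your external location covers.
DROP TABLE IF EXISTS diamonds;
CREATE TABLE diamonds
USING CSV
OPTIONS (
  path "abfss://mycontainer@mystorageacct.dfs.core.windows.net/datasets/diamonds.csv",
  header "true"
);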

alxsbn (New Contributor III)
  • 659 Views
  • 1 reply
  • 0 kudos

Resolved! Compute pool and AWS instance profiles

Hi everyone, we're looking at using the compute pool feature. Right now we mostly rely on all-purpose and job compute, and on these two we use instance profiles to let the clusters access our S3 buckets and more. We don't see anything related to insta...

Latest Reply: Kaniz (Community Manager)
  • 0 kudos

Hi @alxsbn, Let’s delve into the details of compute pools and instance profiles. Compute Pools: Compute pools in Databricks allow you to manage and allocate compute resources efficiently. They provide a way to organize and share compute resource...
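
A hedged sketch of the usual pattern, attaching the instance profile to the cluster that draws from the pool rather than to the pool itself (pool ID and ARN are placeholders; verify the behavior against your workspace):

from databricks.sdk import WorkspaceClient
from databricks.sdk.service import compute

w = WorkspaceClient()

# The pool supplies the instances; the instance profile is set on the
# cluster spec that consumes the pool.
cluster = w.clusters.create(
    cluster_name="pool-backed-cluster",
    spark_version="14.3.x-scala2.12",
    num_workers=2,
    instance_pool_id="pool-1234-abcdef",
    aws_attributes=compute.AwsAttributes(
        instance_profile_arn="arn:aws:iam::123456789012:instance-profile/my-profile"
    ),
)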

BjarkeM (New Contributor II)
  • 1504 Views
  • 6 replies
  • 0 kudos

Schema migration of production delta tables

Goal: We would like to be in control of schema migrations of Delta tables in all dev and production environments, and it must be automatically deployed. I anticipated this to be a common problem with a well-known standard solution. But unfortunately, I ...

Latest Reply: zerobugs (New Contributor II)
  • 0 kudos

Hello, so does this mean that it's necessary to migrate away from hive_metastore to unity_catalog in order to be able to use schema migrations?

GOW (New Contributor II)
  • 213 Views
  • 2 replies
  • 1 kudos

Databricks to S3

I am new to data engineering in Databricks and need some guidance on getting data from Databricks to S3. Can I get an example job or approach for this?
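
One common shape, as a hedged sketch (the sample table and bucket path are placeholders; assumes the cluster already has S3 access, e.g. via an instance profile):

# Read a table and write it out to S3 as Parquet.
df = spark.table("samples.nyctaxi.trips")
(df.write
   .format("parquet")
   .mode("overwrite")
   .save("s3://my-bucket/exports/trips/"))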

Latest Reply: GOW (New Contributor II)
  • 1 kudos

Thank you for the reply. Can I apply this to dbt, or use a dbt macro to unload the data? So dbt models running in Databricks?

exilon (New Contributor)
  • 427 Views
  • 1 reply
  • 0 kudos

DLT streaming with sliding window missing last windows interval

Hello, I have a DLT pipeline where I want to calculate the rolling average of a column for the last 24 hours, updated every hour. I'm using the code below to achieve this: @dlt.table() def gold(): df = dlt.read_stream("silver_table")...

Labels: Data Engineering, dlt, spark, streaming, window
Latest Reply: Kaniz (Community Manager)
  • 0 kudos

Hi @exilon, It seems like you’re trying to calculate a rolling average for a specific time window in your DLT pipeline. Let’s address the issue you’re facing. The behavior you’re observing is due to the way the window specification is defined. Whe...
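
For reference, a sketch of the windowed aggregation shape (event-time and value column names are illustrative):

import dlt
from pyspark.sql import functions as F

# 24-hour window sliding every hour. The newest, still-open window only
# emits once the watermark passes its end, which is why the most recent
# interval can look like it is missing.
@dlt.table()
def gold():
    df = dlt.read_stream("silver_table")
    return (df
            .withWatermark("event_time", "1 hour")
            .groupBy(F.window("event_time", "24 hours", "1 hour"))
            .agg(F.avg("value").alias("rolling_avg")))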

dbph (New Contributor)
  • 544 Views
  • 1 reply
  • 0 kudos

Databricks asset bundles error "failed to instantiate provider"

Hi all, I'm trying to deploy with Databricks Asset Bundles. When running bundle deploy, the process fails with the following error message: failed execution pid=25092 exit_code=1 error="terraform apply: exit status 1\n\nError: failed to read schema for dat...

Latest Reply: Kaniz (Community Manager)
  • 0 kudos

Hi @dbph, It seems you’re encountering an issue with deploying Databricks Asset Bundles. Let’s troubleshoot this step by step. Terraform Provider Issue: The error message indicates a problem with the Terraform provider for Databricks. Specifical...
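
Two hedged first steps that often clear this class of error (the cache path is the bundle default; the log flag assumes a recent Databricks CLI):

# Remove the bundle's cached Terraform state and providers, then redeploy
# with verbose logging to surface the underlying provider error.
rm -rf .databricks/bundle
databricks bundle deploy --log-level debug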
