Data Engineering
Forum Posts

kickbuttowski (New Contributor II)
  • 533 Views
  • 1 reply
  • 1 kudos

Resolved! Issue loading JSON files in the same container with different schemas

Could you tell me whether this scenario will work or not? Scenario: I have a container holding two different JSON files with different schemas, which will be arriving in a streaming manner. I am using Auto Loader here to load the files incrementall...

Latest Reply: MichTalebzadeh (Contributor)

Short answer is no. A single Spark Auto Loader stream typically cannot handle JSON files in a container with two different schemas by default. Auto Loader relies on schema inference to determine the data structure. It analyses a sample of data from files ass...

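A minimal sketch of the usual workaround, assuming the two feeds can be told apart by a filename pattern: run one Auto Loader stream per schema, so each stream infers a single schema. The glob patterns, container path, and table names below are hypothetical.

    # One Auto Loader stream per JSON schema; pathGlobFilter restricts each
    # stream to its own family of files so schema inference stays clean.
    def load_feed(glob_pattern, schema_path, checkpoint_path, target_table):
        return (spark.readStream
            .format("cloudFiles")
            .option("cloudFiles.format", "json")
            .option("pathGlobFilter", glob_pattern)            # e.g. "orders_*.json"
            .option("cloudFiles.schemaLocation", schema_path)  # per-feed schema tracking
            .load("abfss://container@account.dfs.core.windows.net/landing/")
            .writeStream
            .option("checkpointLocation", checkpoint_path)     # per-feed checkpoint
            .trigger(availableNow=True)
            .toTable(target_table))

    load_feed("orders_*.json", "/tmp/schemas/orders",
              "/tmp/checkpoints/orders", "bronze.orders")
    load_feed("events_*.json", "/tmp/schemas/events",
              "/tmp/checkpoints/events", "bronze.events")
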
Sans1 (New Contributor II)
  • 489 Views
  • 2 replies
  • 1 kudos

Delta table vs dynamic views

Hi, my current design is to host the gold layer as dynamic views with masking. I will have a couple of use cases that need the views to be queried with filters. Does this provide performance equal to tables (which have data skipping based on transactio...

Latest Reply: Ajay-Pandey (Esteemed Contributor III)

Hi @Sans1, have you only used masking, or have you used any row- or column-level access control? If it's only masking, then you should go with a Delta table; if it's row- or column-level access control, then you should prefer dynamic views.

1 More Reply
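
If the row/column-level route is chosen, a minimal sketch of a dynamic view, assuming Unity Catalog and hypothetical table, column, and group names (is_account_group_member is the Databricks SQL predicate for this):

    # A dynamic view that masks a column for users outside one group and
    # row-filters by region for everyone outside another.
    spark.sql("""
        CREATE OR REPLACE VIEW gold.customers_v AS
        SELECT
          customer_id,
          CASE WHEN is_account_group_member('pii_readers')
               THEN email ELSE '***masked***' END AS email,
          region
        FROM silver.customers
        WHERE is_account_group_member('admins') OR region = 'EMEA'
    """)
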
Sans (New Contributor III)
  • 1000 Views
  • 7 replies
  • 3 kudos

Unable to create new compute in Databricks Community Edition

Hi Team, I am unable to create compute in Databricks Community Edition due to the error below. Please advise. Bootstrap Timeout: Node daemon ping timeout in 780000 ms for instance i-0ab6798b2c762fb25 @ 10.172.246.217. Please check network connectivity between the ...

Latest Reply: Sans (New Contributor III)

This issue was resolved for some time but has been recurring again since yesterday. Please advise.

6 More Replies
colinsorensen (New Contributor III)
  • 606 Views
  • 1 reply
  • 0 kudos

Upgrading to UC: "Parent external location for path `s3://____` does not exist"... but it does?

Topic. I am trying to upgrade some external tables in our Hive metastore to Unity Catalog. I used the upgrade functionality in the UI, as well as its provided SQL: CREATE TABLE `unity_catalog`.`default`.`table` LIKE `hive_metastore`.`schema`.`tab...

Latest Reply: colinsorensen (New Contributor III)

When I have tried to edit the location to include the dbfs component (CREATE TABLE `unity_catalog`.`default`.`table` LIKE `hive_metastore`.`schema`.`table` LOCATION 'dbfs:/mnt/foobarbaz'), I get a new error: "[UPGRADE_NOT_SUPPORTED.UNSUPPORTED_FILE_SCHEME]...

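If the first error really means that no Unity Catalog external location covers the S3 path, a minimal sketch of registering one before retrying the upgrade (the location, URL, and credential names are hypothetical):

    # Register an external location that is a parent of the table path,
    # then retry CREATE TABLE ... LIKE against the original s3:// location.
    spark.sql("""
        CREATE EXTERNAL LOCATION IF NOT EXISTS my_tables_loc
        URL 's3://my-bucket/warehouse/'
        WITH (STORAGE CREDENTIAL my_storage_credential)
    """)
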
colinsorensen (New Contributor III)
  • 1533 Views
  • 3 replies
  • 1 kudos

"All trials either failed or did not return results to hyperopt." AutoML is not working on a fairly simple classification problem.

First the exploratory notebook fails, though when I run it manually it works just fine. After that, the AutoML notebook eventually fails without completing any trials. I get this: Tried to attach usage logger `pyspark.databricks.pandas.usage_logger`, ...

Latest Reply: colinsorensen (New Contributor III)

Ultimately this problem magically resolved itself. I think I updated the cluster or something.

2 More Replies
Avinash_Narala (New Contributor III)
  • 1257 Views
  • 3 replies
  • 0 kudos

Bootstrap Timeout during cluster start

Hi, when I start a cluster, I am getting the error below: Bootstrap Timeout: [id: InstanceId(i-05bbcfbb30027ce2c), status: INSTANCE_INITIALIZING, workerEnvId: WorkerEnvId(workerenv-2247916891060257-01b40fb4-3eb1-4a26-99b4-30d6aa0bfe83), lastStatusChangeTime:...

Latest Reply: dhtubong (New Contributor II)

Hello - if you're using DB Community Edition and having the Bootstrap Timeout issue, then the resolution below may help. Error: Bootstrap Timeout: Node daemon ping timeout in 780000 ms for instance i-00f21ee2d3ca61424 @ 10.172.245.1. Please check network conne...

2 More Replies
Dick1960 (New Contributor II)
  • 1332 Views
  • 3 replies
  • 2 kudos

How to know the domain of my Databricks workspace

Hi, I'm trying to open a support case and it asks me for my domain. In the browser I have: https://adb-27xxxx4341636xxx.5.azuredatabricks.net. Can you help me?

Latest Reply: Tharun-Kumar (Honored Contributor II)

@Dick1960 The numeric value you have in the workspace URL is the domain name. In your case, it would be 27xxxx4341636xxx.

2 More Replies
Brad (Contributor)
  • 377 Views
  • 2 replies
  • 0 kudos

WAL for structured streaming

Hi, I cannot find a deep dive on this in the latest links. So far the understanding is: previously, SS (Structured Streaming) copied and cached the data in the WAL. After a version, with retrieve less, SS doesn't copy the data to the WAL any more, and only stores ...

Latest Reply: Kaniz (Community Manager)

Your understanding is partially correct. Let’s delve into the details of Structured Streaming in Apache Spark. Write-Ahead Log (WAL): in the past, Structured Streaming used to copy and cache data in the Write-Ahead Log (WAL). The WAL served as a r...

1 More Reply
lilo_z (New Contributor III)
  • 929 Views
  • 3 replies
  • 0 kudos

Resolved! Databricks Asset Bundles - job-specific "run_as" user/service principal

Was wondering if this was possible, since a use case came up in my team. Would it be possible to use a different service principal for a single job than what is specified for that target environment? For example: bundle: name: hello-bundle resource...

Latest Reply: lilo_z (New Contributor III)

Found a working solution, posting it here for anyone else hitting the same issue - the trick was to redefine "resources" under the target you want to make an exception for: bundle: name: hello_bundle include: - resources/*.yml targets: dev: w...

2 More Replies
dbx-user7354 (New Contributor III)
  • 935 Views
  • 3 replies
  • 3 kudos

Create a Job via SDK with a JobSettings Object

Hey, I want to create a job via the Python SDK with a JobSettings object. import os import time from databricks.sdk import WorkspaceClient from databricks.sdk.service import jobs from databricks.sdk.service.jobs import JobSettings w = WorkspaceClien...

Latest Reply: nenetto (New Contributor II)

I just faced the same problem. The issue is that when you do JobSettings.as_dict(), the settings are parsed to a dict where all the values are also parsed recursively. When you pass the parameters as **params, the create method again tries to parse...

2 More Replies
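
A minimal sketch of that workaround: keep the nested objects as SDK dataclasses and hand jobs.create() the fields directly, rather than expanding JobSettings.as_dict() with **. The notebook path and cluster ID below are hypothetical.

    from databricks.sdk import WorkspaceClient
    from databricks.sdk.service import jobs

    w = WorkspaceClient()

    settings = jobs.JobSettings(
        name="demo-job",
        tasks=[
            jobs.Task(
                task_key="main",
                notebook_task=jobs.NotebookTask(notebook_path="/Workspace/Users/me/demo"),
                existing_cluster_id="1234-567890-abcde123",
            )
        ],
    )

    # Passing the dataclass fields keeps them unserialised; create() then
    # does its own conversion exactly once.
    created = w.jobs.create(name=settings.name, tasks=settings.tasks)
    print(created.job_id)
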
noname123 (New Contributor III)
  • 751 Views
  • 2 replies
  • 0 kudos

Resolved! Delta table version protocol

I do: df.write.format("delta").mode("append").partitionBy("timestamp").option("mergeSchema", "true").save(destination). If the table doesn't exist, it creates a new table with "minReaderVersion":3,"minWriterVersion":7. Yesterday it was creating the table with "min...

Latest Reply: noname123 (New Contributor III)

Thanks for the help. The issue was caused by the "Auto-Enable Deletion Vectors" setting.

1 More Reply
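
For anyone hitting the same protocol bump: deletion vectors require reader version 3 and writer version 7, so newly created tables jump to those versions while the setting is on. A minimal sketch of opting a session's new tables out, assuming the Delta default-table-property config prefix (df and destination are as in the question):

    # Default new Delta tables in this session to no deletion vectors, so
    # they keep the older reader/writer protocol versions.
    spark.conf.set(
        "spark.databricks.delta.properties.defaults.enableDeletionVectors",
        "false")

    df.write.format("delta") \
        .mode("append") \
        .partitionBy("timestamp") \
        .option("mergeSchema", "true") \
        .save(destination)
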
nihar_ghude (New Contributor II)
  • 897 Views
  • 2 replies
  • 0 kudos

OSError: [Errno 107] Transport endpoint is not connected

Hi, I am facing this error when performing a write operation in foreach() on a DataFrame. The piece of code was working fine for over 3 months but started failing since last week. To give some context, I have a DataFrame extract_df which contains 2 colum...

[screenshot attached: nihar_ghude_0-1710175215407.png]
Labels: Data Engineering, ADLS, azure, python, spark
Latest Reply: Kaniz (Community Manager)

Hi @nihar_ghude, instead of using foreach(), consider using foreachBatch(). This method allows you to apply custom logic on the output of each micro-batch, which can help address parallelism issues. Unlike foreach(), which operates on individual rows...

1 More Reply
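
A minimal sketch of the foreachBatch() pattern suggested above, assuming extract_df is a streaming DataFrame and using hypothetical ADLS and checkpoint paths:

    # foreachBatch hands each micro-batch over as a regular DataFrame, so
    # the ordinary batch writers (and their retry semantics) apply.
    def write_batch(batch_df, batch_id):
        (batch_df.write
            .mode("append")
            .parquet("abfss://container@account.dfs.core.windows.net/output/"))

    (extract_df.writeStream
        .foreachBatch(write_batch)
        .option("checkpointLocation", "/tmp/checkpoints/extract")
        .start())
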
oussValrho (New Contributor)
  • 906 Views
  • 1 reply
  • 0 kudos

Cannot resolve due to data type mismatch: incompatible types ("STRING" and "ARRAY<STRING>")

Hey, I have had this error for a while: Cannot resolve "(needed_skill_id = needed_skill_id)" due to data type mismatch: the left and right operands of the binary operator have incompatible types ("STRING" and "ARRAY<STRING>"). SQLSTATE: 42K09; and these ...

Latest Reply: Kaniz (Community Manager)

Hi @oussValrho, the error message you’re encountering indicates a data type mismatch in your SQL query. Specifically, it states that the left and right operands of the binary operator have incompatible types: a STRING and an ARRAY<STRING>. Let’s bre...

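A minimal sketch of two common fixes, assuming one join operand is ARRAY<STRING> and the other a plain STRING (the DataFrame and column names are hypothetical):

    from pyspark.sql import functions as F

    # Fix 1: explode the array side so the equality compares two strings.
    exploded = profiles.withColumn("skill_id", F.explode("needed_skill_id"))
    joined = exploded.join(
        skills, exploded["skill_id"] == skills["needed_skill_id"])

    # Fix 2: keep the array and test membership instead of equality.
    filtered = profiles.where(F.array_contains("needed_skill_id", "SQL"))
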
Lightyagami (New Contributor)
  • 2114 Views
  • 1 reply
  • 0 kudos

Save workbook with macros

Hi, is there any way to save a workbook in Databricks without losing its macros?

Labels: Data Engineering, Databricks, pyspark
Latest Reply: Kaniz (Community Manager)

Hi @Lightyagami, when working with Databricks and dealing with macros, there are a few approaches you can consider to save a workbook without losing the macros. Export to Excel with Macros Enabled: you can generate an Excel file directly from PyS...

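A minimal sketch of the macro-preserving round trip, assuming the workbook is a macro-enabled .xlsm reachable from the driver (the /dbfs paths are hypothetical): openpyxl keeps the VBA project when the file is opened with keep_vba=True.

    from openpyxl import load_workbook

    # keep_vba=True carries the VBA project through to the save; the output
    # must keep the .xlsm extension or Excel will refuse the macros.
    wb = load_workbook("/dbfs/mnt/files/report.xlsm", keep_vba=True)
    wb.active["A1"] = "updated from Databricks"
    wb.save("/dbfs/mnt/files/report_updated.xlsm")
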
philipkd (New Contributor III)
  • 492 Views
  • 1 reply
  • 0 kudos

Cannot get past Query Data tutorial for Azure Databricks

I created a new workspace on Azure Databricks, and I can't get past this first step in the tutorial: DROP TABLE IF EXISTS diamonds; CREATE TABLE diamonds USING CSV OPTIONS (path "/databricks-datasets/Rdatasets/data-001/csv/ggplot2/diamonds.csv", hea...

Latest Reply: Kaniz (Community Manager)

Hi @philipkd, it appears you’ve encountered an issue while creating a table in Azure Databricks using Unity Catalog. Let’s address this step by step. URI Format: the error message indicates that the URI for your CSV file is missing a cloud f...

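A minimal sketch of one workaround while the external-table route is blocked: read the sample CSV with Spark directly and persist it as a managed Unity Catalog table (the catalog and schema names are hypothetical):

    # Read the sample dataset directly, then save it as a managed UC table
    # rather than an external table over a non-cloud URI.
    df = (spark.read
        .option("header", "true")
        .option("inferSchema", "true")
        .csv("dbfs:/databricks-datasets/Rdatasets/data-001/csv/ggplot2/diamonds.csv"))

    df.write.saveAsTable("main.default.diamonds")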