Data Engineering

Forum Posts

lndlzy
by New Contributor II
  • 2351 Views
  • 3 replies
  • 0 kudos

Resolved! ADD_NODES_FAILED Cluster Does Not Start

Hello everyone, I tried to change a Databricks Runtime cluster from 12.2 LTS ML to 13.3 LTS ML; however, I got this error: Failed to add 1 container to the compute. Will attempt retry: false. Reason: Global init script failure. Global init script Instal...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @lndlzy, Based on the information, your error is related to a global init script failure when changing the Databricks Runtime cluster from 12.2 LTS ML to 13.3 LTS ML. This error indicates that the global init script failed with a non-zero exit ...

2 More Replies
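For reference, a minimal sketch of inspecting the workspace's global init scripts with the Databricks Python SDK, so the failing script can be identified and tested against the new runtime (assumes the databricks-sdk package and workspace authentication are available; not part of the original thread):

    from databricks.sdk import WorkspaceClient

    w = WorkspaceClient()  # picks up credentials from the environment or a config profile

    # List every global init script and whether it is enabled; the failing one can then
    # be disabled or updated before retrying the 13.3 LTS ML cluster.
    for s in w.global_init_scripts.list():
        print(s.script_id, s.name, s.enabled)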
TimReddick
by New Contributor III
  • 3928 Views
  • 7 replies
  • 2 kudos

Using run_job_task in Databricks Asset Bundles

Do Databricks Asset Bundles support run_job_task tasks? I've made various attempts to add a run_job_task with a specified job_id. See the code snippet below. I tried substituting the job_id using ${...} syntax, as well as three other ways which I've...

Data Engineering
Databricks Asset Bundles
run_job_task
Latest Reply
kyle_r
New Contributor II
  • 2 kudos

Ah, I see it is a known bug in the Databricks CLI: Asset bundle run_job_task fails · Issue #812 · databricks/cli (github.com). Anyone facing this issue should comment on and keep an eye on that ticket for resolution. 

6 More Replies
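While that CLI issue remains open, one possible interim workaround (an assumption, not something confirmed in the thread) is to trigger the downstream job from a regular notebook task via the Databricks SDK instead of a run_job_task; the job ID below is a placeholder:

    from databricks.sdk import WorkspaceClient

    w = WorkspaceClient()

    # Trigger the downstream job by ID and block until it finishes.
    run = w.jobs.run_now(job_id=123)  # 123 is a hypothetical job ID
    print(run.result().state)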
User16765131552
by Contributor III
  • 2507 Views
  • 3 replies
  • 0 kudos

Resolved! Pull Cluster Tags

Does anybody know any in-notebook or JAR code to pull cluster tags from the runtime environment? Something like... dbutils.notebook.entry_point.getDbutils().notebook().getContext().tags().apply('user') but for the cluster name?

Latest Reply
DatBoi
Contributor
  • 0 kudos

Did you find any documentation for spark.conf.get properties? I am trying to get some metadata about the environment my notebook is running in (specifically the cluster's custom tags), but cannot find any information besides a couple of forum posts.

2 More Replies
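For anyone landing here, a small sketch of the Spark-conf approach discussed above (the clusterUsageTags keys are the ones commonly used for this; treat the exact key names as something to verify on your runtime):

    import json

    # Cluster name from the runtime's usage tags.
    cluster_name = spark.conf.get("spark.databricks.clusterUsageTags.clusterName")

    # All tags (including custom tags) are exposed as a JSON string of key/value pairs.
    all_tags = json.loads(spark.conf.get("spark.databricks.clusterUsageTags.clusterAllTags"))
    custom_tags = {t["key"]: t["value"] for t in all_tags}

    print(cluster_name, custom_tags)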
arielmoraes
by New Contributor III
  • 1150 Views
  • 3 replies
  • 1 kudos

Resolved! Job Concurrency Queue not working as expected

I have a process that should run the same notebook with varying parameters, thus translating to a job with queueing and concurrency enabled. When the first executions are triggered, the job runs work as expected, i.e. if the job has a max concurrency se...

(Screenshots attached to the original post.)
Latest Reply
arielmoraes
New Contributor III
  • 1 kudos

Hi @Kaniz, we double-checked everything; the resources are enough and all settings are properly set. I'll reach out to support by filing a new ticket. Thank you for your help.

2 More Replies
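For context, queueing and concurrency are configured per job; a minimal sketch of the relevant Jobs API 2.1 settings as a Python dict (field names per the public Jobs API; the payload below is illustrative, not the poster's actual job):

    job_settings = {
        "name": "parameterized-notebook-job",   # hypothetical job name
        "max_concurrent_runs": 5,               # how many runs may execute at once
        "queue": {"enabled": True},             # runs beyond the limit wait in the queue
        "tasks": [
            {
                "task_key": "run_notebook",
                "notebook_task": {"notebook_path": "/Workspace/path/to/notebook"},
            }
        ],
    }
    # Send job_settings to /api/2.1/jobs/create (or jobs/reset for an existing job).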
b_1
by New Contributor II
  • 632 Views
  • 2 replies
  • 1 kudos

to_timestamp function in non-legacy mode does not parse this format: yyyyMMddHHmmssSS

I have this datetime string in my dataset: '2023061218154258' and I want to convert it to a timestamp using the code below. However, the format that I expect to work, namely yyyyMMddHHmmssSS, doesn't work. This code will reproduce the issue: from pyspark.sq...

Latest Reply
b_1
New Contributor II
  • 1 kudos

Is there anybody who has the same issue or knows that this is in fact an issue?

1 More Replies
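A small reproducible sketch of two common workarounds for this pattern (both are assumptions about what is acceptable for the data, not an official fix):

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame([("2023061218154258",)], ["dt_str"])

    # Option 1: parse only the first 14 characters (drops the 2 sub-second digits),
    # which the non-legacy parser accepts.
    df = df.withColumn("ts_seconds",
                       F.to_timestamp(F.substring("dt_str", 1, 14), "yyyyMMddHHmmss"))

    # Option 2: fall back to the legacy parser for the session; note that
    # SimpleDateFormat may interpret the trailing SS as milliseconds rather than
    # hundredths of a second.
    spark.conf.set("spark.sql.legacy.timeParserPolicy", "LEGACY")
    df = df.withColumn("ts_legacy", F.to_timestamp("dt_str", "yyyyMMddHHmmssSS"))

    df.show(truncate=False)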
orso
by New Contributor III
  • 3024 Views
  • 1 replies
  • 0 kudos

Resolved! Java - FAILED_WITH_ERROR when saving to snowflake

I'm trying to move data from database A to B on Snowflake. There's no permission issue, since using the Python package snowflake.connector works. Databricks Runtime version: 12.2 LTS (includes Apache Spark 3.3.2, Scala 2.12). Insert into database B fail...

Latest Reply
orso
New Contributor III
  • 0 kudos

Found the problem. The sub-roles didn't have grants to the warehouse. I hope it will help someone one day.

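For readers hitting the same error, a hedged sketch of the Spark Snowflake connector write path, with the warehouse grant from the resolution called out in a comment (connection values are placeholders):

    sf_options = {
        "sfUrl": "myaccount.snowflakecomputing.com",  # hypothetical account URL
        "sfUser": "etl_user",
        "sfPassword": dbutils.secrets.get("snowflake", "etl_password"),
        "sfDatabase": "DB_B",
        "sfSchema": "PUBLIC",
        "sfWarehouse": "LOAD_WH",   # the role below needs USAGE granted on this warehouse
        "sfRole": "ETL_SUB_ROLE",
    }

    (df.write.format("snowflake")
        .options(**sf_options)
        .option("dbtable", "TARGET_TABLE")
        .mode("append")
        .save())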
erigaud
by Honored Contributor
  • 2619 Views
  • 5 replies
  • 5 kudos

Resolved! DLT overwrite part of the table

Hello! We're currently building a file-ingestion pipeline using a Delta Live Tables pipeline and Auto Loader. The bronze tables have pretty much the following schema: file_name | file_upload_date | colA | colB (Well, there are actually 250+ columns...

Latest Reply
Tharun-Kumar
Honored Contributor II
  • 5 kudos

@erigaud  Using jobs/workflows would be the right choice for this.

4 More Replies
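If the partial overwrite ends up being done from a job/workflow outside DLT, as suggested above, Delta's replaceWhere option is the usual mechanism; a hedged sketch (table name and predicate are illustrative):

    # Overwrite only the rows for one upload date, leaving the rest of the table untouched.
    (df_new_batch.write.format("delta")
        .mode("overwrite")
        .option("replaceWhere", "file_upload_date = '2023-10-01'")  # hypothetical predicate
        .saveAsTable("bronze.events"))                              # hypothetical table name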
Gilg
by Contributor II
  • 1041 Views
  • 4 replies
  • 2 kudos

DLT: Autoloader Perf

Hi Team, I am looking for some advice on performance-tuning my bronze layer using DLT. I have the following code, very simple and yet very effective: @dlt.create_table(name="bronze_events", comment = "New raw data ingested from storage account ...

(Screenshot attached to the original post.)
Latest Reply
Tharun-Kumar
Honored Contributor II
  • 2 kudos

Hi @Gilg, You mentioned that the micro-batch time has recently been around 12 minutes. Do we also see jobs/stages taking 12 minutes in the Spark UI? If that is the case, then the processing of the file itself takes 12 minutes. If not, the 12 minutes is spent on ...

3 More Replies
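For comparison, a minimal Auto Loader DLT table with an explicit cap on files per micro-batch, which is one of the first knobs to check when batches run long (decorator and option names per the public DLT/Auto Loader docs; the path and file format are placeholders):

    import dlt

    @dlt.table(name="bronze_events",
               comment="New raw data ingested from storage account")
    def bronze_events():
        return (
            spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")            # assumption: JSON source files
            .option("cloudFiles.maxFilesPerTrigger", "500")  # limit files per micro-batch
            .load("abfss://landing@storageaccount.dfs.core.windows.net/events/")  # placeholder path
        )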
Kaviana
by New Contributor III
  • 1007 Views
  • 2 replies
  • 0 kudos

internal server error when creating workspace

I tried to create a workspace and it is not generated, either automatically or manually. The strange thing is that it stopped working after a certain time. It seems like an internal Databricks error, but it is not clear whether that is the case or whether it is a bug. Wha...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Kaviana , Thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers you...

1 More Replies
N_M
by New Contributor III
  • 910 Views
  • 2 replies
  • 0 kudos

Resolved! Unzip multipart files

Hi all, Due to file size and file-transfer limitations, we are receiving huge files compressed and split, in the format FILE.z01, FILE.z02, ..., FILE.zip. However, I can't find a way to unzip multipart files using Databricks. I have already tried some of the ...

Data Engineering
bash
unzip
Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @N_M , Thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers your qu...

1 More Replies
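One approach that has worked elsewhere (an assumption, not confirmed in this thread) is to re-join the split archive with the zip utility and then extract it; this requires the zip/unzip binaries on the cluster and the parts being visible as local files, e.g. under /dbfs:

    import subprocess

    src = "/dbfs/tmp/incoming"    # hypothetical folder holding FILE.z01 ... FILE.zip
    out = "/dbfs/tmp/extracted"   # hypothetical output folder

    # 1) Merge the split parts back into a single archive (-s 0 removes the splitting).
    subprocess.run(["zip", "-s", "0", f"{src}/FILE.zip", "--out", f"{src}/FILE_joined.zip"],
                   check=True)

    # 2) Extract the re-joined archive.
    subprocess.run(["unzip", "-o", f"{src}/FILE_joined.zip", "-d", out], check=True)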
phoebe_dt
by New Contributor
  • 1961 Views
  • 2 replies
  • 1 kudos

Access denied error to s3 bucket in Databricks notebook

When running a Databricks notebook connected to an S3 cluster, I randomly but frequently experience the following error: java.nio.file.AccessDeniedException: s3://mybucket: getFileStatus on s3://mybucket: com.amazonaws.services.s3.model.AmazonS3Except...

Data Engineering
access denied
AWS
databricks notebook
S3
Latest Reply
Kaniz
Community Manager
  • 1 kudos

Hi @phoebe_dt , Thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers y...

1 More Replies
Monika_Bagyal
by New Contributor
  • 2218 Views
  • 1 replies
  • 0 kudos

Access denied error while reading file from S3 to spark

I'm seeing the access denied error from the Spark cluster while reading an S3 file into a notebook. Running on personal single-user compute with 13.3 LTS ML. The config setup looks like this: spark.conf.set("spark.hadoop.fs.s3a.access.key", access_id) spark.conf.set...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Monika_Bagyal , The "Access Denied" error you are seeing is likely due to insufficient permissions to read the S3 bucket.    The configurations you've set up are correct for accessing S3 using temporary AWS credentials, but the credentials themse...

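As a reference for the temporary-credentials setup mentioned above, a hedged sketch using the Hadoop S3A properties (the bucket path is a placeholder, and access_id / secret_key / session_token are assumed to come from an STS AssumeRole call):

    # The session token and the temporary-credentials provider are required when the
    # access key/secret come from STS; a missing token is a common cause of 403 errors.
    spark.conf.set("spark.hadoop.fs.s3a.aws.credentials.provider",
                   "org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider")
    spark.conf.set("spark.hadoop.fs.s3a.access.key", access_id)
    spark.conf.set("spark.hadoop.fs.s3a.secret.key", secret_key)
    spark.conf.set("spark.hadoop.fs.s3a.session.token", session_token)

    df = spark.read.json("s3a://my-bucket/path/to/data/")  # hypothetical bucket/prefix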
Gilg
by Contributor II
  • 1072 Views
  • 3 replies
  • 1 kudos

APPLY_CHANGES late arriving data

Hi Team, I have a DLT pipeline that uses APPLY_CHANGES for our Silver tables. I am using Id as the key and a timestamp to determine the sequence of the incoming data. Question: how does APPLY_CHANGES handle late-arriving data? I.e., for silver_table_1, the data ...

Latest Reply
Kaniz
Community Manager
  • 1 kudos

Hi @Gilg , The APPLY_CHANGES function in Databricks Delta Live Tables handles late arriving data using a specified SEQUENCE BY column, which in your case is the timestamp. It uses this column to propagate appropriate sequencing values to the __START_...

2 More Replies
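For reference, a minimal apply_changes sketch matching the setup described above, with Id as the key and the timestamp as the sequencing column that reconciles late-arriving records (source/target names are placeholders; API names per the DLT Python docs):

    import dlt
    from pyspark.sql import functions as F

    dlt.create_streaming_table("silver_table_1")

    dlt.apply_changes(
        target="silver_table_1",
        source="bronze_changes",            # hypothetical source view/table
        keys=["Id"],
        sequence_by=F.col("timestamp"),     # out-of-order rows are ordered by this column
        stored_as_scd_type=1,
    )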
PradyumnJoshi
by New Contributor
  • 875 Views
  • 2 replies
  • 0 kudos

Resolved! Databricks Academy - Advanced Data Engineering - Notebook Error while loading configurations

Hi Databricks Academy team, I am getting the errors below while running the classroom setup command in the Databricks Academy Advanced Data Engineering course notebooks in Databricks Community Edition. Please help me resolve it. #databricksacademy #advanceddat...

(Screenshots attached to the original post.)
Latest Reply
User16847923431
Contributor II
  • 0 kudos

Hi, all. Our apologies - the Advanced Data Engineering with Databricks course will not run on Databricks Community Edition. If you would like a lab environment to run this course on, please see the new paid lab subscription available via the Databric...

1 More Replies
dng
by New Contributor III
  • 3605 Views
  • 8 replies
  • 11 kudos

Databricks JDBC Driver v2.6.29 Cloud Fetch failing for Windows Operating System

Hi everyone, I've been stuck for the past two days on this issue with my Databricks JDBC driver and I'm hoping someone can give me more insight into how to troubleshoot. I am using the Databricks JDBC driver in RStudio and the connection was working ...

Latest Reply
Prabakar
Esteemed Contributor III
  • 11 kudos

@Debbie Ng From your message I see there was a Windows update and this failure started. Based on the conversation, you tried the latest version of the driver and still face the problem. I believe this is something related to Java version compatib...

7 More Replies