Data Engineering

Forum Posts

gabrieleladd
by Visitor
  • 3 Views
  • 0 replies
  • 0 kudos

Clearing data stored by pipelines

Hi everyone! I'm new to Databricks and taking my first steps with Delta Live Tables, so please forgive my inexperience. I'm building my first DLT pipeline and there's something I can't really grasp: how to clear all the objects generated or upda...

Data Engineering
Data Pipelines
Delta Live Tables
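No replies yet, but as a hedged pointer: DLT publishes its tables to the pipeline's target catalog and schema, so one blunt way to clear everything the pipeline created is to drop that schema (names below are placeholders); deleting the pipeline itself also removes the tables it manages:

    # Caution: drops every table in the pipeline's target schema.
    # "my_catalog.dlt_target" is a placeholder for the pipeline's target.
    spark.sql("DROP SCHEMA IF EXISTS my_catalog.dlt_target CASCADE")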
jorperort
by New Contributor
  • 59 Views
  • 1 reply
  • 0 kudos

[Databricks Assets Bundles] no deployment state

Good morning, I'm trying to run: databricks bundle run --debug -t dev integration_tests_job. My bundle looks like:

    bundle:
      name: x
    include:
      - ./resources/*.yml
    targets:
      dev:
        mode: development
        default: true
        workspace:
          host: x
          r...

Data Engineering
Databricks Assets Bundles
Deployment Error
pid=265687
Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @jorperort, the error message you're seeing, "no deployment state. Did you forget to run 'databricks bundle deploy'?", indicates that the deployment state is missing. Here are some steps you can take to resolve this issue: Verify Deploym...
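A minimal sketch of the fix the reply points at, assuming the Databricks CLI v2 is installed and authenticated for the "dev" target: deploy the bundle first so the deployment state exists, then run the job (shown via subprocess purely to make the command order explicit):

    # Deploy first to create the deployment state, then run the job.
    import subprocess

    subprocess.run(["databricks", "bundle", "deploy", "-t", "dev"], check=True)
    subprocess.run(["databricks", "bundle", "run", "-t", "dev", "integration_tests_job"], check=True)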

htu
by Visitor
  • 49 Views
  • 2 replies
  • 0 kudos

Installing Databricks Connect breaks pyspark local cluster mode

Hi, it seems that when databricks-connect is installed, pyspark is modified at the same time so that it no longer works with a local master node. Local mode has been especially useful in testing, for unit tests of spark-related code without any remot...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @htu, when you install Databricks Connect, it modifies the behaviour of PySpark in a way that prevents it from working with the local master node. This can be frustrating, especially when you're trying to run unit tests for Spark-related code w...
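A minimal sketch of the local-mode session the poster wants for unit tests, assuming plain pyspark; since databricks-connect replaces the pyspark client, a common workaround is to keep the two in separate virtual environments:

    # Local-mode SparkSession for unit tests (requires plain pyspark,
    # installed in an environment without databricks-connect).
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .master("local[*]")
        .appName("unit-tests")
        .getOrCreate()
    )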

1 More Reply
Fnazar
by New Contributor
  • 44 Views
  • 1 reply
  • 0 kudos

Billing of Databricks Job clusters

Hi All, please help me understand how billing is calculated for the job cluster. The documentation says they are charged on an hourly basis, so if my job ran for 1 hr 30 mins, will the 30 mins be charged based on the hourly rate, or will it be charged f...

Latest Reply
PL_db
New Contributor III
  • 0 kudos

Job clusters consume DBUs per hour depending on the VM size. Databricks billing happens at "per second granularity", see here. That means if you run your job for 1.5 hours, you will be charged DBUs/hour * 1.5 * SKU_price; accordingly, if you run your...
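A worked example of the per-second billing arithmetic in the reply, with illustrative numbers (the DBU rate and SKU price below are assumptions, not real rates):

    # Cost = DBU rate * runtime in hours * price per DBU (per-second granularity,
    # so 1.5 hours is billed as exactly 1.5, not rounded up to 2 hours).
    dbu_per_hour = 4.0    # assumed DBU consumption rate of the job cluster
    sku_price = 0.15      # assumed $ per DBU for the jobs-compute SKU
    runtime_hours = 1.5   # 1 hour 30 minutes
    print(f"${dbu_per_hour * runtime_hours * sku_price:.2f}")  # $0.90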

mamiya
by Visitor
  • 43 Views
  • 1 reply
  • 0 kudos

ODBC PowerBI 2 commands in one query

Hello everyone, I'm trying to use the ODBC DirectQuery option in Power BI, but I keep getting an error about another command. The SQL query works when run in the SQL Editor. Do I need to change the setup of my ODBC connector? DECLARE dateFrom DATE = DA...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @mamiya, here are a few steps you can take to address the error: Check Power Query Editor steps: the error might be related to a specific step in the Power Query Editor. Try opening the Power Query Editor and reviewing the steps. If there's a...
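A hedged sketch of the usual workaround: DirectQuery over ODBC expects a single statement, so fold the DECLARE into one SELECT (table and column names below are hypothetical; in Power BI you would paste just the SQL text):

    # Single-statement rewrite: the date variable becomes a CTE.
    query = """
        WITH params AS (SELECT date_add(current_date(), -30) AS dateFrom)
        SELECT t.*
        FROM my_table t
        WHERE t.event_date >= (SELECT dateFrom FROM params)
    """
    df = spark.sql(query)  # sanity-check in a notebook; Power BI gets only the SQL text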

stevenayers-bge
by New Contributor II
  • 94 Views
  • 3 replies
  • 2 kudos

Bug: Shallow Clone `create or replace` causing [TABLE_OR_VIEW_NOT_FOUND]

I am having an issue where, when I do a shallow clone using:

    create or replace table `catalog_a_test`.`schema_a`.`table_a` shallow clone `catalog_a`.`schema_a`.`table_a`

I get: [TABLE_OR_VIEW_NOT_FOUND] The table or view catalog_a_test.schema_a.table_a...

Latest Reply
Omar_hamdan
Community Manager
  • 2 kudos

Hi Steven, this is really a strange issue. First, let's exclude some possible causes. We need to check the following: the permissions on table A and catalog B; take a look at the following link to check what permission is needed: https://docs.d...
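A hedged sketch of the checks implied above, assuming the cause is a missing target schema or permissions rather than the clone syntax itself (names mirror the post):

    # Ensure the target schema exists before the clone; permissions on both
    # catalogs (USE CATALOG/SCHEMA, CREATE TABLE, SELECT on source) are assumed.
    spark.sql("CREATE SCHEMA IF NOT EXISTS `catalog_a_test`.`schema_a`")
    spark.sql("""
        CREATE OR REPLACE TABLE `catalog_a_test`.`schema_a`.`table_a`
        SHALLOW CLONE `catalog_a`.`schema_a`.`table_a`
    """)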

2 More Replies
radothede
by Visitor
  • 42 Views
  • 1 reply
  • 0 kudos

Can on-demand clusters be shared across multiple jobs using a cluster pool with max capacity?

I have a cluster pool with max capacity, and I run multiple jobs against that cluster pool. Can on-demand clusters created within this cluster pool be shared across multiple different jobs at the same time? The reason I'm asking is that I can see a downgrade...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @radothede, cluster pools and on-demand clusters: in Azure Databricks, a cluster pool is a set of idle, ready-to-use instances that can be shared among multiple users or jobs. Instead of giving each user their own dedicated cluster, you...
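A hedged sketch of how jobs attach to a pool: each job's cluster draws instances from the pool, so concurrent jobs share the pool's idle instances, not a running cluster (the pool id is a placeholder):

    # Job cluster spec drawing from a pool (Jobs API style, placeholder ids).
    new_cluster = {
        "instance_pool_id": "pool-0123456789abcdef",
        "spark_version": "14.3.x-scala2.12",
        "num_workers": 4,
    }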

Erik_L
by Contributor II
  • 58 Views
  • 1 reply
  • 0 kudos

BUG: Unity Catalog kills UDF

We have UDFs in a few locations, and today we noticed a sharp drop in their performance. This seems to be caused by Unity Catalog.

    Test environment 1:
    Databricks Runtime Environment: 14.3 / 15.1
    Compute: 1 master, 4 nodes
    Policy: Unrestricted
    Access Mode: Shared
    Tes...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Erik_L, it appears that you're experiencing performance issues related to Unity Catalog in your Databricks environment. Let's explore some potential reasons and solutions: Mismanagement of metastores: Unity Catalog, with one metastore per re...

Kayl669
by New Contributor III
  • 154 Views
  • 5 replies
  • 0 kudos

SQL code against tables with '>' in headers suddenly failing?

Just want to post this issue we're experiencing here in case other people are facing something similar. Below is the wording of the support ticket I've raised: SQL code that has been working is suddenly failing due to syntax errors today. Ther...

Latest Reply
Kayl669
New Contributor III
  • 0 kudos

The point we've got to with this is that MS Support / Databricks have acknowledged that they did something and are working on a fix: "The issue occurred due to a regression in the recent DBR maintenance release... Our engineering team is workin...
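A hedged interim workaround while the regression fix rolls out: backtick-quote any identifier containing '>' (the table and column names below are hypothetical):

    # Backticks let Spark SQL parse column names with special characters.
    df = spark.sql("SELECT `forecast>actual` FROM my_schema.my_table")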

4 More Replies
erigaud
by Honored Contributor
  • 78 Views
  • 2 replies
  • 1 kudos

Pass Dataframe to child job in "Run Job" task

Hello, I have a Job A that runs a Job B; Job A defines a globalTempView and I would like to somehow access it in the child job. Is that in any way possible? Can the same cluster be used for both jobs? If it is not possible, does someone know of a...

Latest Reply
erigaud
Honored Contributor
  • 1 kudos

Hello @Kaniz, thank you for the very detailed answer. If I understand correctly, there is no way to do this using temp views and a job cluster? In that case I need to use the same all-purpose cluster for all my tasks in order to remain in the same spar...
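A minimal sketch of the usual cross-job handoff, assuming both jobs can reach the same catalog: persist the DataFrame to a table in the parent job and read it back in the child, since temp views die with the SparkSession (the table name is a placeholder):

    # Parent job: persist the handoff data to a real table.
    df.write.mode("overwrite").saveAsTable("main.tmp.job_a_handoff")

    # Child job: pick it up again.
    df = spark.read.table("main.tmp.job_a_handoff")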

1 More Reply
Mathias_Peters
by New Contributor II
  • 88 Views
  • 1 reply
  • 0 kudos

On the fly transformations on DLT tables

Hi, I am loading data from a Kinesis data stream using DLT:

    CREATE STREAMING TABLE Consumers_kinesis_2 (
      ...,
      unbase64(data) String,
      ...
    ) AS SELECT * FROM STREAM read_kinesis (...)

Is it possible to directly cast, unbase64, and/or transform the resu...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Mathias_Peters, when working with Kinesis data in DLT, you can indeed transform data before writing it into a streaming table. Let's explore some options: Unbase64 transformation: to decode Base64-encoded data, you can use the unba...
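A hedged sketch of the same idea in the DLT Python API, decoding in the transformation rather than the column list (stream options are elided, as in the post):

    import dlt
    from pyspark.sql.functions import col, unbase64

    @dlt.table(name="Consumers_kinesis_2")
    def consumers_kinesis_2():
        return (
            spark.readStream.format("kinesis")
            # .option(...)  # stream options elided, as in the post
            .load()
            .withColumn("data_decoded", unbase64(col("data")).cast("string"))
        )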

stevenayers-bge
by New Contributor II
  • 70 Views
  • 1 reply
  • 0 kudos

Autoloader: Read old version of file. Read modification time is X, latest modification time is X

I'm receiving this error from Autoloader. It seems to be stuck on this one file. I don't care when it was read or last modified, I just want to ingest it. Any ideas? java.io.IOException: Read old version of file s3a://<file-path>.json. Read modificat...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @stevenayers-bge, the error message indicates that the file you're trying to read is an old version, and there's a discrepancy between the read modification time and the latest modification time. Let's explore some potential solutions based on...
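A hedged sketch of one commonly suggested mitigation, assuming the file was overwritten in S3 after Auto Loader discovered it: cloudFiles.allowOverwrites lets the stream re-read the newest version (the path is a placeholder):

    df = (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.allowOverwrites", "true")  # re-process files changed in place
        .load("s3a://<bucket>/<prefix>/")
    )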

jainshasha
by New Contributor
  • 54 Views
  • 1 reply
  • 0 kudos

Job Cluster in Databricks workflow

Hi, I have configured 20 different workflows in Databricks, each configured with a job cluster with a different name. All 20 workflows are scheduled to run at the same time, but even with a different job cluster configured in each of them, they run sequentially w...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @jainshasha, running multiple workflows in parallel with their own job clusters in Databricks can be achieved with the right configuration. Let's explore some options: Shared job clusters: to optimize resource usage with jobs that orch...
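A hedged sketch of the shared-job-cluster idea from the reply, in Jobs API 2.1 form: tasks within one workflow reuse a single cluster via job_cluster_key (names and paths are placeholders). Note this shares a cluster across tasks of one job, not across 20 separate jobs:

    job_settings = {
        "name": "shared-cluster-workflow",
        "job_clusters": [{
            "job_cluster_key": "shared",
            "new_cluster": {"spark_version": "14.3.x-scala2.12", "num_workers": 2},
        }],
        "tasks": [
            {"task_key": "t1", "job_cluster_key": "shared",
             "notebook_task": {"notebook_path": "/Workspace/etl/t1"}},
            {"task_key": "t2", "job_cluster_key": "shared",
             "notebook_task": {"notebook_path": "/Workspace/etl/t2"}},
        ],
    }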

LeoGaller
by New Contributor
  • 89 Views
  • 1 reply
  • 0 kudos

What are the options for "spark_conf.spark.databricks.cluster.profile"?

Hey guys, I'm trying to find out what options we can pass to spark_conf.spark.databricks.cluster.profile. I know from looking around that some of the available values are singleNode and serverless, but are there others? Where is the documentation for it?...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @LeoGaller, the spark_conf.spark.databricks.cluster.profile configuration in Databricks allows you to specify the profile for a cluster. Let's explore the available options and where you can find the documentation. Available profiles: Sing...
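A hedged sketch of the best-documented value, singleNode, which is used together with a local master and a resource tag (serverless is the other value named in the thread; any further values are not documented as far as this reply goes):

    # Single-node cluster spec (documented combination of settings).
    cluster_spec = {
        "spark_conf": {
            "spark.databricks.cluster.profile": "singleNode",
            "spark.master": "local[*]",
        },
        "custom_tags": {"ResourceClass": "SingleNode"},
        "num_workers": 0,
    }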

Red1
by New Contributor III
  • 920 Views
  • 8 replies
  • 2 kudos

Autoingest not working with Unity Catalog in DLT pipeline

Hey Everyone,I've built a very simple pipeline with a single DLT using auto ingest, and it works, provided I don't specify the output location. When I build the same pipeline but set UC as the output location, it fails when setting up S3 notification...

Latest Reply
Red1
New Contributor III
  • 2 kudos

Hey @Babu_Krishnan, I was! I had to reach out to my Databricks support engineer directly, and the resolution was to add "cloudfiles.awsAccessKey" and "cloudfiles.awsSecretKey" to the params as in the screenshot below (apologies, I don't know why the sc...
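A hedged reconstruction of that workaround, since the screenshot didn't survive: pass the AWS keys as Auto Loader options so the pipeline can create the S3 notifications (the secret scope and key names are assumptions):

    df = (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.useNotifications", "true")
        .option("cloudFiles.awsAccessKey", dbutils.secrets.get("aws", "access_key"))
        .option("cloudFiles.awsSecretKey", dbutils.secrets.get("aws", "secret_key"))
        .load("s3://<bucket>/<prefix>")
    )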

7 More Replies