Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Kayl669
by New Contributor III
  • 3681 Views
  • 5 replies
  • 0 kudos

SQL code against tables with '>' in headers suddenly failing?

Just want to post this issue we're experiencing here in case other people are facing something similar. Below is the wording of the support ticket request I've raised: SQL code that has been working is suddenly failing due to syntax errors today. Ther...

Latest Reply
Kayl669
New Contributor III
  • 0 kudos

The point we've reached with this is that MS Support / Databricks have acknowledged that they changed something and are working on a fix. "The issue occurred due to the regression in the recent DBR maintenance release...Our engineering team is workin...

4 More Replies
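While the fix is pending, a common workaround for special characters in column names is backtick quoting, so a minimal sketch is shown below; this assumes the failures stem from the '>' character, and the table and column names are hypothetical:

```python
# Hypothetical illustration: backtick-quote identifiers containing '>' so the
# SQL parser treats them as column names rather than comparison operators.
df = spark.sql("""
    SELECT `margin>30pct`, region
    FROM my_catalog.my_schema.sales
    WHERE `margin>30pct` IS NOT NULL
""")
display(df)
```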
Red1
by New Contributor III
  • 4685 Views
  • 6 replies
  • 2 kudos

Autoingest not working with Unity Catalog in DLT pipeline

Hey Everyone, I've built a very simple pipeline with a single DLT using auto ingest, and it works, provided I don't specify the output location. When I build the same pipeline but set UC as the output location, it fails when setting up S3 notification...

Latest Reply
Red1
New Contributor III
  • 2 kudos

Hey @Babu_Krishnan I was! I had to reach out to my Databricks support engineer directly and the resolution was to add "cloudfiles.awsAccessKey" and "cloudfiles.awsSecretKey" to the params as in the screenshot below (apologies, I don't know why the sc...

5 More Replies
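For readers hitting the same wall, here is a minimal sketch of what Red1's resolution could look like in a DLT definition. The bucket path, secret scope, and key names are assumptions; cloudFiles.awsAccessKey and cloudFiles.awsSecretKey are the documented Auto Loader credential options:

```python
import dlt

@dlt.table(name="raw_events")
def raw_events():
    # Pass explicit AWS credentials so Auto Loader can set up the S3
    # notification resources when Unity Catalog is the output location.
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.useNotifications", "true")
        .option("cloudFiles.awsAccessKey", dbutils.secrets.get("my_scope", "aws_access_key"))
        .option("cloudFiles.awsSecretKey", dbutils.secrets.get("my_scope", "aws_secret_key"))
        .load("s3://my-bucket/landing/")
    )
```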
Mado
by Valued Contributor II
  • 16894 Views
  • 4 replies
  • 3 kudos

Resolved! Using "Select Expr" and "Stack" to Unpivot PySpark DataFrame doesn't produce expected results

I am trying to unpivot a PySpark DataFrame, but I don't get the correct results. Sample dataset: # Prepare Data data = [("Spain", 101, 201, 301), ("Taiwan", 102, 202, 302), ("Italy", 103, 203, 303), ("China", 104, 204, 304...

Latest Reply
lukeoz
New Contributor III
  • 3 kudos

You can also use backticks around the column names that would otherwise be recognised as numbers. from pyspark.sql import functions as F unpivotExpr = "stack(3, '2018', `2018`, '2019', `2019`, '2020', `2020`) as (Year, CPI)" unPivotDF = df.select("C...

3 More Replies
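Putting the thread together, a self-contained sketch of the fix follows. The sample data comes from the question; the column names ("Country" plus the year columns) are inferred from the stack() expression:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import expr

spark = SparkSession.builder.getOrCreate()

data = [("Spain", 101, 201, 301),
        ("Taiwan", 102, 202, 302),
        ("Italy", 103, 203, 303),
        ("China", 104, 204, 304)]
df = spark.createDataFrame(data, ["Country", "2018", "2019", "2020"])

# Backticks stop the purely numeric column names from being parsed as
# integer literals inside the stack() expression.
unpivotExpr = "stack(3, '2018', `2018`, '2019', `2019`, '2020', `2020`) as (Year, CPI)"
unPivotDF = df.select("Country", expr(unpivotExpr))
unPivotDF.show()
```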
6502
by New Contributor III
  • 2555 Views
  • 1 reply
  • 0 kudos

Delete on streaming table and starting startingVersion

I deleted some records from a streaming table by mistake, and of course, the streaming job stopped working. So I restored the table to the version before the delete was done, and attempted to restart the job using startingVersion set to the new vers...

Latest Reply
raphaelblg
Databricks Employee
  • 0 kudos

Hello @6502, It appears you've used the `startingVersion` parameter in your streaming query, which causes the stream to begin processing data from the version prior to the DELETE operation version. However, the DELETE operation will still be processe...

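A minimal sketch of the restart raphaelblg describes, assuming the RESTORE produced version 42 and that a fresh checkpoint is acceptable; the table names, version, and checkpoint path are hypothetical:

```python
# Read from the restored version onwards; ignoreDeletes tolerates the
# accidental DELETE commit if the chosen version still replays it.
stream = (
    spark.readStream
    .option("startingVersion", "42")
    .option("ignoreDeletes", "true")
    .table("my_catalog.my_schema.events")
)

(stream.writeStream
    .option("checkpointLocation", "/Volumes/main/default/chk/events_v2")  # new checkpoint
    .toTable("my_catalog.my_schema.events_sink"))
```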
Erik_L
by Contributor II
  • 1153 Views
  • 0 replies
  • 0 kudos

BUG: Unity Catalog kills UDF

We have UDFs in a few locations, and today we noticed their performance collapsed. This seems to be caused by Unity Catalog. Test environment 1: Databricks Runtime: 14.3 / 15.1; Compute: 1 master, 4 nodes; Policy: Unrestricted; Access Mode: Shared. Tes...

nilton
by New Contributor II
  • 1641 Views
  • 2 replies
  • 0 kudos

Query table based on table_name from information_schema

Hi, I have one table whose name changes every 60 days. The name simply increments a version number, for example: first 60 days: table_name_v1; after 60 days: table_name_v2, and so on. What I want is to query the table whose name is returned by the que...

Latest Reply
radothede
Contributor III
  • 0 kudos

The simplest way would probably be using spark.sql: %py tbl_name = 'table_v1' df = spark.sql(f'select * from {tbl_name}') display(df) From there, you can simply create a temporary view: %py df.createOrReplaceTempView('table_act') and query it using SQL st...

1 More Replies
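Extending radothede's answer, a sketch that resolves the current name from information_schema first; the catalog and schema names and the table_name_v prefix are assumptions based on the post:

```python
# Find the newest versioned table, then query it through an f-string.
# Ordering by the created timestamp avoids lexicographic pitfalls
# (e.g. 'v10' sorting before 'v2').
latest = spark.sql("""
    SELECT table_name
    FROM my_catalog.information_schema.tables
    WHERE table_schema = 'my_schema'
      AND table_name LIKE 'table_name_v%'
    ORDER BY created DESC
    LIMIT 1
""").first()["table_name"]

df = spark.sql(f"SELECT * FROM my_catalog.my_schema.{latest}")
df.createOrReplaceTempView("table_act")  # stable alias for downstream SQL
```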
rt-slowth
by Contributor
  • 3500 Views
  • 5 replies
  • 2 kudos

AutoLoader File notification mode Configuration with AWS

from pyspark.sql import functions as F
from pyspark.sql import types as T
from pyspark.sql import DataFrame, Column
from pyspark.sql.types import Row
import dlt

S3_PATH = 's3://datalake-lab/XXXXX/'
S3_SCHEMA = 's3://datalake-lab/XXXXX/schemas/' ...

Latest Reply
djhs
New Contributor III
  • 2 kudos

Was this resolved? I've run into the same issue.

4 More Replies
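For anyone arriving here with the same question, a minimal file-notification sketch follows; the bucket and schema-location paths are hypothetical (the XXXXX segments from the post are deliberately not guessed):

```python
# File notification mode: instead of listing the bucket, Auto Loader consumes
# S3 event notifications (it provisions the SNS/SQS resources itself unless
# an existing queue is supplied via cloudFiles.queueUrl).
df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "parquet")
    .option("cloudFiles.useNotifications", "true")
    .option("cloudFiles.region", "us-east-1")
    .option("cloudFiles.schemaLocation", "s3://my-bucket/schemas/")
    .load("s3://my-bucket/landing/")
)
```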
185369
by New Contributor II
  • 3363 Views
  • 4 replies
  • 1 kudos

Resolved! DLT with UC Access Denied sqs

I am going to use the newly released DLT with UC. But it keeps getting access denied. As I track down the cause, it seems that an account ID other than my account ID or the Databricks account ID is being requested. I cannot use '*' in the principal attri...

Latest Reply
Priyag1
Honored Contributor II
  • 1 kudos

On AWS, the SQS queue and every other service in your stack that uses that queue will be configured with minimal permissions, which can lead to access issues. So, make sure you get your IAM policies set up correctly before deploying to producti...

3 More Replies
QuantumFries
by New Contributor II
  • 5671 Views
  • 4 replies
  • 3 kudos

Change {{job.start_time.[iso_date]}} Timezone

I am trying to schedule some jobs using workflows and leveraging dynamic variables. One caveat is that {{job.start_time.[iso_date]}} seems to default to UTC; is there a way to change it?

Latest Reply
artsheiko
Databricks Employee
  • 3 kudos

Hi, all the dynamic values are in UTC (see the documentation). Maybe you can use code like the example below and pass the variables between tasks (see Share information between tasks in a Databricks job)? %python from datetime import datetime, timed...

3 More Replies
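A sketch along the lines artsheiko suggests, assuming the task receives the full timestamp (e.g. via {{job.start_time.[iso_datetime]}}) in a job parameter named start_time; the parameter name and target timezone are assumptions:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+, available on recent DBR

# The dynamic value arrives as a naive UTC ISO string, e.g. "2024-05-01T07:30:00".
start_utc = datetime.fromisoformat(dbutils.widgets.get("start_time"))
start_local = start_utc.replace(tzinfo=ZoneInfo("UTC")).astimezone(ZoneInfo("America/New_York"))

print(start_local.date().isoformat())  # local-date equivalent of [iso_date]
```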
Abbe
by New Contributor II
  • 2806 Views
  • 2 replies
  • 0 kudos

Update data type of a column within a table that has a GENERATED ALWAYS AS IDENTITY-column

I want to cast the data type of a column "X" in a table "A" where column "ID" is defined as GENERATED ALWAYS AS IDENTITY. Databricks' docs point to an overwrite to achieve this: https://docs.databricks.com/delta/update-schema.html The following operation: (spar...

Latest Reply
RajuBolla
New Contributor II
  • 0 kudos

UPDATE is not working, but DELETE is, after I changed to the DEFAULT property: AnalysisException: UPDATE on IDENTITY column "XXXX_ID" is not supported.

1 More Replies
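For context, a hedged sketch of the documented overwrite approach the question refers to (the table and column names are from the post; the target type is an assumption). Note that the overwrite may not preserve the GENERATED ALWAYS AS IDENTITY definition, which would then have to be recreated afterwards:

```python
from pyspark.sql.functions import col

# Rewrite the table with column "X" cast to the new type.
df = spark.read.table("A").withColumn("X", col("X").cast("double"))

(df.write
   .mode("overwrite")
   .option("overwriteSchema", "true")  # required when the schema changes
   .saveAsTable("A"))
```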
vinayaka_pallak
by New Contributor
  • 1504 Views
  • 0 replies
  • 0 kudos

Pytest on Notebook

I am currently exploring testing methodologies for Databricks notebooks and would like to inquire whether it's possible to write pytest tests for notebooks that contain code not encapsulated within functions or classes. *********************** a = 4 b ...

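One common pattern (a sketch, not an official recipe): move the top-level logic into functions in a plain .py module that pytest can import, and keep the notebook as a thin driver. The file and function names below are made up:

```python
# notebook_logic.py -- importable by both the notebook and pytest
def add(a: int, b: int) -> int:
    return a + b

# test_notebook_logic.py
from notebook_logic import add

def test_add():
    assert add(4, 5) == 9
```

From a notebook cell, the suite can then be run programmatically, e.g. import pytest; pytest.main(["-q", "<path-to-tests>"]).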
Phani1
by Valued Contributor II
  • 5105 Views
  • 4 replies
  • 0 kudos

Parallel execution of SQL cell in Databricks Notebooks

Hi Team, please provide guidance on enabling parallel execution of SQL cells in a notebook containing multiple SQL cells. Currently, when we execute the notebook, all the SQL cells run sequentially. I would appreciate assistance on how to execute th...

Latest Reply
Ajay-Pandey
Esteemed Contributor III
  • 0 kudos

Hi @Phani1, yes, you can achieve this scenario with the help of Databricks Workflow jobs, where you can create tasks and set dependencies between them.

3 More Replies
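If a Workflow is too heavy, another frequently used approach is to fan independent statements out from a single Python cell, since notebook cells themselves always run sequentially. A sketch, with placeholder statements:

```python
from concurrent.futures import ThreadPoolExecutor

queries = [
    "INSERT INTO t1 SELECT * FROM staging_1",
    "INSERT INTO t2 SELECT * FROM staging_2",
    "INSERT INTO t3 SELECT * FROM staging_3",
]

# spark.sql calls are submitted concurrently; each statement must be
# independent of the others for this to be safe.
with ThreadPoolExecutor(max_workers=len(queries)) as pool:
    results = list(pool.map(spark.sql, queries))
```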
rt-slowth
by Contributor
  • 3111 Views
  • 5 replies
  • 0 kudos

why the userIdentity is anonymous?

Do you know why the userIdentity is anonymous in AWS CloudTrail's logs even though I have specified an instance profile?

Latest Reply
CharlesReily
New Contributor III
  • 0 kudos

If you're using AssumeRole to switch roles, make sure that the assumed role session is being used correctly. The Security Token Service (STS) is responsible for issuing temporary security credentials when assuming roles. Ensure that your EC2 instance...

4 More Replies
bamhn
by New Contributor II
  • 7580 Views
  • 3 replies
  • 2 kudos

My cluster can't access any tables in data catalogs

My goal is to have table access control in the data science and engineering workspace. So I enabled access control on my cluster using the config "spark.databricks.acl.dfAclsEnabled": "true", and my cluster is now shown as Table ACLs enabled (shield ...

Latest Reply
Karthik_Venu
New Contributor II
  • 2 kudos

Here is my use case: https://community.databricks.com/t5/data-engineering/structured-streaming-using-delta-as-source-and-delta-as-sink-and/td-p/67825And I get this error: "py4j.security.Py4JSecurityException: Method public org.apache.spark.sql.Datase...

2 More Replies
Karthik_Venu
by New Contributor II
  • 858 Views
  • 1 reply
  • 0 kudos

Structured Streaming using Delta as Source and Delta as Sink, with Delta tables under Unity Catalog

Hello Everyone, here is my use case.
1. My source table (bronze Delta table) is under Unity Catalog and is a transactional (insert/update) table.
2. My target table (silver Delta table) is also under Unity Catalog.
3. On a daily basis I need to ingest the in...

Latest Reply
Karthik_Venu
New Contributor II
  • 0 kudos

I came across this article: "readStream() is not whitelisted error when running a query" on Databricks. It states the solution as "You should use a cluster that does not have table access control enabled for streaming queries." However, the source and ta...

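For completeness, a minimal sketch of the Delta-to-Delta pattern under Unity Catalog (table names and the checkpoint path are hypothetical), run on a cluster without table access control per the KB article quoted above:

```python
# Bronze receives inserts and updates, so skip rewritten files (or switch to
# the change data feed) when streaming from it.
bronze = (
    spark.readStream
    .option("skipChangeCommits", "true")
    .table("main.bronze.transactions")
)

(bronze.writeStream
    .option("checkpointLocation", "/Volumes/main/silver/chk/transactions")
    .trigger(availableNow=True)  # daily batch-style incremental run
    .toTable("main.silver.transactions"))
```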
