Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.
Data + AI Summit 2024 - Data Engineering & Streaming

Forum Posts

vijaykumarbotla
by New Contributor III
  • 3872 Views
  • 5 replies
  • 1 kudos

Resolved! Getting error: AnalysisException: Column Is There a PO#17748 are ambiguous. It's probably because you joined several Datasets together, and some of these Datasets are the same. This column points to one of the Datasets but Spark...

AnalysisException: Column Is There a PO#17748 are ambiguous. It's probably because you joined several Datasets together, and some of these Datasets are the same. This column points to one of the Datasets but Spark is unable to figure out which one. ...

Latest Reply
vijaykumarbotla
New Contributor III
  • 1 kudos

Hi All, the solution for this problem is very strange: it was caused by the version of the Databricks runtime. We are using Runtime version 7.0 with Apache Spark 3.0.0. In PRD we are using Runtime version 11.3 LTS with Apache Spark 3.3.0 ver...

4 More Replies
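For readers hitting the same AnalysisException, the usual workaround is to alias the two sides of the join (or rename the clashing column on one side) so Spark can tell which dataset each reference belongs to. A minimal sketch, using made-up data and the column name from the error above:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Made-up example: a frame and a derivative of it share the column "Is There a PO".
orders = spark.createDataFrame([(1, "Yes"), (2, "No")], ["order_id", "Is There a PO"])
enriched = orders.withColumn("amount", F.lit(100))

# Aliasing both sides lets Spark resolve which "Is There a PO" each reference means;
# alternatively, withColumnRenamed on one side before the join avoids the clash entirely.
joined = (
    orders.alias("o")
    .join(enriched.alias("e"), F.col("o.order_id") == F.col("e.order_id"))
    .select(F.col("o.order_id"), F.col("o.`Is There a PO`"), F.col("e.amount"))
)
joined.show()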
darioAnt
by New Contributor II
  • 1606 Views
  • 1 reply
  • 2 kudos

Filtering delta table by CONCAT of a partition column and a non-partition one

Hi, I know that filtering a Delta table on a partition column is a very powerful time-saving approach, but what if that column appears inside a CONCAT in the where clause? Let me explain my case: I have a Delta table with only one partition column, say called co...

Latest Reply
darioAnt
New Contributor II
  • 2 kudos

I ran a test myself and the answer is no: with a CONCAT filter, Spark SQL does not know I am filtering on a partition column, so it scans the whole table (see the sketch after this post).

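The sketch referenced in the reply above: a made-up Delta table with a single partition column, showing that a predicate wrapped in CONCAT defeats partition pruning while a predicate on the raw partition column does not. The table path and column names are hypothetical.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical Delta table with a single partition column.
df = spark.range(1000).withColumn("part_col", (F.col("id") % 10).cast("string"))
df.write.format("delta").mode("overwrite").partitionBy("part_col").save("/tmp/pruning_demo")
tbl = spark.read.format("delta").load("/tmp/pruning_demo")

# Wrapping the partition column in CONCAT hides it from the optimizer: full table scan.
full_scan = tbl.filter(F.concat(F.col("part_col"), F.lit("-x")) == "3-x")

# Filtering on the raw partition column lets Delta prune down to a single partition.
pruned = tbl.filter(F.col("part_col") == "3")

full_scan.explain()  # compare the PartitionFilters entries in the two plans
pruned.explain()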
Altay
by New Contributor II
  • 570 Views
  • 0 replies
  • 0 kudos

Delta merge drops cached variables

Hi Everyone, I have an ingestion script where I use Delta merge to update and append newly incoming data, in DataFrame format, to an existing Delta table. I am experiencing an issue where all the variables that have been used previously lose their d...

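No replies yet in this thread. For context, a typical Delta merge in this kind of ingestion script looks like the sketch below (table name and data are made up). Because the merge commits a new table version, any DataFrame read or cached from the target beforehand points at the pre-merge snapshot and is usually safest to re-read (and re-cache) afterwards.

from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Made-up incoming batch and target table name.
updates = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
target = DeltaTable.forName(spark, "main.default.events")

(
    target.alias("t")
    .merge(updates.alias("s"), "t.id = s.id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)

# DataFrames read (or cached) from the target before the merge refer to an older snapshot;
# re-reading after the merge is the safest way to keep working with the current data.
refreshed = spark.read.table("main.default.events").cache()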
konda1
by New Contributor
  • 833 Views
  • 0 replies
  • 0 kudos

Getting an "Executor lost" stage-failure error when writing a DataFrame to a Delta table or to any file format such as Parquet, CSV, or Avro

We are working with multiline, nested (multilevel) files. The file is read and flattened using PySpark, and the DataFrame shows data via the display() method. When saving the same DataFrame, it gives an executor-lost failure error. For some files it is givi...

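No replies yet. As a general-purpose sketch only (not a confirmed fix), the pattern below flattens nested struct columns with PySpark and writes with more, smaller partitions, which sometimes eases the executor memory pressure behind "executor lost" failures. The input/output paths and JSON format are assumptions.

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType

spark = SparkSession.builder.getOrCreate()

def flatten(df):
    # Flatten nested struct columns into top-level columns, prefixing with the parent name.
    while True:
        struct_cols = [f.name for f in df.schema.fields if isinstance(f.dataType, StructType)]
        if not struct_cols:
            return df
        flat_cols = [F.col(c) for c in df.columns if c not in struct_cols]
        for name in struct_cols:
            for child in df.schema[name].dataType.fields:
                flat_cols.append(F.col(f"{name}.{child.name}").alias(f"{name}_{child.name}"))
        df = df.select(flat_cols)

# Hypothetical multiline nested input and Delta output paths.
raw = spark.read.option("multiLine", True).json("/mnt/raw/nested/")
flat = flatten(raw)

# Writing with more, smaller partitions can ease per-executor memory pressure when rows are wide.
flat.repartition(64).write.format("delta").mode("overwrite").save("/mnt/curated/flat_table")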
martindlarsson
by New Contributor III
  • 841 Views
  • 0 replies
  • 0 kudos

Autoloader and deletion vectors (Predictive IO)

We are looking into enabling Predictive I/O on our Delta tables. In the ingest process we are using Auto Loader, and I am wondering whether Auto Loader will get a flag to enable deletion vectors at table creation. Deletion vectors are a requirement for Predic...

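No replies yet. As far as I know there is no Auto Loader flag for this; deletion vectors are a property of the target Delta table, so one hedged approach is to set delta.enableDeletionVectors on the table that Auto Loader writes into. The table name, paths, and file format below are placeholders.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Enable deletion vectors on the (pre-created or existing) target table.
spark.sql(
    "ALTER TABLE main.default.ingest_target "
    "SET TBLPROPERTIES ('delta.enableDeletionVectors' = 'true')"
)

# Standard Auto Loader stream into that table; the table property is independent of the loader.
(
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/schemas/ingest_target")
    .load("/mnt/landing/ingest/")
    .writeStream.option("checkpointLocation", "/mnt/checkpoints/ingest_target")
    .toTable("main.default.ingest_target")
)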
eyalo
by New Contributor II
  • 1008 Views
  • 0 replies
  • 0 kudos

Ingest from FTP server doesn't work

Hi, I am trying to connect to my FTP server and store the files in a DataFrame with the following code: %pip install ftputil; from ftputil import FTPHost; Host = "92.118.67.49"; Login = "StrideNBM-DF_BO"; Passwd = "Sdf123456"; ftp_dir = "/dwh-reports/"; with FTPHost(...

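No replies yet. One pattern that generally works on Databricks is to stage the FTP files on local/DBFS storage with ftputil first and then read them with Spark. A rough sketch, assuming CSV reports and placeholder credentials (secrets should come from a secret scope rather than literals):

# %pip install ftputil
import os
from ftputil import FTPHost
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

ftp_dir = "/dwh-reports/"
local_dir = "/tmp/ftp_downloads"  # driver-local staging directory
os.makedirs(local_dir, exist_ok=True)

# Connection values are placeholders; prefer dbutils.secrets.get(...) on Databricks.
with FTPHost("<ftp-host>", "<login>", "<password>") as host:
    for name in host.listdir(ftp_dir):
        remote_path = host.path.join(ftp_dir, name)
        if host.path.isfile(remote_path):
            host.download(remote_path, os.path.join(local_dir, name))

# Read the staged files into a DataFrame (assuming CSV; adjust the format as needed).
df = spark.read.option("header", True).csv(f"file:{local_dir}")
df.show()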
ros
by New Contributor III
  • 2258 Views
  • 2 replies
  • 3 kudos

Apache Hudi Table creation using hudi maven library

I installed the Hudi Maven library org.apache.hudi:hudi-spark3.3-bundle_2.12:0.13.0 on Databricks Runtime 12.2 LTS (includes Apache Spark 3.3.2, Scala 2.12) with the Spark config: spark.sql.catalog.spark_catalog org.apache.spark.sql.hudi.catalog.HoodieCat...

Latest Reply
ros
New Contributor III
  • 3 kudos

@Shanmugavel Chandrakasu​ %sql create table hudi_cow_pt_tbl ( id bigint, name string, ts bigint, dt string, hh string ) using hudi tblproperties ( type = 'cow', primaryKey = 'id', preCombineField = 'ts' ) partitioned by (dt, hh) location '/mnt/data/h...

1 More Replies
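The SQL from the reply above, lightly reformatted so it can be run from a Python cell; the storage location is truncated in the preview, so a placeholder path is used here.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Recreates the CREATE TABLE from the reply above; the LOCATION value is a placeholder.
spark.sql("""
    CREATE TABLE IF NOT EXISTS hudi_cow_pt_tbl (
        id   BIGINT,
        name STRING,
        ts   BIGINT,
        dt   STRING,
        hh   STRING
    )
    USING hudi
    TBLPROPERTIES (
        type = 'cow',
        primaryKey = 'id',
        preCombineField = 'ts'
    )
    PARTITIONED BY (dt, hh)
    LOCATION '/mnt/data/<hudi-table-path>'
""")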
Anonymous
by Not applicable
  • 739 Views
  • 0 replies
  • 2 kudos

Hello Everyone, I am thrilled to announce that we have our 6th winner for the raffle contest - @Bolanle Adesanya. Please join me in congratulating h...

Hello Everyone, I am thrilled to announce that we have our 6th winner for the raffle contest - @Bolanle Adesanya. Please join me in congratulating her on this remarkable achievement! Your dedication and hard work have paid off, and we are delighted t...

PawelK
by New Contributor II
  • 3699 Views
  • 4 replies
  • 1 kudos

Is it possible to create "Notification destinations"/"Alert destinations" through API or Pulumi/Terraform?

Hello, I'm looking for a way to define notification destinations using the API or the Pulumi/Terraform providers, but I cannot find it anywhere. Could you please help and advise whether I'm missing something or it's simply not available at the moment? And if it's no...

Latest Reply
JordanYaker
Contributor
  • 1 kudos

This issue seems to point to the lack of a public API as the reason there is no corresponding Terraform resource.

3 More Replies
JordanYaker
by Contributor
  • 1127 Views
  • 0 replies
  • 0 kudos

Integration options for Databricks Jobs and DataDog?

I know that there is already a Databricks (technically Spark) integration for DataDog. Unfortunately, that integration only covers cluster execution itself, which means only cluster metrics and Spark jobs and tasks. I'm looking for somethin...

Edwin
by New Contributor II
  • 784 Views
  • 0 replies
  • 1 kudos

Unable to load data from Redshift

I've been trying to connect to Redshift following Databricks' documentation, and I validated that I'm using runtime version 11.3 on my cluster and that I have read/write privileges on the tempdir bucket. But I'm unable to load data from Redshift to a S...

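No replies yet. For comparison, the documented read pattern for the built-in Redshift connector on DBR 11.3 looks roughly like the sketch below; every connection value here is a placeholder. If the shape matches, the failure is often down to tempdir bucket permissions or how S3 credentials are forwarded, but the actual error text would be needed to confirm.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# All connection values are placeholders.
df = (
    spark.read.format("redshift")
    .option("url", "jdbc:redshift://<cluster>.redshift.amazonaws.com:5439/<db>")
    .option("dbtable", "public.my_table")
    .option("tempdir", "s3a://<bucket>/redshift-temp/")
    .option("user", "<user>")
    .option("password", "<password>")
    .option("forward_spark_s3_credentials", "true")
    .load()
)
df.show()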
AEM
by New Contributor
  • 992 Views
  • 0 replies
  • 0 kudos

How to set charset encoding in SQL view?

Hi! I have a SQL query with a where clause that checks that a string attribute is not equal to e.g. 'シミュレータに接続されていません' (Japanese for "not connected to the simulator"). This works fine when running the query ad hoc in the SQL Editor, but when creating a view with the same query, the special cha...


Connect with Databricks Users in Your Area

Join a Regional User Group to connect with local Databricks users. Events will be happening in your city, and you won’t want to miss the chance to attend and share knowledge.

If there isn’t a group near you, start one and help create a community that brings people together.

Request a New Group