Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

by PearceR (New Contributor III)
  • 11282 Views
  • 4 replies
  • 1 kudos

Resolved! Custom upsert for Delta Live Tables apply_changes()

Hello community :). I am currently implementing some pipelines using DLT. They are working great for my medallion architecture: landed JSON in bronze -> silver (using apply_changes), then materialized gold views on top. However, I am attempting to crea...

Latest Reply
Harsh141220 (New Contributor II)
  • 1 kudos

Is it possible to have custom upserts for streaming tables in Delta Live Tables? I'm getting the error: pyspark.errors.exceptions.captured.AnalysisException: `blusmart_poc.information_schema.sessions` is not a Delta table.

3 More Replies
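
For readers hitting the same wall: when apply_changes() cannot express the merge logic, a common workaround is to run the upsert as a regular Structured Streaming job using foreachBatch with a Delta MERGE. A minimal sketch, assuming a Delta target silver.sessions keyed by session_id (all names are illustrative, not from the thread):

from delta.tables import DeltaTable

def upsert_to_target(microbatch_df, batch_id):
    # Merge each micro-batch into the target Delta table
    target = DeltaTable.forName(microbatch_df.sparkSession, "silver.sessions")
    (target.alias("t")
        .merge(microbatch_df.alias("s"), "t.session_id = s.session_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute())

(spark.readStream.table("bronze.sessions_raw")
    .writeStream
    .foreachBatch(upsert_to_target)
    .option("checkpointLocation", "/tmp/checkpoints/sessions")  # hypothetical path
    .start())

Note the AnalysisException quoted above: MERGE requires the target to be a Delta table, and an information_schema view is not one.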
by Valentin1 (New Contributor III)
  • 6173 Views
  • 6 replies
  • 3 kudos

Delta Live Tables Incremental Batch Loads & Failure Recovery

Hello Databricks community, I'm working on a pipeline and would like to implement a common use case using Delta Live Tables. The pipeline should include the following steps: incrementally load data from Table A as a batch. If the pipeline has previously...

Latest Reply
lprevost (Contributor)
  • 3 kudos

I totally agree that this is a gap in the Databricks solution. This gap exists between a static read and real-time streaming. My problem (and I suspect there are many similar use cases) is that I have slowly changing data coming into structured folders via ...

5 More Replies
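
One way to cover this use case is Auto Loader inside a triggered DLT pipeline: each run processes only files that arrived since the last successful run, and progress is checkpointed, so a failed run resumes where it left off. A minimal sketch, with an assumed landing path and format:

import dlt

@dlt.table(comment="Incrementally loaded bronze table")
def table_a_bronze():
    # Auto Loader tracks which files have already been ingested across runs
    return (spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", "json")   # assumed source format
        .load("/mnt/landing/table_a/"))        # hypothetical path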
by sarguido (New Contributor II)
  • 2869 Views
  • 5 replies
  • 2 kudos

Delta Live Tables: bulk import of historical data?

Hello! I'm very new to working with Delta Live Tables and I'm having some issues. I'm trying to import a large amount of historical data into DLT. However, letting the DLT pipeline run forever doesn't work with the database we're trying to import from...

Latest Reply
Anonymous (Not applicable)
  • 2 kudos

Hi @Sarah Guido, thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers y...

4 More Replies
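
For one-time historical backfills, newer DLT releases support feeding a single streaming table from multiple append flows, so a bounded historical source can run to completion alongside the ongoing feed. A sketch under that assumption (paths and names are illustrative):

import dlt

dlt.create_streaming_table("events")

@dlt.append_flow(target="events")
def ongoing_feed():
    # continuous feed of newly arriving data
    return (spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/mnt/landing/events/"))

@dlt.append_flow(target="events")
def historical_backfill():
    # bounded historical source: drains once, then produces nothing new
    return (spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/mnt/archive/events_history/"))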
by jfvizoso (New Contributor II)
  • 7733 Views
  • 5 replies
  • 0 kudos

Can I pass parameters to a Delta Live Table pipeline at running time?

I need to execute a DLT pipeline from a Job, and I would like to know if there is any way of passing a parameter. I know you can have settings in the pipeline that you use in the DLT notebook, but it seems you can only assign values to them when crea...

Latest Reply
lprevost (Contributor)
  • 0 kudos

This seems to be the key to this question: parameterize for dlt. My understanding is that you can add the parameter either in the DLT settings UI via Advanced Config / Add Configuration (key, value dialog), or via the corresponding pipeline set...

4 More Replies
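
For reference, values added under the pipeline's Advanced Configuration can be read in the DLT notebook through spark.conf. A minimal sketch (the key mypipeline.start_date and the table names are hypothetical):

import dlt

start_date = spark.conf.get("mypipeline.start_date", "2023-01-01")  # default if unset

@dlt.table
def filtered_orders():
    # illustrative use of the parameter in a filter
    return spark.read.table("bronze.orders").where(f"order_date >= '{start_date}'")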
by Phani1 (Valued Contributor II)
  • 6453 Views
  • 6 replies
  • 0 kudos

Data Quality in Databricks

Hi Databricks Team, we would like to implement data quality rules in Databricks. Apart from DLT, do we have any standard approach to perform/apply data quality rules on the bronze layer before proceeding to the silver and gold layers?

Latest Reply
joarobles (New Contributor III)
  • 0 kudos

Looks nice! However, I don't see Databricks support in the docs.

5 More Replies
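
For context, the DLT-native mechanism the thread contrasts against is expectations, declared on the bronze-to-silver hop. A minimal sketch with illustrative rule names and columns:

import dlt

@dlt.table
@dlt.expect_or_drop("valid_id", "id IS NOT NULL")   # drop violating rows
@dlt.expect("non_negative_amount", "amount >= 0")   # keep rows, record violations
def silver_orders():
    return dlt.read("bronze_orders")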
by amartinez (New Contributor III)
  • 4070 Views
  • 6 replies
  • 5 kudos

Workaround for GraphFrames not working on Delta Live Table?

According to this page, the GraphFrames package has been included in the Databricks Runtime since at least 11.0. However, trying to run a connected components algorithm inside a Delta Live Table notebook yields the error java.lang.ClassNotFoundException: or...

Latest Reply
lprevost (Contributor)
  • 5 kudos

I'm also trying to use GraphFrames inside a DLT pipeline. I get an error that graphframes is not installed on the cluster. I'm using it successfully in test notebooks using the ML version of the cluster. Is there a way to use this inside a DLT job?

5 More Replies
by Yash_542965 (New Contributor II)
  • 1239 Views
  • 1 reply
  • 0 kudos

DLT aggregation problem

I'm utilizing SQL to perform aggregation operations within the gold layer of a DLT pipeline. However, I'm encountering an error when running the pipeline while attempting to return a data frame using spark.sql. Could anyone please assist me with the SQL...

Latest Reply
lucasrocha (Contributor II)
  • 0 kudos

Hello @Yash_542965, I hope this message finds you well. Could you please share a sample of the code you are using so that we can check it further? Best regards, Lucas Rocha

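
Aggregating via spark.sql inside a gold table generally works when upstream pipeline tables are referenced through the LIVE schema. A minimal sketch with illustrative names, not the poster's actual code:

import dlt

@dlt.table(comment="Daily revenue aggregation")
def gold_daily_revenue():
    return spark.sql("""
        SELECT order_date, SUM(amount) AS revenue
        FROM LIVE.silver_orders
        GROUP BY order_date
    """)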
by User16752244127 (Contributor)
  • 793 Views
  • 1 reply
  • 0 kudos
Latest Reply
lucasrocha (Contributor II)
  • 0 kudos

Hello @User16752244127, I hope this message finds you well. Delta Live Tables supports loading data from any data source supported by Databricks. You can find the supported data sources here: Connect to data sources; JDBC is one of them. You can a...

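
A minimal sketch of a JDBC-backed DLT table, assuming a PostgreSQL source; the URL, table, and secret names are placeholders. Each pipeline update re-reads the source as a batch:

import dlt

@dlt.table
def postgres_customers():
    return (spark.read.format("jdbc")
        .option("url", "jdbc:postgresql://<host>:5432/mydb")   # placeholder
        .option("dbtable", "public.customers")                 # placeholder
        .option("user", dbutils.secrets.get("my-scope", "db-user"))
        .option("password", dbutils.secrets.get("my-scope", "db-password"))
        .load())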
by kskistad (New Contributor III)
  • 4379 Views
  • 3 replies
  • 4 kudos

Resolved! Streaming Delta Live Tables

I'm a little confused about how streaming works with DLT. My first question is: what is the difference in behavior if you set the pipeline mode to "Continuous" but in your notebook you don't use the "streaming" prefix on table statements, and simila...

Latest Reply
Harsh141220 (New Contributor II)
  • 4 kudos

Is it possible to have custom upserts in streaming tables in a Delta Live Tables pipeline? Use case: I am trying to maintain a valid session based on a timestamp column and want to upsert to the target table. Tried going through the documentation but dl...

2 More Replies
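
The short version is that the read type decides how much gets reprocessed, while the pipeline mode (Triggered vs. Continuous) decides when updates run. A minimal sketch of both flavors (names illustrative):

import dlt

@dlt.table
def dim_customers():
    # batch read: recomputed from its input on each pipeline update
    return spark.read.table("bronze.customers")

@dlt.table
def fact_events():
    # streaming read: only new input rows are processed on each update
    return spark.readStream.table("bronze.events")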
by daz (New Contributor III)
  • 5725 Views
  • 9 replies
  • 3 kudos

DLT managed by non-existent pipeline

I am building out a new DLT pipeline and have since had to rebuild it from scratch. Having deleted the old pipeline and constructed a new one, I now get this error: Table 'X' is already managed by pipeline 'Y'. As I only have the one pipeline, how would...

Latest Reply
Shinaider777 (New Contributor II)
  • 3 kudos

Rename your function under @dlt.table, for example:

@dlt.table(
    comment="example",
    table_properties={"example": "example"},
    partition_cols=["a", "b", "c"],
)
def modify_this_name():

8 More Replies
by isaac_gritz (Valued Contributor II)
  • 7717 Views
  • 1 reply
  • 2 kudos

Change Data Capture with Databricks

How to leverage Change Data Capture (CDC) from your databases to Databricks. Change Data Capture allows you to ingest and process only changed records from database systems to dramatically reduce data processing costs and enable real-time use cases suc...

Latest Reply
prasad95 (New Contributor III)
  • 2 kudos

Hi @isaac_gritz, can you provide any reference resource for achieving AWS DynamoDB CDC to Delta tables? Thank you.

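
Once a change feed lands in bronze, the standard DLT mechanism is apply_changes(). A minimal sketch; the table names, key, sequencing column, and operation column are illustrative:

import dlt

dlt.create_streaming_table("silver_users")

dlt.apply_changes(
    target="silver_users",
    source="bronze_users_cdc",
    keys=["user_id"],
    sequence_by="change_timestamp",           # orders out-of-order events
    apply_as_deletes="operation = 'DELETE'",  # assumed operation column
    stored_as_scd_type=1,                     # keep only the latest row per key
)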
by User16826992185 (New Contributor II)
  • 8094 Views
  • 2 replies
  • 3 kudos

Databricks Auto-Loader vs. Delta Live Tables

What is the difference between Databricks Auto Loader and Delta Live Tables? Both seem to manage ETL for you, but I'm confused about where to use one vs. the other.

Latest Reply
SteveL (New Contributor II)
  • 3 kudos

You say "...would be a piece..." and "...DLT would pick up...". Is DLT built on top of Auto Loader?

1 More Reply
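
They are complementary rather than competing: Auto Loader is a file-ingestion source (the cloudFiles format), while DLT is a pipeline framework that can host it. A minimal sketch of Auto Loader running inside a DLT table (path and format assumed):

import dlt

@dlt.table
def bronze_raw():
    return (spark.readStream
        .format("cloudFiles")                 # Auto Loader
        .option("cloudFiles.format", "csv")   # assumed file format
        .load("/mnt/landing/raw/"))           # hypothetical path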
by Murthy1 (Contributor II)
  • 4041 Views
  • 2 replies
  • 0 kudos

How can we use an existing all-purpose cluster for a DLT pipeline?

I understand that DLT is a separate job compute, but I would like to use an existing all-purpose cluster for the DLT pipeline. Is there a way I can achieve this?

Latest Reply
Kaniz_Fatma (Community Manager)
  • 0 kudos

Thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers your question? This...

1 More Reply
by famous_jt33 (New Contributor)
  • 1334 Views
  • 2 replies
  • 2 kudos

SQL UDFs for DLT pipelines

I am trying to implement a UDF for a DLT pipeline. I have seen documentation stating that it is possible, but I am getting an error after adding a SQL UDF to a cell in the notebook attached to the pipeline. The aim is to have the UDF in a separat...

Latest Reply
6502 (New Contributor III)
  • 2 kudos

You can't. The SQL support on a DLT pipeline cluster is limited compared to a normal notebook. You can still define a UDF in Python using, of course, a Python notebook. In this case, you can use the spark.sql() function to execute your original SQL cod...

1 More Reply
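
A minimal sketch of that workaround, with illustrative function and table names: define and register the UDF in Python, then call it from SQL executed via spark.sql():

import dlt
from pyspark.sql.types import StringType

def normalize_code(value):
    return value.strip().upper() if value else None

spark.udf.register("normalize_code", normalize_code, StringType())

@dlt.table
def silver_products():
    return spark.sql(
        "SELECT normalize_code(code) AS code, name FROM LIVE.bronze_products"
    )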
by Srikanth_Gupta_ (Valued Contributor)
  • 4155 Views
  • 2 replies
  • 0 kudos
Latest Reply
BilalAslamDbrx (Honored Contributor III)
  • 0 kudos

I'll try to answer this in the simplest possible way. 1. Spark is an imperative programming framework: you tell it what to do, and it does it. DLT is declarative: you describe what you want the datasets to be (i.e., the transforms), and it takes care ...

1 More Reply
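
A side-by-side sketch of that contrast, with illustrative names (the DLT half runs only inside a pipeline):

# Imperative Spark: you say how and when to write.
df = spark.read.table("bronze.orders").where("amount > 0")
df.write.mode("overwrite").saveAsTable("silver.orders")

# Declarative DLT: you define what the dataset is; the framework
# handles dependencies, ordering, and materialization.
import dlt

@dlt.table
def silver_orders():
    return dlt.read("bronze_orders").where("amount > 0")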