Data Engineering

Forum Posts

by PriyaAnanthram (Contributor III)
  • 2048 Views
  • 6 replies
  • 0 kudos

Resolved! Change data feed on Delta Live Tables

I have a delta live table where I read CDC data and merge it into silver using apply changes. In silver, can I find out what data has changed since the last run, similar to querying the change data feed with table_changes?

Latest Reply
PriyaAnanthram
Contributor III
  • 0 kudos

I also have a requirement where I write to a live table (materialized view) with CDF enabled. I want to see the changes, but here too I see overwrites happening after the DLT pipeline runs.

5 More Replies
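
Below is a minimal sketch (not from the thread) of inspecting the change feed on the silver target, assuming delta.enableChangeDataFeed is set on that table; the table name and starting version are made up.

```python
# Sketch only: hypothetical table name and version; requires CDF enabled on the target.
changes = (
    spark.read.format("delta")
    .option("readChangeFeed", "true")
    .option("startingVersion", 5)      # e.g. the table version recorded after the previous run
    .table("silver.customers")         # hypothetical APPLY CHANGES target
)

# Each row carries _change_type, _commit_version and _commit_timestamp metadata columns.
changes.show(truncate=False)
```

Note that when the target is fully recomputed (as with a materialized view), the feed may largely show delete/insert pairs from the rewrite, which matches the overwrites described above.
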
by Mike_016978 (New Contributor II)
  • 4145 Views
  • 3 replies
  • 3 kudos

Resolved! What are differences between Materialized view and Streaming table in delta live table?

Hi, I was wondering what the differences are between a materialized view and a streaming table. Which one should I use when extracting data from a bronze table to a silver table, since I found that both CREATE LIVE TABLE and CREATE STREAMING LIVE TABLE could a...

Latest Reply
Anonymous
Not applicable
  • 3 kudos

Hi @Mike Chen, thank you for your question! To assist you better, please take a moment to review the answer and let me know if it best fits your needs. Please help us select the best solution by clicking "Select As Best" if it does. Your feedback wi...

2 More Replies
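
A rough Python sketch of the two flavours, with made-up table names: a live table (materialized view) is recomputed from its full input on each update, while a streaming table only processes new rows from an append-only source.

```python
import dlt
from pyspark.sql.functions import col

# Materialized view / live table: recomputed from the whole bronze table on each update.
@dlt.table(name="silver_orders_mv")
def silver_orders_mv():
    return dlt.read("bronze_orders").where(col("status").isNotNull())

# Streaming table: incrementally processes only rows appended to bronze since the last update.
@dlt.table(name="silver_orders_st")
def silver_orders_st():
    return dlt.read_stream("bronze_orders").where(col("status").isNotNull())
```
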
by YSF (New Contributor III)
  • 625 Views
  • 1 reply
  • 0 kudos

Delta Live Table & Autoloader adding a non-existent column

I'm trying to set up Auto Loader to read some CSV files. I tried both Auto Loader with the DLT decorator and Auto Loader by itself. The first column of the data is called "run_id"; when I do a spark.read.csv() directly on the file it com...

Latest Reply
Rishabh264
Honored Contributor II
  • 0 kudos

Can you attach the exact output so that I can have a look at it?

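
For context, a minimal Auto Loader + DLT sketch with a made-up landing path; with schema inference, Auto Loader also adds a _rescued_data column by default, which can look like an extra column compared with a plain spark.read.csv().

```python
import dlt

@dlt.table(name="runs_raw")
def runs_raw():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "csv")
        .option("header", "true")
        .load("/mnt/landing/runs/")   # hypothetical landing path
    )
```
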
by arw1070 (New Contributor II)
  • 983 Views
  • 2 replies
  • 0 kudos

Downstream delta live table is unable to read data frame from upstream table

I have been trying to implement Delta Live Tables in a pre-existing workflow. Currently I'm trying to create two tables: appointments_raw and notes_raw, where notes_raw is "downstream" of appointments_raw. Following this as a reference, I'm at...

Latest Reply
Anonymous
Not applicable
  • 0 kudos

@Anna Wuest: Could you please send me the code snippet here? Thanks.

1 More Reply
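
The usual pattern for this kind of dependency, sketched with made-up sources: the downstream function reads the upstream table through dlt.read() (or LIVE.<table> in SQL) rather than a plain table reference.

```python
import dlt

@dlt.table(name="appointments_raw")
def appointments_raw():
    return spark.read.format("delta").load("/mnt/landing/appointments")   # hypothetical source

@dlt.table(name="notes_raw")
def notes_raw():
    appts = dlt.read("appointments_raw")                                  # upstream DLT table
    notes = spark.read.format("delta").load("/mnt/landing/notes")         # hypothetical source
    return notes.join(appts.select("appointment_id"), "appointment_id", "left_semi")
```
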
by Aj2 (New Contributor III)
  • 8950 Views
  • 1 reply
  • 4 kudos
Latest Reply
Ajay-Pandey
Esteemed Contributor III
  • 4 kudos

A live table or view always reflects the results of the query that defines it, including when the query defining the table or view is updated, or an input data source is updated. Like a traditional materialized view, a live table or view may be entir...

by logan0015 (Contributor)
  • 527 Views
  • 1 reply
  • 3 kudos

How to move the "__apply_changes_storage_mytablename" table when creating a streaming live table?

As the title suggests, whenever I create a streaming live table it creates a __apply_changes_storage_<mytablename> table in the database on Databricks. Is there a way to specify a different cloud location for these files?

Latest Reply
Anonymous
Not applicable
  • 3 kudos

Hi @Logan Nicol, great to meet you, and thanks for your question! Let's see if your peers in the community have an answer first; otherwise, Bricksters will get back to you soon. Thanks!

by PrebenOlsen (New Contributor III)
  • 1425 Views
  • 4 replies
  • 1 kudos

GroupBy in delta live tables fails with error "RuntimeError: Query function must return either a Spark or Koalas DataFrame"

I have a delta live table that I'm trying to run GroupBy on, but I'm getting an error: "RuntimeError: Query function must return either a Spark or Koalas DataFrame". Here is my code: @dlt.table def groups_hierarchy(): df = dlt.read_stream("groups_h...

Latest Reply
Vidula
Honored Contributor
  • 1 kudos

Hi @Preben Olsen, does @Debayan Mukherjee's response answer your question? If yes, would you be happy to mark it as best so that other members can find the solution more quickly? We'd love to hear from you. Thanks!

3 More Replies
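
That error usually means the decorated function did not return a Spark DataFrame (for example, it only assigned the aggregation to a variable). A minimal sketch with made-up column names; for an aggregation over the full input it is often simpler to read the upstream table with dlt.read() so the result is a materialized view.

```python
import dlt
from pyspark.sql import functions as F

@dlt.table(name="groups_hierarchy_agg")
def groups_hierarchy_agg():
    df = dlt.read("groups_hierarchy")              # upstream live table
    return (                                       # returning the aggregated DataFrame is required
        df.groupBy("group_id")
          .agg(F.count("*").alias("member_count"))
    )
```
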
by aksharamaham (New Contributor)
  • 977 Views
  • 2 replies
  • 0 kudos

Resolved! Delta Live Table - How to get details of which records were excluded in Quality Checks?

I've been experimenting with DLT and it works well. I'd like to understand where I can see details of which records didn't meet the quality criteria.

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Paresh J, the event log captures data quality metrics based on the expectations defined in your pipelines. (Source)

1 More Reply
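
A minimal sketch of pulling expectation metrics out of the event log, assuming a pipeline whose storage location is known (the path below is hypothetical); the event log reports pass/fail counts per expectation rather than the excluded rows themselves.

```python
# Sketch only: adjust storage_location to your pipeline's storage setting.
storage_location = "dbfs:/pipelines/my-pipeline"   # hypothetical

events = spark.read.format("delta").load(f"{storage_location}/system/events")

# Data quality metrics are recorded on flow_progress events inside the JSON "details" column.
expectations = (
    events.filter("event_type = 'flow_progress'")
    .selectExpr(
        "timestamp",
        "origin.flow_name AS flow_name",
        "details:flow_progress.data_quality.expectations AS expectations",
    )
    .filter("expectations IS NOT NULL")
)
expectations.show(truncate=False)
```
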
by Digan_Parikh (Valued Contributor)
  • 844 Views
  • 1 reply
  • 0 kudos

Resolved! Delta Live Table - landing database?

Where do you specify what database the DLT tables land in?

Latest Reply
Digan_Parikh
Valued Contributor
  • 0 kudos

The target key, when creating the pipeline, specifies the database that the tables get published to. Documented here: https://docs.databricks.com/data-engineering/delta-live-tables/delta-live-tables-user-guide.html#publish-tables
