rcostanza
New Contributor III
since 05-31-2022
Thursday

User Stats

  • 11 Posts
  • 0 Solutions
  • 2 Kudos given
  • 2 Kudos received

User Activity

In a DLT pipeline I have a bronze table that ingests files using Autoloader, and a derived silver table that, for this example, just stores the number of rows for each file ingested into bronze. The basic code example: import dlt from pyspark.sql impo...
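A minimal sketch of the pipeline described in that post, under assumptions: the landing path, file format, and column names are illustrative, and the silver table is written here as a per-file row count derived from bronze.

```python
import dlt
from pyspark.sql import functions as F

@dlt.table
def bronze():
    # Streaming ingest with Autoloader; path and format are placeholders
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/Volumes/main/default/landing")
        # _metadata.file_path records which source file each row came from
        .withColumn("source_file", F.col("_metadata.file_path"))
    )

@dlt.table
def silver():
    # Row count per ingested file, derived from bronze
    return dlt.read("bronze").groupBy("source_file").count()
```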
What I'm trying to achieve: ingest files into bronze tables with Autoloader, then produce Kafka messages for each file ingested using a DLT sink. The issue: the latency between a file being ingested and its message being produced gets exponentially higher the more tables ar...
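A rough sketch of the bronze-to-Kafka wiring that post describes, using a DLT sink plus an append flow; the broker address, topic, and column names are assumptions, and a real pipeline would likely deduplicate so it emits one message per file rather than one per bronze row.

```python
import dlt
from pyspark.sql import functions as F

# Kafka sink target; broker and topic are placeholders
dlt.create_sink(
    name="file_events",
    format="kafka",
    options={
        "kafka.bootstrap.servers": "broker:9092",
        "topic": "ingested-files",
    },
)

@dlt.append_flow(name="bronze_to_kafka", target="file_events")
def bronze_to_kafka():
    # Stream rows from the Autoloader-fed bronze table and serialize them
    # as Kafka messages keyed by the source file path
    return dlt.read_stream("bronze").select(
        F.col("source_file").alias("key"),
        F.to_json(F.struct("source_file")).alias("value"),
    )
```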
I have a small (under 20 tables, all streaming) DLT pipeline running in triggered mode, scheduled every 15 minutes during the workday. For development I've set `pipelines.clusterShutdown.delay` to avoid having to start a cluster every update. I've noticed...
On the pricing page for Lakeflow Declarative Pipelines (formerly DLT), serverless shows a single cost of $0.35/DBU for both standard and performance optimized. But in the feature table below, it says standard is "Up to 70% cheaper than running...
I have a notebook where at the beginning I load several dataframes and cache them using localCheckpoint(). I run this notebook using an all-purpose cluster with autoscaling enabled, with a minimum of 1 worker and a maximum of 2. The cluster often autoscale...
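A minimal sketch of the caching pattern that post describes; the table and column names are placeholders. localCheckpoint() materializes the data on executor local storage and truncates lineage, which is why losing a worker to downscaling can force recomputation or fail downstream reads.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Load once at the top of the notebook, then cut the lineage and keep the
# result on the executors. Blocks live on executor local storage, so they
# can be lost when autoscaling removes a worker.
df = spark.read.table("main.default.some_table")
df = df.localCheckpoint(eager=True)

# Later cells reuse the checkpointed dataframe without re-reading the source
summary = df.groupBy("some_column").count()
```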