Thank you for the insights. I will probably be using TEMPORARY STREAMING LIVE VIEW then. I am parameterizing Spark DataFrames, so I can do something like this: spark.sql("SELECT * FROM {df}", df=my_df)
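For reference, a minimal sketch of that call pattern, assuming Spark 3.3+, where spark.sql substitutes keyword arguments into the query string and exposes DataFrame arguments as auto-generated temporary views (my_df and its columns are hypothetical):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    my_df = spark.createDataFrame([("a", 1), ("b", 2)], ["id", "value"])

    # {df} is replaced with the name of a temp view generated for my_df
    result = spark.sql("SELECT * FROM {df} WHERE value > 1", df=my_df)
    result.show()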
Hey adriennn, thanks for the long answer. So 1 is just not an option; the goal is to do a whole migration. Option 2 would mean that all those temporary tables/views I make to create a gold table would become permanent through the use of LIVE.temp1, wou...
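One way to keep those intermediates out of the target schema is sketched below, assuming the temporary=True flag on @dlt.table works as documented; table and column names are hypothetical, and spark is the ambient session a DLT pipeline provides:

    import dlt

    @dlt.table(name="temp1", temporary=True)  # stays inside the pipeline, not published
    def temp1():
        return spark.read.table("silver.orders")  # hypothetical source table

    @dlt.table(name="gold_orders")
    def gold_orders():
        return dlt.read("temp1").groupBy("customer_id").count()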
The problem I want to solve: On-premise I have a ton of complicated SQL code in my gold layer using temp tables for intermediate results. No way around that. I want to migrate the gold layer to DLT. I thought the best way to do this is to use paramete...
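As a rough illustration of that migration shape, here is a sketch in which each on-prem temp table becomes a DLT view chained into the final gold table; all names are hypothetical, and the dlt usage is the standard Python decorator pattern:

    import dlt

    # On-prem: CREATE TEMP TABLE step1 AS SELECT ...; the gold query then reads step1.
    # DLT: each intermediate becomes a view, referenced downstream via dlt.read.

    @dlt.view(name="step1")
    def step1():
        return spark.sql("SELECT customer_id, amount FROM LIVE.silver_orders")

    @dlt.table(name="gold_summary")
    def gold_summary():
        return dlt.read("step1").groupBy("customer_id").sum("amount")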
Hello Alberto, thanks for the quick answer! Actually I want to pass a DataFrame to the function, like:

@dlt.table(
    name="test"
)
def create_table():
    test_df = spark.createDataFrame(["9","10","11","13"], "string").toDF("id")
    final_df = spark.s...
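For context, the truncated line was presumably heading toward the parameterized call, something like the sketch below; the bound1 name is borrowed from the error in the original question, and whether this runs inside a DLT pipeline is exactly what is at issue:

    import dlt

    @dlt.table(name="test")
    def create_table():
        test_df = spark.createDataFrame(["9", "10", "11", "13"], "string").toDF("id")
        # Hypothetical completion: substitute the DataFrame and a literal
        # via keyword arguments (plain Spark 3.3+ behavior).
        final_df = spark.sql(
            "SELECT * FROM {df} WHERE id > {bound1}",
            df=test_df,
            bound1="10",
        )
        return final_df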
Did anyone make parameterized spark.sql() work with Delta Live Tables? Using DLT pipelines, I get the error: "TypeError: _override_spark_functions.<locals>._dlt_sql_fn() got an unexpected keyword argument 'bound1'" I checked that the cluster the pipel...
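A workaround sketch for that TypeError, assuming a temp view registered inside the function is visible to DLT's wrapped spark.sql and that the query string then needs no extra keyword arguments (view and parameter names are hypothetical):

    import dlt

    @dlt.table(name="test")
    def create_table():
        test_df = spark.createDataFrame(["9", "10", "11", "13"], "string").toDF("id")
        # Register under an explicit name instead of passing df= as a kwarg,
        # since DLT's _dlt_sql_fn wrapper rejects keyword arguments.
        test_df.createOrReplaceTempView("test_df_view")
        bound1 = "10"  # hypothetical bound value, inlined via f-string
        return spark.sql(f"SELECT * FROM test_df_view WHERE id > '{bound1}'")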