Most Python examples show the structure of the foreachBatch method as:

def foreachBatchFunc(batchDF, batchId):
    batchDF.createOrReplaceTempView('viewName')
    (
        batchDF
        ._jdf.sparkSession()
        .sql(
            ...
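To make that shape concrete, here is a minimal runnable sketch of the same pattern; the rate source, the query, and the checkpoint path are my own placeholders, not from the original post:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

def foreachBatchFunc(batchDF, batchId):
    # Expose the micro-batch to SQL via a temp view
    batchDF.createOrReplaceTempView('viewName')
    # _jdf.sparkSession() reaches the underlying *Java* session, so .sql()
    # returns a Java DataFrame handle rather than a PySpark DataFrame
    resJdf = (
        batchDF
        ._jdf.sparkSession()
        .sql("SELECT count(*) AS n FROM viewName")  # placeholder query
    )

(
    spark.readStream
    .format("rate")  # placeholder streaming source
    .load()
    .writeStream
    .foreachBatch(foreachBatchFunc)
    .option("checkpointLocation", "/tmp/checkpoints/demo")  # placeholder path
    .start()
)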
Using Azure Databricks, I can create a DLT table in Python using:

import dlt
import pyspark.sql.functions as fn
from pyspark.sql.types import StringType

@dlt.table(
    name = "<<landingTable>>",
    path = "<<storage path>>",
    comment = "<< descri...
Just found a solution... Need to convert the Java DataFrame (jdf) back to a Python DataFrame:

from pyspark import sql

def batchFunc(batchDF, batchId):
    batchDF.createOrReplaceTempView('viewName')
    sparkSession = batchDF._jdf.sparkSession()
    resJdf = sparkSes...
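The answer is truncated, but the usual completion of this pattern wraps the Java result back into a PySpark DataFrame, roughly as below; the SQL query and the write target are placeholders of mine, and on newer PySpark versions the batchDF.sparkSession property avoids the _jdf round-trip entirely:

from pyspark import sql

def batchFunc(batchDF, batchId):
    batchDF.createOrReplaceTempView('viewName')
    # Underlying Java SparkSession (a py4j handle)
    sparkSession = batchDF._jdf.sparkSession()
    # .sql() on the Java session returns a Java DataFrame, not a PySpark one
    resJdf = sparkSession.sql("SELECT * FROM viewName")  # placeholder query
    # Wrap the Java DataFrame back into a PySpark DataFrame
    resDf = sql.DataFrame(resJdf, batchDF.sql_ctx)
    resDf.write.format("delta").mode("append").saveAsTable("target_table")  # placeholder sink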
No I didn't. In fact I had to stop using DLT when another issue came up around performing a partial / streaming increment of a large platinum aggregation table. I ended up going back to using: a Kafka reader (see Consume Data From Apache Kafka), Stream...
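For reference, a bare Structured Streaming Kafka reader of the kind this fallback describes might look like the sketch below; the broker address, topic, target table, and checkpoint path are all placeholders:

from pyspark.sql import SparkSession
import pyspark.sql.functions as fn

spark = SparkSession.builder.getOrCreate()

kafkaDf = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                     # placeholder topic
    .option("startingOffsets", "earliest")
    .load()
)

# Kafka delivers key/value as binary; cast to strings before downstream parsing
parsed = kafkaDf.select(
    fn.col("key").cast("string").alias("key"),
    fn.col("value").cast("string").alias("value"),
    fn.col("timestamp"),
)

(
    parsed.writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/kafka")  # placeholder path
    .toTable("bronze_events")  # placeholder target table
)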