Hi all,
I hope you can help me figure out what I'm missing.
I'm trying to do a simple thing: read data from the ingestion zone (CSV files saved to an Azure Storage Account) with a Delta Live Tables pipeline, then share the resulting table with another Databricks workspace via Delta Sharing.
Here is the code for the DLT pipeline:
import dlt
from pyspark.sql.functions import *
from pyspark.sql.types import *

raw_path = "/mnt/ingestion/sensors-readings"

@dlt.table(
    comment="Contains data received from the sensors API"
)
def sensors_raw():
    # Auto Loader picks up only newly ingested files.
    df = (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "csv")
        .option("header", True)
        .load(raw_path)
    )
    return df
The pipeline runs successfully and the table is added to the target schema.
But when I try to create a share, the table is not displayed among the tables available for sharing.
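For context, this is roughly how I'm creating the share from a notebook (a sketch; the share name sensors_share and the three-level name main.sensors.sensors_raw are placeholders for my actual catalog, schema, and table):

# Create the share and try to add the DLT output table to it.
spark.sql("CREATE SHARE IF NOT EXISTS sensors_share")
spark.sql("ALTER SHARE sensors_share ADD TABLE main.sensors.sensors_raw")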
The tables that do show up as available for sharing were created from a notebook with df.write.saveAsTable().
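Those tables were written with something like this (simplified; the table name is a placeholder):

# Batch read of the same CSV files, written as a regular managed table.
df = spark.read.option("header", True).csv(raw_path)
df.write.mode("overwrite").saveAsTable("main.sensors.sensors_batch")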
Reading the available documentation, I've seen that tables of type STREAMING_TABLE can't be shared via Delta Sharing.
Maybe I'm missing some setting? It would be great if you could help me figure this out.
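If streaming tables really can't be shared, my fallback idea is a small scheduled job that materializes the streaming table into a regular Delta table, which I could then add to the share. A sketch, assuming the DLT output lives at main.sensors.sensors_raw (all names are placeholders):

# Copy the streaming table's current contents into a plain Delta table
# that Delta Sharing accepts, then add that copy to the share.
spark.sql("""
    CREATE OR REPLACE TABLE main.sensors.sensors_raw_shared
    AS SELECT * FROM main.sensors.sensors_raw
""")
spark.sql("ALTER SHARE sensors_share ADD TABLE main.sensors.sensors_raw_shared")

But that feels like an extra copy of the data, so I'd prefer a proper setting if one exists.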
Thanks.