Hi!
My team is currently trying out Delta Live Tables (DLT) for managing our ETL pipelines. The issue we're running into: we have notebooks that transform data using Spark SQL, and we include these in a DLT pipeline. We want to be able to both run the pipeline as a whole and open a specific notebook, run it on its own, and see the materialized views we create (we use dlt.table()), without having to change the schemas in the FROM statements.
For example, say we have two notebooks, Customer and Sales, in a DLT pipeline.
In Customer we have the following code:
import dlt

@dlt.table()
def CUSTOMER_TABLE():
    df = spark.sql("""
        SELECT
            Customer_KEY AS CUSTOMER_SK,
            Name AS CUSTOMER_NAME
        FROM CUSTOMER.EXTRACT_CUSTOMER
    """)
    return df
CUSTOMER_TABLE gets saved to the schema DIMENSIONS in Unity Catalog.
In Sales we have:
import dlt

@dlt.table()
def SALES_TABLE():
    df = spark.sql("""
        SELECT
            Item AS ITEM_NAME,
            Amount AS SALES_AMOUNT,
            C.CUSTOMER_NAME
        FROM SALES.EXTRACT_SALES AS S
        LEFT JOIN MAIN.DIMENSIONS.CUSTOMER_TABLE AS C
            ON S.Customer_KEY = C.CUSTOMER_SK  -- join key simplified for this example
    """)
    return df
We can run the notebook Sales and see the results from SALES_TABLE via display(df). However, if we trigger the DLT pipeline, we get the following warning:
"
Your query 'SALES_TABLE' reads from '<catalog>.<schema>.CUSTOMER_TABLE' but must read from 'LIVE.CUSTOMER_TABLE' instead.
Always use the LIVE keyword when referencing tables from the same pipeline so that DLT can track the dependencies in the pipeline.
"
If we then change MAIN.DIMENSIONS.CUSTOMER_TABLE --> LIVE.CUSTOMER_TABLE, we can run the DLT pipeline. But running the Sales notebook on its own no longer works; we get an error that says:
"
Failed to read dataset 'CUSTOMER_TABLE'. Dataset is not defined in the pipeline.
"
Right now we have a workaround that dynamically swaps the schema prefix using a parameter set in the DLT pipeline configuration (Settings --> Advanced --> Configuration).
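Roughly, it looks like this (the key name table_prefix and the fallback path MAIN.DIMENSIONS are just illustrative; in the pipeline configuration the key is set to LIVE):

import dlt

# "table_prefix" is a key we define ourselves in the pipeline settings
# (Settings --> Advanced --> Configuration), where it is set to "LIVE".
# When the notebook runs interactively the key doesn't exist, so we fall
# back to the fully qualified Unity Catalog path.
prefix = spark.conf.get("table_prefix", "MAIN.DIMENSIONS")

@dlt.table()
def SALES_TABLE():
    df = spark.sql(f"""
        SELECT
            Item AS ITEM_NAME,
            Amount AS SALES_AMOUNT,
            C.CUSTOMER_NAME
        FROM SALES.EXTRACT_SALES AS S
        LEFT JOIN {prefix}.CUSTOMER_TABLE AS C
            ON S.Customer_KEY = C.CUSTOMER_SK
    """)
    return df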
Is there a better solution to this? It feels natural to be able to run both the DLT pipeline and the individual notebooks without having to change schema names.