Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Seeing results of materialized views while running notebooks

AndreasB
New Contributor II

Hi!

My team is currently trying out Delta Live Tables (DLT) for managing our ETL pipelines. We have notebooks that transform data using Spark SQL, and we include them in a DLT pipeline. We want to be able to both run the pipeline as a whole and open a specific notebook, run it on its own, and see the materialized views we create (we use dlt.table()), without having to change the schemas in the FROM statements.

For example, say we have two notebooks, Customer and Sales, in a DLT pipeline.

In Customer we have the following code:

import dlt

# Materialized view: reads the raw extract and renames the key and name columns.
@dlt.table()
def CUSTOMER_TABLE():
    df = spark.sql("""
        SELECT
            Customer_KEY AS CUSTOMER_SK,
            Name         AS CUSTOMER_NAME
        FROM CUSTOMER.EXTRACT_CUSTOMER
    """)
    return df

CUSTOMER_TABLE gets saved to the schema DIMENSIONS in Unity Catalog.

In Sales we have:

import dlt

# Materialized view: joins the sales extract to the customer dimension.
@dlt.table()
def SALES_TABLE():
    df = spark.sql("""
        SELECT
            S.Item          AS ITEM_NAME,
            S.Amount        AS SALES_AMOUNT,
            C.CUSTOMER_NAME
        FROM SALES.EXTRACT_SALES AS S
        LEFT JOIN MAIN.DIMENSIONS.CUSTOMER_TABLE AS C
            ON S.Customer_KEY = C.CUSTOMER_SK  -- illustrative join key
    """)
    return df

We can run the Sales notebook and see the results from SALES_TABLE via display(df). However, if we trigger the DLT pipeline, we get the following warning:

"
Your query 'SALES_TABLE' reads from '<catalog>.<schema>.CUSTOMER_TABLE' but must read from 'LIVE.CUSTOMER_TABLE' instead.
Always use the LIVE keyword when referencing tables from the same pipeline so that DLT can track the dependencies in the pipeline.
"

If we then change MAIN.DIMENSIONS.CUSTOMER_TABLE --> LIVE.CUSTOMER_TABLE, we can run the DLT pipeline. But running the Sales notebook on its own no longer works; we get an error that says:

"
Failed to read dataset 'CUSTOMER_TABLE'. Dataset is not defined in the pipeline.
"

Right now we have a workaround that dynamically changes the schema names using a parameter set in the DLT pipeline configuration (Settings --> Advanced --> Configuration), roughly as sketched below.
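The sketch below shows the idea; the configuration key name customer_schema is just what we happened to call it. In the pipeline configuration we set it to LIVE, and when the notebook runs standalone the default catalog.schema is used instead:

import dlt

# Assumption: "customer_schema" is set to "LIVE" under Settings --> Advanced --> Configuration;
# outside the pipeline the key is absent and the default below applies.
CUSTOMER_SCHEMA = spark.conf.get("customer_schema", "MAIN.DIMENSIONS")

@dlt.table()
def SALES_TABLE():
    return spark.sql(f"""
        SELECT
            S.Item          AS ITEM_NAME,
            S.Amount        AS SALES_AMOUNT,
            C.CUSTOMER_NAME
        FROM SALES.EXTRACT_SALES AS S
        LEFT JOIN {CUSTOMER_SCHEMA}.CUSTOMER_TABLE AS C
            ON S.Customer_KEY = C.CUSTOMER_SK
    """)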

Is there a better solution to this? It feels natural to be able to run both the DLT pipeline and the individual notebooks without having to change schema names.

 

1 REPLY

Hi @Retired_mod, thank you for your response! I had a question about the suggestion in point 2. I did some research before this and I couldn't find a way to build a function that could recognize the context (pipeline or standalone). Could you expand on how that function could look and/or which commands I need to use?
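Something along these lines is what I was imagining, but I couldn't find a configuration key that is documented to be present only during a pipeline run (pipelines.id below is just a guess on my part):

def running_in_dlt_pipeline() -> bool:
    # Assumption: DLT sets a "pipelines.id" entry in the Spark configuration during
    # a pipeline update, and a standalone notebook run does not have it.
    return spark.conf.get("pipelines.id", "") != ""

# Hypothetical usage: pick the schema prefix for the FROM clauses based on the context.
customer_schema = "LIVE" if running_in_dlt_pipeline() else "MAIN.DIMENSIONS"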
