Data Engineering

Seeing results of materialized views while running notebooks

AndreasB
New Contributor II

Hi!

My team is currently trying out Delta Live Tables (DLT) for managing our ETL pipelines. The issue we're encountering: we have notebooks that transform data using Spark SQL and include them in a DLT pipeline. We want to both run the pipeline as a whole and open a specific notebook, run it, and see the materialized views we create (we use dlt.table()), without having to change the schemas in the FROM statements.

For example, say we have two notebooks, Customer and Sales in a DLT pipeline.

In Customer we have the following code:

import dlt

@dlt.table()
def CUSTOMER_TABLE():
    # Materialized view written to the DIMENSIONS schema in Unity Catalog.
    df = spark.sql("""
        SELECT
            Customer_KEY AS CUSTOMER_SK,
            Name         AS CUSTOMER_NAME
        FROM CUSTOMER.EXTRACT_CUSTOMER
    """)
    return df

CUSTOMER_TABLE gets saved to the schema DIMENSIONS in Unity Catalog.

In Sales we have:

import dlt

@dlt.table()
def SALES_TABLE():
    df = spark.sql("""
        SELECT
            Item            AS ITEM_NAME,
            Amount          AS SALES_AMOUNT,
            C.CUSTOMER_NAME
        FROM SALES.EXTRACT_SALES
        LEFT JOIN MAIN.DIMENSIONS.CUSTOMER_TABLE AS C
            ON EXTRACT_SALES.Customer_KEY = C.CUSTOMER_SK  -- join key assumed for illustration
    """)
    return df

We can run the notebook Sales and see the results from SALES_TABLE via display(df). However, if we trigger the DLT pipeline, we get the following warning:

"
Your query 'SALES_TABLE' reads from '<catalog>.<schema>.CUSTOMER_TABLE' but must read from 'LIVE.CUSTOMER_TABLE' instead.
Always use the LIVE keyword when referencing tables from the same pipeline so that DLT can track the dependencies in the pipeline.
"

If we then change from MAIN.DIMENSIONS.CUSTOMER_TABLE --> LIVE.CUSTOMER_TABLE (roughly as in the snippet below the error message), we can run the DLT pipeline. But running the Sales notebook by itself no longer works; we get an error that says:

"
Failed to read dataset 'CUSTOMER_TABLE'. Dataset is not defined in the pipeline.
"

Right now we have a workaround that dynamically swaps the schema names using a parameter defined in the DLT pipeline configuration (Settings --> Advanced --> Configuration).
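
A rough sketch of that workaround, assuming a configuration key we define ourselves (the name pipeline.dimensions_schema and its default are placeholders, not built-in DLT settings):

import dlt

# Read the schema prefix from the pipeline configuration (Settings --> Advanced
# --> Configuration), where we set it to "LIVE"; the default kicks in when the
# notebook is run interactively outside the pipeline.
dimensions_schema = spark.conf.get("pipeline.dimensions_schema", "MAIN.DIMENSIONS")

@dlt.table()
def SALES_TABLE():
    df = spark.sql(f"""
        SELECT
            Item            AS ITEM_NAME,
            Amount          AS SALES_AMOUNT,
            C.CUSTOMER_NAME
        FROM SALES.EXTRACT_SALES
        LEFT JOIN {dimensions_schema}.CUSTOMER_TABLE AS C
            ON EXTRACT_SALES.Customer_KEY = C.CUSTOMER_SK  -- join key assumed
    """)
    return df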

Is there a better solution to this? It feels natural to be able to run both the DLT pipeline and the individual notebooks without having to change schema names.

 

2 REPLIES

Kaniz_Fatma
Community Manager

Hi @AndreasB, the issue arises because DLT requires the LIVE keyword to track dependencies within the pipeline, but this conflicts with running individual notebooks outside the pipeline context. A few options:

  1. You can continue using your current workaround but make it more streamlined by defining a parameter that switches between the LIVE context and the actual schema name. This way, you can run the notebook in both contexts without manually changing the schema names.
  2. Create modular functions that can be reused in both the DLT pipeline and standalone notebooks. For example, you can define a function that returns the schema name based on the context (pipeline or standalone) and use it in your SQL queries (see the sketch after this list).
  3. Create views that abstract the schema names. For instance, you can create a view that points to LIVE.CUSTOMER_TABLE when running in the pipeline and to MAIN.DIMENSIONS.CUSTOMER_TABLE when running standalone. This way, your SQL queries remain the same, and the view handles the context switching.
  4. Consider submitting a feature request to Databricks for better support of this use case. The ability to seamlessly switch between pipeline and standalone contexts without changing schema names would be a valuable addition to the platform.
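
A minimal sketch of points 1 and 2, assuming a configuration key you define yourself (the key name etl.run_context, its values, and the default schema are placeholders, not built-in DLT settings):

import dlt

# Helper that resolves a table reference based on a self-defined configuration
# flag. In the pipeline's Settings --> Advanced --> Configuration, set
# etl.run_context = pipeline; standalone notebook runs fall back to the default.
def qualified(table_name: str, schema: str = "MAIN.DIMENSIONS") -> str:
    if spark.conf.get("etl.run_context", "standalone") == "pipeline":
        return f"LIVE.{table_name}"
    return f"{schema}.{table_name}"

# Example usage inside a dataset definition:
@dlt.table()
def SALES_TABLE():
    return spark.sql(f"""
        SELECT S.Item AS ITEM_NAME, S.Amount AS SALES_AMOUNT, C.CUSTOMER_NAME
        FROM SALES.EXTRACT_SALES AS S
        LEFT JOIN {qualified('CUSTOMER_TABLE')} AS C
            ON S.Customer_KEY = C.CUSTOMER_SK  -- join key assumed
    """)

The same idea can sit behind a view (point 3) so downstream queries never change.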

I hope this helps! Let me know if you have any other questions or need further assistance.

AndreasB
New Contributor II

Hi @Kaniz_Fatma, thank you for your response! I had a question about the suggestion in point 2. I did some research before this and couldn't find a way to build a function that recognizes the context (pipeline or standalone). Could you expand on how that function could look and/or which commands I would need to use?
