Data Engineering

DLT: "cannot run a cell when connected to pipeline databricks"

mkEngineer
New Contributor III

Hi,

I have several different cells in my notebook that are connected to a DLT pipeline. Why are some cells skipped while others aren't?

I get the message "cannot run a cell when connected to the pipeline Databricks" when I try to run a cell while connected to the pipeline compute rather than a serverless/all-purpose cluster. Why does this happen? My goal is to use the cells in the notebook that aren't being run, because those cells refresh my Power BI workspace.

 

 

1 ACCEPTED SOLUTION


TakuyaOmi
Contributor III

Hi, @mkEngineer 

When working with Delta Live Tables (DLT) in Databricks, you cannot run individual cells interactively as you would in a standard Databricks notebook.

DLT Pipeline Behavior:

  • Delta Live Tables notebooks are executed as part of a managed pipeline. When you attach your notebook to a DLT pipeline, the execution of the cells is managed entirely by the pipeline runtime.
  • The pipeline runtime does not allow arbitrary execution of cells. It executes only the cells needed to define the tables, views, or streams declared with the DLT decorators (@dlt.table, @dlt.view, etc.).
  • Cells that do not contribute directly to the DLT pipeline are ignored by the runtime, as are magic commands (e.g., %python, %sql, and %run); the only exception is %pip in a Python notebook. A minimal sketch follows this list.
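
For illustration, here is a minimal sketch (not from the original post) of what the pipeline runtime does and does not execute in a Python DLT notebook. The table name, source path, and the refresh_power_bi_dataset helper are placeholder assumptions.

import dlt
from pyspark.sql import functions as F

# Cell 1: executed by the pipeline, because it defines a DLT dataset.
@dlt.table(name="sales_clean", comment="Cleaned sales data")  # placeholder table name
def sales_clean():
    # `spark` is provided by the pipeline runtime; the path below is a Databricks sample dataset.
    return (
        spark.read.format("json")
        .load("/databricks-datasets/structured-streaming/events")
        .withColumn("ingested_at", F.current_timestamp())
    )

# Cell 2: ignored by the pipeline, because it defines no DLT dataset.
# Imperative side effects such as a Power BI refresh will not run here.
# refresh_power_bi_dataset("my-workspace", "my-dataset")  # hypothetical helper, not a real API

One option for the Power BI refresh is to keep that code in a separate notebook and run it as its own task after the pipeline task in a Databricks job, so it executes on a regular cluster once the pipeline update finishes.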
