Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.
Output Not Displaying in Databricks Notebook on All-Purpose Compute Cluster

Surya-Prathap
New Contributor

Hello All,

I’m encountering an issue where output from commands such as print() and display(df) is not showing up when running notebooks on an All-Purpose Compute cluster.

Cluster Details

Cluster Type: All-Purpose Compute

Runtime Version: 17.3 LTS (includes Apache Spark 4.0.0, Scala 2.13)

Worker Type: Standard_D4ds_v5

Policy: Shared Compute

Issue Description

When executing cells containing PySpark code or loops, for example:

abfss_path = "abfss://fs-dev@stdev01.dfs.core.windows.net/dev/customers-100.csv"
df = spark.read.option("header", "true").csv(abfss_path)  # read CSV with a header row
display(df)  # renders nothing in the cell, even though the read succeeds


…the notebook executes successfully, but no output is shown in the cell, even though commands like df.show() or df.count() return results as expected.
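For reference, the checks that do work on the same cluster (a minimal sketch using the same df as above):

df.show(5)   # plain-text preview prints in the cell as expected
df.count()   # the returned row count renders as the cell result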

Similarly, print() statements sometimes do not render any output.

The issue occurs only on All-Purpose Compute clusters — it works fine on Serverless clusters.

Observed Behavior

The cell shows “Executed” status, but no visible output.

If we add dbutils.notebook.exit() at the end, the returned result appears, but intermediate print() or display() outputs are missing (see the minimal sketch below).
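For illustration, a sketch of that pattern (not our full notebook; the exit value is arbitrary):

print("intermediate output")       # sometimes never appears in the cell
display(df)                        # also missing
dbutils.notebook.exit("finished")  # this returned value does render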

Expected Behavior

The print() statements and display(df) outputs should appear in real time (or at least after cell execution), consistent with behavior observed in Serverless clusters or earlier runtimes.

Request

If there’s a configuration setting, known issue, or recommended workaround to resolve this behavior, please advise.

Thank you for your support!

2 REPLIES

Sahil_Kumar
Databricks Employee

Hi Surya,

Do you face this issue only with DBR 17.3 all-purpose clusters? Did you try with lower DBRs? If not, please try and let me know.

Also, from the Run menu, try “Clear state and outputs,” then re-run the cell on the same cluster to rule out stale REPL/UI state.

Finally, capture a minimal repro and check using this code:

import time

# Incremental prints: if streaming output works, these appear one by one.
print("Start", flush=True)
for i in range(5):
    print(f"step {i}", flush=True)
    time.sleep(0.5)

# Rich table rendering: display() on a small, bounded result.
abfss_path = "abfss://fs-dev@stdev01.dfs.core.windows.net/dev/customers-100.csv"
df = spark.read.option("header", "true").csv(abfss_path)
display(df.limit(20))
print("Done", flush=True)

If this shows “Executed” but no output, check the driver logs to confirm the prints are present there; if they are, the problem is in UI rendering or cell-output handling rather than in the code or compute.
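To make that check unambiguous, you can also write a tagged line to stderr, which the driver log captures (a sketch using only the standard library; the marker text is arbitrary):

import sys

# stderr is captured in the cluster's driver logs (Compute > your cluster > Driver logs).
# If this marker shows up there but not in the cell output, the code ran fine
# and the gap is on the UI/rendering side.
sys.stderr.write("REPRO-MARKER: cell executed to this point\n")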

Surya-Prathap
New Contributor

Hi Sahil, I’ve tried multiple runtime versions (17.3, 17.1, 15.4, and 14.3) and also cleared the state and outputs, but I’m still facing the same issue. The code you shared produces the same result: it doesn’t display the complete table. The table I’m using is just a sample with 100 rows. I’ve tested the same CSV file in another workspace, and it displays all the rows there without any issues.
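As a stopgap, a console-style check confirms all the rows are readable even when display() comes up empty (a sketch using the same df as above; 100 matches the sample size):

df.show(100, truncate=False)   # prints all 100 rows as plain text
df.count()                     # renders as the cell result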
