
Added new columns to a table and now existing jobs are failing

leticialima__
New Contributor III


Hello community! 👋 I’m new to Databricks and currently working on a project structured in Bronze / Silver / Gold layers using Delta Lake and Change Data Feed.

I recently added 3 new columns to a table and initially applied these changes via PySpark SQL commands within our generic job files that handle streaming between layers. I later realized this might not be the best approach, so I dropped and re-added the columns using a Databricks notebook instead.
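Roughly, the change looked like this (the table and column names here are placeholders, not the real ones):

    # Illustrative sketch only -- placeholder table/column names.
    # Adding the new columns from the job code:
    spark.sql("""
        ALTER TABLE bronze.orders
        ADD COLUMNS (col_a STRING, col_b STRING, col_c TIMESTAMP)
    """)

    # Later, dropping them again from a notebook (note that DROP COLUMNS
    # on a Delta table requires column mapping mode 'name'):
    spark.sql("ALTER TABLE bronze.orders DROP COLUMNS (col_a, col_b, col_c)")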

However, now my job is failing without a specific error. The job run shows only the following message:

Cannot read the Python file source_to_bronze_loader.py. Please check driver logs.

Has anyone encountered something similar? What might be causing this issue, and how can I troubleshoot it further?

Any guidance would be appreciated. Thanks!


4 REPLIES

Khaja_Zaffer
Contributor

Hello @leticialima__,

Good day!

Can you please share the error observed in the driver log?

Is it [Errno 13] Permission denied, or No such file or directory? Please let me know which error appears in the driver log.

Thank you.

Hello, I gave up trying to solve it on Saturday, but today, when I checked the failure log of the scheduled job, a new error appeared. The error message links to: https://docs.databricks.com/aws/en/delta/column-mapping

 

[Attachment: Screenshot 2025-08-04 at 08.59.13.png]

Hi @leticialima__ ,

The failure is likely due to a non-additive schema change, such as dropping and re-adding columns. To handle such changes, you can set the schemaTrackingLocation option in your readStream query.
Also, ensure that column mapping is enabled on the table.

https://docs.databricks.com/aws/en/delta/column-mapping#streaming-with-column-mapping-and-schema-cha...
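A minimal sketch of both steps, assuming the stream reads the Change Data Feed of the Bronze table (the table name and paths are placeholders):

    # One-time setup: enable column mapping on the source table
    # (required for dropping/renaming columns; placeholder table name).
    spark.sql("""
        ALTER TABLE bronze.orders SET TBLPROPERTIES (
            'delta.columnMapping.mode' = 'name',
            'delta.minReaderVersion' = '2',
            'delta.minWriterVersion' = '5'
        )
    """)

    # In the streaming job: point schemaTrackingLocation at a directory
    # under the stream's checkpoint location so Delta can record and
    # follow non-additive schema changes (dropped or renamed columns).
    df = (
        spark.readStream
            .option("readChangeFeed", "true")
            .option("schemaTrackingLocation", "/checkpoints/bronze_to_silver/_schema_log")
            .table("bronze.orders")
    )

Note that the stream may still stop once when it first hits the schema change; restarting the job lets it continue with the new schema recorded in the tracking log.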

Thanks!! It worked!