Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Databricks DLT execution issue

CJOkpala
New Contributor II

I am having an issue when trying to do a full refresh of a DLT pipeline.

I am getting the following error:
com.databricks.sql.managedcatalog.UnityCatalogServiceException: [RequestId=97d4fe52-b185-4757-b0b1-113cb96ae0bb ErrorClass=TABLE_ALREADY_EXISTS.RESOURCE_ALREADY_EXISTS] Table 'animal_settings_hie_dim' already exists
 
When I try to read from the table:
select * from aadp_dev.edwa_stream.animal_settings_hie_dim
 

I receive the following error:
[STREAMING_TABLE_OPERATION_INTERNAL_ERROR] Internal error during operation SELECT on Streaming Table: Please file a bug report. SQLSTATE: XX000
 
Could you advise me on how to resolve this?
 
4 REPLIES

Renu_
Contributor III

Hi @CJOkpala, it looks like the error is happening because the table animal_settings_hie_dim already exists. To fix this, start by dropping the existing table:

DROP TABLE IF EXISTS aadp_dev.edwa_stream.animal_settings_hie_dim;

Next, update your DLT pipeline to use the @dlt.create_or_refresh_streaming_table decorator. This allows the pipeline to handle table recreation properly during full refreshes.

@dlt.create_or_refresh_streaming_table(name="animal_settings_hie_dim")
def create_table():
    return dlt.read_stream("source")

Once you've dropped the table and updated the pipeline, run a full refresh. This should recreate the table cleanly and resolve the issue.

CJOkpala
New Contributor II
Thanks for your response, but we could not find any documentation for the @dlt.create_or_refresh_streaming_table decorator.
 
However, we create many tables dynamically using dlt.create_streaming_table(name=table_name), which I doubt would be supported by the decorator.
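For context, the dynamic pattern described above can be sketched roughly as follows. This is a minimal illustration, not the poster's actual code: the table list and the "source" table name are placeholders, and it assumes a standard DLT pipeline environment where the dlt module and a Spark session are available.

```python
import dlt
from pyspark.sql import SparkSession

spark = SparkSession.getActiveSession()

# Illustrative list of dynamically created dimension tables;
# the real pipeline presumably derives these names from metadata.
table_names = ["animal_settings_hie_dim"]

for table_name in table_names:
    # Declare the streaming table target by name, not via a decorator.
    dlt.create_streaming_table(name=table_name)

    # Attach a flow that feeds each target; "source" is a placeholder.
    @dlt.append_flow(target=table_name, name=f"{table_name}_flow")
    def load(table_name=table_name):
        return spark.readStream.table("source")
```

Because the table name is a runtime variable here, a decorator that takes a fixed name argument would not fit this pattern directly, which is the concern raised above.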
 
We did try dropping the tables first, but if we drop a table and run the pipeline again, we receive another error:
java.lang.IllegalStateException: Soft-deleted MV/STs that require changes cannot be undropped directly. If you need to update the target schema of the pipeline or modify the visibility of an MV/ST while also undropping it, please invoke the undrop operation with the original schema and visibility in an update first, before applying the changes in a subsequent update.
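Reading the error message above, one possible recovery path (a hedged sketch following the message's own instruction, not documented steps) is to first restore the streaming table with its original schema and visibility, and only apply the intended changes in a subsequent pipeline update. For a Unity Catalog managed table dropped within the retention window, UNDROP may also help:

```python
from pyspark.sql import SparkSession

spark = SparkSession.getActiveSession()

# Hedged: UNDROP restores a recently dropped Unity Catalog managed
# table (only within the retention window). Per the error message,
# the ST must first be undropped via an update that keeps its
# original schema and visibility before any definition changes.
spark.sql("UNDROP TABLE aadp_dev.edwa_stream.animal_settings_hie_dim")
```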
 
Hope you can help us further.

CJOkpala
New Contributor II

Can someone help me with this further? I am still having the error.

nikhilj0421
Databricks Employee

Are you facing the same issue if you give a different name in the DLT decorator for the table?
