Warehousing & Analytics

TABLE_OR_VIEW_NOT_FOUND of deep clones

adisalj
New Contributor II

Hello community,

We deep-clone data objects from the production catalog to our non-production catalog weekly. The non-production catalog is used to run our DBT transformations, to ensure we're not breaking any production models.

Lately, we have experienced several cases where every table and view in certain schemas throws this error class: TABLE_OR_VIEW_NOT_FOUND. We only started facing these issues very recently.

As a workaround we have deleted the tables and cloned them again, but this is not a viable long-term solution.

Has anybody experienced (or is currently experiencing) similar issues with clones?

Thanks

4 REPLIES

Kaniz_Fatma
Community Manager

Hi, @adisalj. Facing issues with TABLE_OR_VIEW_NOT_FOUND errors after cloning data objects can be frustrating.

Let’s explore some potential reasons and solutions:
Schema Mismatch:

  • Ensure that the schema of the cloned tables and views matches the schema expected by your DBT transformations.
  • Verify that the non-production catalog has the same schema structure as the production catalog.
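
For example, a minimal version of that check in a Databricks notebook (where `spark` is predefined); the catalog, schema, and table names (`prod`, `staging`, `sales`, `orders`) are placeholders, not anything from your setup:

```python
# Compare column definitions of a production table and its clone via Unity
# Catalog's information_schema. All object names below are placeholders.
def columns_of(catalog: str, schema: str, table: str):
    return spark.sql(f"""
        SELECT column_name, data_type
        FROM {catalog}.information_schema.columns
        WHERE table_schema = '{schema}' AND table_name = '{table}'
        ORDER BY ordinal_position
    """).collect()

if columns_of("prod", "sales", "orders") != columns_of("staging", "sales", "orders"):
    print("Schema drift between prod.sales.orders and staging.sales.orders")
```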

Dependency Order:

  • Check if there are dependencies between tables/views within the schema. Sometimes, a table or view might reference another one.
  • Ensure that the order of cloning considers these dependencies. Cloning tables/views in the correct order can prevent errors.
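
One way to respect that ordering, sketched with placeholder catalog and schema names: clone the tables first, then re-create the views, since DEEP CLONE applies to tables and a view only resolves once the tables it references exist:

```python
# Clone tables before re-creating views so view dependencies resolve.
# "prod", "staging", and the schema name "sales" are placeholders.
tables = spark.sql("""
    SELECT table_name FROM prod.information_schema.tables
    WHERE table_schema = 'sales' AND table_type IN ('MANAGED', 'EXTERNAL')
""").collect()
for row in tables:
    spark.sql(f"CREATE OR REPLACE TABLE staging.sales.{row.table_name} "
              f"DEEP CLONE prod.sales.{row.table_name}")
# Views cannot be deep-cloned; re-create them afterwards (e.g. from
# prod.information_schema.views.view_definition) once the tables exist.
```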

Permissions and Access:

  • Confirm that the user running the DBT transformations has the necessary permissions to access the cloned tables/views.
  • Verify that the non-production catalog has the same access rights as the production catalog.
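
A sketch of that kind of check; the principal name `dbt_service_principal` is hypothetical, as are the catalog/schema names:

```python
# Inspect existing grants, then grant the privileges the DBT user needs.
spark.sql("SHOW GRANTS ON SCHEMA staging.sales").show(truncate=False)
spark.sql("GRANT USE CATALOG ON CATALOG staging TO `dbt_service_principal`")
spark.sql("GRANT USE SCHEMA ON SCHEMA staging.sales TO `dbt_service_principal`")
spark.sql("GRANT SELECT ON SCHEMA staging.sales TO `dbt_service_principal`")
```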

Metadata Refresh:

  • After cloning, refresh the metadata in your non-production catalog. This ensures that the catalog is aware of the newly cloned objects.
  • Some platforms might require manual metadata refresh or cache clearing.
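
On Databricks, that refresh can look like this (the table name is a placeholder):

```python
# Invalidate cached metadata/data for a cloned table so the session picks up
# the newly cloned version.
spark.sql("REFRESH TABLE staging.sales.orders")
# Broader reset: clear everything cached in the current session.
spark.catalog.clearCache()
```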

Logging and Debugging:

  • Enable detailed logging during the cloning process. Check the logs for any specific error messages related to TABLE_OR_VIEW_NOT_FOUND.
  • Investigate the logs to identify which specific table or view is causing the issue.
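
A minimal sketch of such logging around the clone loop (table and catalog names are placeholders):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("clone")

# Wrap each clone so a failure is logged with the object's name rather than
# aborting the whole run unnoticed.
for table in ["orders", "customers"]:
    try:
        spark.sql(f"CREATE OR REPLACE TABLE staging.sales.{table} "
                  f"DEEP CLONE prod.sales.{table}")
        log.info("Cloned %s", table)
    except Exception as exc:
        log.error("Clone failed for %s: %s", table, exc)
```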

Automated Validation:

  • Consider setting up automated validation checks after cloning. For example, run a script or query that verifies the existence and structure of the cloned objects.
  • If any discrepancies are found, trigger an alert or notification.
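
For instance, a post-clone check along these lines (the expected table list and names are placeholders; three-part `tableExists` lookups assume a Unity Catalog-enabled runtime):

```python
# Verify every expected object exists and is readable after cloning.
expected = ["orders", "customers"]
missing = [t for t in expected
           if not spark.catalog.tableExists(f"staging.sales.{t}")]
if missing:
    raise RuntimeError(f"Cloned objects missing: {missing}")
for t in expected:
    n = spark.table(f"staging.sales.{t}").count()
    print(f"staging.sales.{t}: {n} rows")
```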

Database Engine Specifics:

  • Different database engines (e.g., PostgreSQL, MySQL, SQL Server) might have specific behavior during cloning.
  • Research engine-specific documentation or community forums to see if there are known cloning-related issues.

Version Control and Rollbacks:

  • Maintain version control for your catalog objects. If issues arise, you can roll back to a known working state.
  • Regularly test the cloning process in a non-production environment to catch any issues early.
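
Since the clones are Delta tables, their own version history can serve as the rollback mechanism; a sketch (the table name and version number are illustrative):

```python
# Inspect a table's Delta history, then roll back to a known good version.
spark.sql("DESCRIBE HISTORY staging.sales.orders").show(truncate=False)
spark.sql("RESTORE TABLE staging.sales.orders TO VERSION AS OF 12")
```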

Remember that debugging such issues often involves a combination of trial and error, thorough investigation, and collaboration with your database administrators or platform support. If the problem persists, contact the Databricks support team for further assistance. 🛠🔍

adisalj
New Contributor II

Hi Kaniz,

The issue only persisted for a certain timeframe, and everything is now working as expected. What worked was a full refresh of the clones, i.e. dropping and re-creating them instead of using CREATE OR REPLACE (sketched below).
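
Roughly, with placeholder object names:

```python
# Full refresh: drop the stale clone, then re-create it, instead of relying
# on CREATE OR REPLACE. Object names are placeholders.
spark.sql("DROP TABLE IF EXISTS staging.sales.orders")
spark.sql("CREATE TABLE staging.sales.orders DEEP CLONE prod.sales.orders")
```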

I will investigate in detail if this error occurs again.

Best,
Adis

karthik_p
Esteemed Contributor

@adisalj I have a small question about how you are handling the deep-cloned data in the target: are you creating managed tables with the data that is being cloned into the target? Could you please post a sample query that you are using between your catalogs to do the deep clone?

I am facing an issue while trying to map the data I got from the deep clone within the target (e.g., using the same source table DDL in the target); it is only creating an empty table with no data.

adisalj
New Contributor II

Hi karthik_p,

We have a Python notebook that iterates over the schemas in the production catalog, excluding certain schemas (such as information_schema) from the iteration.

The actual deep clone command looks like this: `CREATE OR REPLACE TABLE {target} DEEP CLONE {source}`. A simplified sketch of the notebook is below.
We use deep clones because we use the staging catalogs for testing; otherwise the DBT transformations don't work. Have a look at the documentation about Delta clones.
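
A simplified sketch of that notebook; the catalog names (`prod`, `staging`) and the exclusion set are placeholders rather than our exact code:

```python
# Iterate over production schemas, skip excluded ones, and deep-clone each
# table into the staging catalog. All names here are placeholders.
EXCLUDED = {"information_schema"}

schemas = [
    r.schema_name
    for r in spark.sql(
        "SELECT schema_name FROM prod.information_schema.schemata"
    ).collect()
    if r.schema_name not in EXCLUDED
]

for schema in schemas:
    spark.sql(f"CREATE SCHEMA IF NOT EXISTS staging.{schema}")
    tables = spark.sql(f"""
        SELECT table_name FROM prod.information_schema.tables
        WHERE table_schema = '{schema}' AND table_type IN ('MANAGED', 'EXTERNAL')
    """).collect()
    for row in tables:
        spark.sql(f"CREATE OR REPLACE TABLE staging.{schema}.{row.table_name} "
                  f"DEEP CLONE prod.{schema}.{row.table_name}")
```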

Best,
Adis
