Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Comparing two SQL notebooks from different Environments

AtulMathur
New Contributor II

Hello Everyone,

I am part of a data testing team which is responsible for verifying the data trends and insights generated from different sources. There are multiple schemas and tables in our platform. We use SQL queries in notebooks to verify all enrichment, mapping and aggregation tests. Before we go live in any release, we do a dry run in the test environment. This involves a critical step of importing the production data schema into the test environment.

Here is my problem: I want to verify that this step was successful and that during the data copy from Prod to the Test environment we did not miss any tables, schemas or any of the data within them. My idea was to create two SQL notebooks - one in Prod, one in Test. Each will list all the tables and query the number of rows, along with a few distinct checks.

What is the best and fastest way to do this comparison?


2 REPLIES

Walter_C
Databricks Employee

List All Tables: In each notebook, write a SQL query to list all tables in the respective environment. You can use a query like: 

SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'your_schema_name';
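
If the copy spans multiple schemas, the same query can be broadened to list every schema/table pair in one pass. A minimal Python sketch, assuming the notebook's built-in spark session and a Unity Catalog information_schema:

# List every schema/table pair in the current catalog
all_tables = spark.sql("""
    SELECT table_schema, table_name
    FROM information_schema.tables
    ORDER BY table_schema, table_name
""")
all_tables.show(truncate=False)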

Count Rows and Perform Distinct Checks: For each table, write SQL queries to count the number of rows and perform a few distinct checks. For example:

SELECT COUNT(*) AS row_count
FROM your_table_name;

SELECT COUNT(DISTINCT your_column_name) AS distinct_count
FROM your_table_name;
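
Once each environment has produced its counts, the results still have to be compared across workspaces. One way, sketched below on the assumption that each notebook writes its counts out to a shared location (the /tmp/..._counts.csv paths are placeholders, not real paths), is a full outer join that flags tables missing on either side as well as mismatched counts:

# Load the exported counts from each environment (placeholder paths)
prod = spark.read.option("header", True).csv("/tmp/prod_counts.csv")
test = spark.read.option("header", True).csv("/tmp/test_counts.csv")

# A NULL on either side of the full outer join means the table is missing there
diff = (
    prod.alias("p")
    .join(test.alias("t"), "table_name", "full_outer")
    .selectExpr(
        "table_name",
        "CAST(p.row_count AS BIGINT) AS prod_count",
        "CAST(t.row_count AS BIGINT) AS test_count",
    )
    .where("prod_count IS NULL OR test_count IS NULL OR prod_count <> test_count")
)
diff.show(truncate=False)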

AtulMathur
New Contributor II

Thank you Walter. I did think about doing it one by one, but that did not seem a very efficient approach. I then found a way to do it in Python by iterating through a dataframe of table names.
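
The iteration described above could look something like the sketch below (not the original code; your_schema_name is the placeholder from the earlier reply, and collect() assumes the table list is small enough to bring to the driver):

# Collect the table names once, then run a COUNT(*) per table
tables = [r.table_name for r in spark.sql("""
    SELECT table_name
    FROM information_schema.tables
    WHERE table_schema = 'your_schema_name'
""").collect()]

results = []
for t in tables:
    n = spark.sql(f"SELECT COUNT(*) AS c FROM your_schema_name.`{t}`").first()["c"]
    results.append((t, n))

# One dataframe of (table_name, row_count) for this environment
counts = spark.createDataFrame(results, "table_name STRING, row_count LONG")
counts.show(truncate=False)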
