11-04-2024 08:48 PM
Hello Everyone,
I am part of a data testing team that is responsible for verifying data trends and insights generated from different sources. There are multiple schemas and tables in our platform, and we use SQL queries in notebooks to verify all enrichment, mapping, and aggregation tests. Before we go live with any release, we do a dry run in the test environment. This involves a critical step: importing the production data schema into the test environment.
Here is my problem: I want to verify that this step was successful and that during the data copy from Prod to the Test environment we did not miss any tables, schemas, or any data within them. My idea was to create two SQL notebooks, one in Prod and one in Test, each listing all the tables and querying the row counts plus a few distinct checks.
What is the best and fastest way to do this comparison?
Accepted Solutions
11-05-2024 06:56 AM
Thank you, Walter. I did think about doing it one by one, but that did not turn out to be a very efficient way. I then found a way to do it in Python by iterating through a DataFrame of table names.
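For anyone finding this later, here is a minimal sketch of that iteration approach. It assumes a Spark notebook where spark is predefined (as on Databricks) and two hypothetical catalogs, prod_catalog and test_catalog, visible from the same workspace; all of those names are placeholders, not the actual environments.

# Build a DataFrame of table names from the Prod listing, then loop
# over it, counting rows on both sides. prod_catalog, test_catalog and
# your_schema_name are placeholder names for illustration.
tables_df = spark.sql("""
    SELECT table_schema, table_name
    FROM prod_catalog.information_schema.tables
    WHERE table_schema = 'your_schema_name'
""")

mismatches = []
for row in tables_df.collect():
    fq_name = f"{row.table_schema}.{row.table_name}"
    prod_count = spark.table(f"prod_catalog.{fq_name}").count()
    test_count = spark.table(f"test_catalog.{fq_name}").count()
    if prod_count != test_count:
        mismatches.append((fq_name, prod_count, test_count))

for name, prod_count, test_count in mismatches:
    print(f"Row count mismatch in {name}: prod={prod_count}, test={test_count}")

If Prod and Test are separate workspaces rather than separate catalogs, run the counting loop in each environment, save the results, and diff the saved outputs instead.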
11-05-2024 06:09 AM
List All Tables: In each notebook, write a SQL query to list all tables in the respective environment. You can use a query like:
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'your_schema_name';
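If you end up driving the comparison from Python, as in the accepted reply above, the same listing is available through the PySpark catalog API; a small sketch, with the schema name as the same placeholder used in the query above:

# spark.catalog.listTables returns metadata for every table in a schema;
# 'your_schema_name' is a placeholder, and spark is the session object
# a notebook provides by default.
for t in spark.catalog.listTables("your_schema_name"):
    print(t.name)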
Count Rows and Perform Distinct Checks: For each table, write SQL queries to count the number of rows and perform a few distinct checks. For example:
SELECT COUNT(*) AS row_count
FROM your_table_name;
SELECT COUNT(DISTINCT your_column_name) AS distinct_count
FROM your_table_name;
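The queries above produce per-environment numbers; to turn them into an actual comparison, one option is to diff the two listings directly, assuming both environments are reachable as catalogs from a single workspace (prod_catalog and test_catalog below are placeholder names):

# Sketch: list tables present in Prod but missing from Test by diffing
# the two information_schema listings with EXCEPT. Catalog and schema
# names are placeholders for illustration.
missing = spark.sql("""
    SELECT table_schema, table_name
    FROM prod_catalog.information_schema.tables
    WHERE table_schema = 'your_schema_name'
    EXCEPT
    SELECT table_schema, table_name
    FROM test_catalog.information_schema.tables
    WHERE table_schema = 'your_schema_name'
""")
missing.show(truncate=False)

If the two environments live in separate workspaces, export each listing (for example to CSV) and diff the files instead.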

