PostgreSQL – Ensuring no unexpected table changes after a risky operation is performed

postgresql postgresql-11

Sometimes I want to make sure that after a risky operation, such as moving a database to another server or applying schema changes via migration scripts, no data corruption or loss (due to human error rather than the DBMS itself) has occurred in tables that are not supposed to change.

I want a quick and easy way to check that all the data in those tables is safe and sound, so I thought it would be a good idea to hash each table in the current database with a cryptographically secure hash function. Afterwards, once the operation has been performed, I could recalculate the hashes for each table and check for differences via custom scripts.

The question is: how can I hash each table and its data so that I can store the hash result?

For example, if my database has the following tables:

table1
table2
table3

If I keep a hash for each table, then by comparing the hashes I can quickly see which tables have changed. If a hash differs for a table that I expect to be unchanged, I can suspect data corruption.

My database layer is PostgreSQL 11.

Best Answer

psql -Atq -c 'SELECT * FROM atable ORDER BY id' | md5sum
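A minimal sketch of turning that one-liner into the before/after comparison the question describes. To keep the example runnable without a live database, `dump_table` is stubbed with `printf`; in practice it would run something like `psql -Atq -c "SELECT * FROM $1 ORDER BY id"` (this assumes every table has a stable sort key such as `id` — the `ORDER BY` matters, because the dump must be deterministic for the hash to be comparable):

```shell
#!/bin/sh
# Stub standing in for: psql -Atq -c "SELECT * FROM $1 ORDER BY id"
dump_table() {
    case $1 in
        table1) printf '1|alice\n2|bob\n' ;;
        table2) printf '1|red\n2|blue\n' ;;
    esac
}

# Hash a table's ordered dump; keep only the hex digest from md5sum.
hash_table() {
    dump_table "$1" | md5sum | cut -d' ' -f1
}

# Snapshot the hashes before the risky operation.
before_t1=$(hash_table table1)
before_t2=$(hash_table table2)

# ... risky operation happens here; we simulate corruption of table2
# by redefining the stub with one changed row ...
dump_table() {
    case $1 in
        table1) printf '1|alice\n2|bob\n' ;;
        table2) printf '1|red\n2|GREEN\n' ;;
    esac
}

# Recompute and compare: a differing hash flags a changed table.
[ "$before_t1" = "$(hash_table table1)" ] && echo "table1 OK" || echo "table1 CHANGED"
[ "$before_t2" = "$(hash_table table2)" ] && echo "table2 OK" || echo "table2 CHANGED"
```

In a real run you would write each `hash  tablename` pair to a file before the operation, regenerate the file afterwards, and `diff` the two files; any differing line points at a table to investigate. Note that MD5 is fine for detecting accidental corruption, but if you are worried about deliberate tampering, a stronger digest such as `sha256sum` is a drop-in replacement.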