SQL Server – Best Practices for Safely Deleting a Database Permanently

delete, maintenance, sql-server

We have an "organic" environment, meaning people piled code on code for ten years with minimal oversight or documentation. The server I use has several databases which I believe are no longer being used; I'd love to delete them and leave just the three I actually use.

At the reckless extreme, I could disable these databases and wait for someone to scream; at the other, I could leave them running forever "just in case". What steps have you found valuable in identifying whether a database is still being used, and how?

Also, what steps would you recommend to ensure that, as one moves forward in disabling systems, they remain conveniently reversible for a period of time (e.g., renaming objects rather than deleting them outright)?

Thanks!

Best Answer

You also want to check the datetime stamps on every table. Search the system metadata for every table, order that list by the last-updated datetime, and display the output in descending order. You could also watch each table's size for even slight changes.

For example, in MySQL 5.x, you have information_schema.tables which looks like this:

mysql> desc information_schema.tables;
+-----------------+---------------------+------+-----+---------+-------+
| Field           | Type                | Null | Key | Default | Extra |
+-----------------+---------------------+------+-----+---------+-------+
| TABLE_CATALOG   | varchar(512)        | NO   |     |         |       |
| TABLE_SCHEMA    | varchar(64)         | NO   |     |         |       |
| TABLE_NAME      | varchar(64)         | NO   |     |         |       |
| TABLE_TYPE      | varchar(64)         | NO   |     |         |       |
| ENGINE          | varchar(64)         | YES  |     | NULL    |       |
| VERSION         | bigint(21) unsigned | YES  |     | NULL    |       |
| ROW_FORMAT      | varchar(10)         | YES  |     | NULL    |       |
| TABLE_ROWS      | bigint(21) unsigned | YES  |     | NULL    |       |
| AVG_ROW_LENGTH  | bigint(21) unsigned | YES  |     | NULL    |       |
| DATA_LENGTH     | bigint(21) unsigned | YES  |     | NULL    |       |
| MAX_DATA_LENGTH | bigint(21) unsigned | YES  |     | NULL    |       |
| INDEX_LENGTH    | bigint(21) unsigned | YES  |     | NULL    |       |
| DATA_FREE       | bigint(21) unsigned | YES  |     | NULL    |       |
| AUTO_INCREMENT  | bigint(21) unsigned | YES  |     | NULL    |       |
| CREATE_TIME     | datetime            | YES  |     | NULL    |       |
| UPDATE_TIME     | datetime            | YES  |     | NULL    |       |
| CHECK_TIME      | datetime            | YES  |     | NULL    |       |
| TABLE_COLLATION | varchar(32)         | YES  |     | NULL    |       |
| CHECKSUM        | bigint(21) unsigned | YES  |     | NULL    |       |
| CREATE_OPTIONS  | varchar(255)        | YES  |     | NULL    |       |
| TABLE_COMMENT   | varchar(2048)       | NO   |     |         |       |
+-----------------+---------------------+------+-----+---------+-------+
21 rows in set (0.01 sec)

The UPDATE_TIME column records when an INSERT, UPDATE, or DELETE was last applied to the table, so it reflects writes rather than reads. You could run queries like these to find out when each database last saw activity:

Last time a table was accessed in each database:

SELECT table_schema,MAX(update_time) last_accessed
FROM information_schema.tables
WHERE table_schema NOT IN ('information_schema','mysql')
AND update_time IS NOT NULL
GROUP BY table_schema;

Last time a table was accessed in any database:

SELECT MAX(update_time) last_accessed FROM information_schema.tables
WHERE table_schema NOT IN ('information_schema','mysql');

Last 10 dates a table was accessed:

SELECT DATE(update_time) last_accessed,COUNT(*) access_count
FROM information_schema.tables
WHERE table_schema NOT IN ('information_schema','mysql')
AND update_time IS NOT NULL
GROUP BY last_accessed
ORDER BY last_accessed DESC
LIMIT 10;
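
Since table size was mentioned above as another signal, here is a minimal sketch against the same information_schema.tables view. Snapshot it periodically and diff the results to catch even slight growth; a schema whose size never changes is another hint that it is idle:

SELECT table_schema,SUM(data_length + index_length) total_bytes
FROM information_schema.tables
WHERE table_schema NOT IN ('information_schema','mysql')
GROUP BY table_schema
ORDER BY total_bytes DESC;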

These are just a few examples of how to get such metadata from MySQL. I'm sure Oracle and SQL Server have similar or better methods.
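
For the SQL Server side of this question specifically, a rough equivalent (a sketch, not a drop-in recipe) is sys.dm_db_index_usage_stats, available since SQL Server 2005. Unlike UPDATE_TIME it also captures reads, but it is cleared on every instance restart, so it only reflects activity since the last restart; a database that does not appear in it at all has had no recorded index activity in that window:

SELECT DB_NAME(database_id) database_name,
MAX(last_user_seek) last_seek,
MAX(last_user_scan) last_scan,
MAX(last_user_lookup) last_lookup,
MAX(last_user_update) last_update
FROM sys.dm_db_index_usage_stats
WHERE database_id > 4 -- skip master, tempdb, model, msdb
GROUP BY database_id
ORDER BY last_update DESC;
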

Once you are sure of how often or seldom a database (or schema) is accessed, you should manually dump/export aged databases along with copies of the schema itself apart from the data. Please excuse that my answer is not DB agnostic. SQLServer and Oracle DBAs should voice their answers here as well, since the concept of a schema being a collection within a database instance is blurred in MySQL but very strictly followed in SQLServer and Oracle.