SQL Server – Inputs on handling large volumes of data for specific tables in SQL Server Standard

partitioning, sql-server-2005

I have a large OLTP ERP database with 10 million records in a few of its tables. The database is about 10 GB in size.
I have a few critical transaction tables, such as:
1. Payments
2. Cheques
Each of these tables has more than 30 columns and at least 10 million records.

I have an identity column as the primary key in all these tables, and each table also has a transaction date column (which has a non-clustered index). Recently I found that INSERTs, UPDATEs and DELETEs on these tables are taking considerable time. The users mostly need access to only the last 3 years of data (except for reporting). I would like to optimize this and have come up with two approaches:
1. Archiving all records older than 3 years by moving them to a new database.
2. Creating staged tables for each year, moving the data into them, and performing CRUD operations through an indexed view (a union of all the staged tables).

Since I'm not using SQL Server Enterprise, I cannot create partitioned tables.

I'm thinking the second method is more appropriate than the first, since maintenance is easier. Please let me know if I'm right, and share any other ideas.

Thanks in advance.

Best Answer

Use an archive table and a live table.
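If it helps, here's a minimal sketch of the batched move into the archive. The names dbo.Payments, dbo.PaymentsArchive and TransactionDate are assumptions based on the question; adjust them to your schema. It assumes the archive table has the same column layout, no triggers or foreign keys (a requirement of OUTPUT INTO), and a plain int ID column rather than IDENTITY so the original keys are preserved:

```sql
-- Sketch: move rows older than 3 years into the archive in small batches,
-- so each batch is a short transaction and the log stays manageable.
DECLARE @cutoff datetime;
SET @cutoff = DATEADD(YEAR, -3, GETDATE());

DECLARE @rows int;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    -- DELETE ... OUTPUT moves each batch atomically:
    -- every deleted row lands in the archive table.
    DELETE TOP (5000)
    FROM dbo.Payments
    OUTPUT DELETED.* INTO dbo.PaymentsArchive
    WHERE TransactionDate < @cutoff;

    SET @rows = @@ROWCOUNT;  -- 0 when nothing is left to archive
END;
```

Run it off-hours as a scheduled job; once the backlog is cleared, each run only has to move a day's worth of rows.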

Don't use a "table per year": this quickly becomes messy, and you'll end up needing dynamic SQL when your UNION + view becomes a bottleneck.

10 million rows isn't a lot though, and 30 columns is quite wide. So, some options:

• review data types to reduce data on disk (type-review sketch after this list).
    • Do you need nvarchar?
    • Can you use smalldatetime?
    • Do you have int where tinyint will do?
• Do you use (n)varchar where the data is fixed length?
• do you have regular index and statistics maintenance? (maintenance sketch below)
• do you have incorrect or missing indexes? (missing-index sketch below)
  • have you looked at your most expensive queries? (expensive-query sketch below)
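
On the data-type review: a metadata query like the one below (dbo.Payments is again an assumed name; this only reads catalog views) lists each column's declared type and storage size, which makes oversized nvarchar/int columns easy to spot:

```sql
-- List every column with its declared type and max length (in bytes;
-- nvarchar costs 2 bytes per character) to find shrink candidates.
SELECT c.name       AS column_name,
       t.name       AS type_name,
       c.max_length,
       c.is_nullable
FROM sys.columns AS c
JOIN sys.types   AS t ON t.user_type_id = c.user_type_id
WHERE c.object_id = OBJECT_ID('dbo.Payments')
ORDER BY c.max_length DESC;
```

Shrinking a column is then an ALTER TABLE ... ALTER COLUMN, but test on a copy first, since it can rewrite the table.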
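On maintenance: a sketch for checking fragmentation and then reorganizing and refreshing statistics, with dbo.Payments as a placeholder; sys.dm_db_index_physical_stats is available from SQL Server 2005:

```sql
-- Check fragmentation first (LIMITED mode is the cheapest scan).
SELECT i.name AS index_name,
       s.avg_fragmentation_in_percent,
       s.page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.Payments'), NULL, NULL, 'LIMITED') AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id;

-- Common rule of thumb: REORGANIZE at 5-30% fragmentation, REBUILD above that.
ALTER INDEX ALL ON dbo.Payments REORGANIZE;
UPDATE STATISTICS dbo.Payments WITH FULLSCAN;
```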
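On missing indexes: the missing-index DMVs (SQL Server 2005+) record the optimizer's suggestions since the last restart. Treat them as hints to review, not commands; overlapping suggestions are common:

```sql
-- Missing-index suggestions, roughly ordered by likely benefit.
SELECT d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks,
       s.avg_user_impact
FROM sys.dm_db_missing_index_details     AS d
JOIN sys.dm_db_missing_index_groups      AS g ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s ON s.group_handle = g.index_group_handle
ORDER BY s.user_seeks * s.avg_user_impact DESC;
```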
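And for the most expensive queries, sys.dm_exec_query_stats (also 2005+) aggregates per-statement cost for whatever is in the plan cache:

```sql
-- Top 10 cached statements by total CPU time.
-- total_worker_time is in microseconds.
SELECT TOP (10)
       qs.execution_count,
       qs.total_worker_time,
       qs.total_logical_reads,
       SUBSTRING(st.text,
                 qs.statement_start_offset / 2 + 1,
                 (CASE qs.statement_end_offset
                      WHEN -1 THEN DATALENGTH(st.text)
                      ELSE qs.statement_end_offset
                  END - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```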