Sql-server – SQL OLTP Filegroups – Lookup Tables

Tags: filegroups, performance, performance-tuning, sql-server

According to several resources, in dimensional warehousing it is considered good practice to place dimension tables in different filegroups from fact tables. In OLTP, is it likewise a good idea to place lookup tables in separate filegroups from transactional tables? OLTP and data warehousing are different workloads, and lookup tables are not the same thing as dimension tables (relational modeling vs. Kimball). I just wanted to confirm this idea.

Microsoft FileGroups

SQL Saturday

• "Put different tables used in the same join queries in different filegroups. This will improve performance, because of parallel disk I/O searching for joined data.

• To maximize performance, create files or filegroups on as many different available disks as possible. Put objects that compete heavily for space in different filegroups.

• Put heavily accessed tables and the nonclustered indexes that belong to those tables on different filegroups. This will improve performance, because of parallel I/O if the files are located on different physical disks."
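The advice above can be sketched in T-SQL. This is a minimal illustration with hypothetical database, path, and object names (`Sales`, `E:\Data\...`, `dbo.Country`); adjust them to your environment, and note that the file paths only help if they really sit on separate physical disks:

```sql
-- Add a filegroup backed by a file on a separate disk (hypothetical names/paths)
ALTER DATABASE Sales ADD FILEGROUP FG_Lookup;
ALTER DATABASE Sales ADD FILE
    (NAME = Sales_Lookup1,
     FILENAME = 'E:\Data\Sales_Lookup1.ndf',
     SIZE = 512MB, FILEGROWTH = 256MB)
TO FILEGROUP FG_Lookup;

-- Place a lookup table on the new filegroup via its clustered index
CREATE TABLE dbo.Country
(
    CountryId   int          NOT NULL,
    CountryName nvarchar(60) NOT NULL,
    CONSTRAINT PK_Country PRIMARY KEY CLUSTERED (CountryId)
) ON FG_Lookup;
```

A nonclustered index can likewise be created `ON` a different filegroup than its base table, which is what the third bullet is describing.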

Best Answer

In theory, absolutely. In practice, it depends and most likely not.

Data warehouses are far more likely to do full scans and heavy disk reads/writes, simply by the nature of a DW. Optimizing a DW's I/O layout can matter a lot: it helps get reports out on time, saves money by avoiding expensive disks you don't actually need, and reduces wear and tear from machines burning IOPS to fetch the same data repeatedly. A DW will often do one big table scan (or a few) to pull all the relevant data, which pushes a lot of pages out to disk.

A lot of OLTP workloads can be run out of memory in a relatively managed environment where bad ad hoc queries aren't causing buffer pool pressure. If, for some terrible reason, your OLTP environment is doing a lot of table scans, you see buffer pool pressure from table data, with high disk reads/sec and disk writes/sec in perfmon, AND you're not able to fix the code, then this would be helpful.
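Before reaching for filegroups, it's worth checking which objects actually dominate the buffer pool. A sketch using the standard DMVs (run in the database you care about; this counts in-row pages only, so LOB allocations are not included):

```sql
-- Top objects by buffer pool memory in the current database
SELECT TOP (10)
    OBJECT_NAME(p.object_id) AS table_name,
    COUNT(*) * 8 / 1024      AS buffer_mb   -- 8 KB pages -> MB
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au
    ON bd.allocation_unit_id = au.allocation_unit_id
JOIN sys.partitions AS p
    ON au.container_id = p.hobt_id          -- in-row/row-overflow units
WHERE bd.database_id = DB_ID()
GROUP BY p.object_id
ORDER BY buffer_mb DESC;
```

If the tables you were planning to separate barely register here, splitting them into their own filegroups is unlikely to buy you anything.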

Note, though, that customization comes with overhead. The additional complexity has to be properly documented, perhaps automated, and maintained. If you don't see any benefit, why introduce the extra complexity?