SQL Server – CPU Time Slower in SQL Server 2014 than in SQL Server 2008

performance, sql-server, sql-server-2014, sql-server-2008

We’re upgrading a virtualized database server from an old SQL Server 2008 R2 Standard instance (VMware) to a shiny new SQL Server 2014 Standard server (Hyper-V). We were doing some performance testing and, to our surprise, CPU times seem to be worse on the new system than on the old, creaking one.
These are the systems:

Old Server:

  • Runs in a virtual machine on VMware ESX 5.1
  • Windows Server 2008 R2, 32-bit
  • SQL Server 2008 R2
  • Quad-core Xeon E5-2620, 2 GHz
  • 5600 MB RAM

New Server:

  • Runs in a virtual machine on Hyper-V (UEFI Release v1.0)
  • Windows Server 2012 R2
  • SQL Server 2014
  • Quad-core Xeon E5-2640, 2.6 GHz
  • 32766 MB RAM

A little history of what we did:

  • Two different benchmark tests show that CPU is about 80% faster, memory access is about 100% faster and disk I/O is about 1000% faster on the new server.
  • The test queries use multiple joins over non-clustered indexes.
  • On both servers the test database is exactly the same.
  • On both servers the power plan is set to High Performance, and we verified with CPU-Z that the processor always runs at maximum speed.
  • On both servers “lock pages in memory” is enabled.
  • On both servers the statistics are up to date and the query plans are identical when executing the test queries.
  • To make sure the plans are always the same, we execute the queries on SQL Server 2014 under compatibility level 100 (SQL Server 2008); without this compatibility level the outcome is the same.
  • The max server memory option is left at its default value on both servers (2147483647) and we have also checked parallelism (max degree of parallelism and cost threshold for parallelism are set the same on both); verification queries are sketched after this list.
  • The new server is dedicated (nothing else runs on it).
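
Most of these settings can be confirmed from T-SQL on both instances; a minimal verification sketch (the database name TestDB is a placeholder for the test database):

-- compatibility level of the test database
SELECT name, compatibility_level FROM sys.databases WHERE name = 'TestDB';

-- max server memory, max degree of parallelism and cost threshold currently in use
SELECT name, value_in_use
FROM sys.configurations
WHERE name IN ('max server memory (MB)', 'max degree of parallelism', 'cost threshold for parallelism');

-- non-zero only when the instance is actually using locked pages
SELECT locked_page_allocations_kb FROM sys.dm_os_process_memory;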

After all these checks, cached queries (whose duration consists purely of CPU time) are still faster on the old server.

Test query:

SELECT ROW_NUMBER() OVER(ORDER BY c.field1, c.field2) AS RowNumber, a.t1_t3ID, c.field1, c.field2, a.t1_t2ID, b.field1, a.BeginDate, a.EndDate, c.field3
FROM table1 a
INNER JOIN table3 c ON a.t1_t3ID = c.t3_PKID
INNER JOIN table2 b ON a.t1_t2ID = b.t2_PKID
WHERE '20140101' BETWEEN a.BeginDate AND a.EndDate

Query Execution Plan (it is the same for both servers, with the same percentages):

[Query execution plan screenshot omitted]
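
The timings below are in the format reported by SET STATISTICS TIME ON. A minimal sketch of how such per-execution timings can be captured (run the test query from above several times so that the plan is cached and the figures reflect only CPU work on a warm cache):

SET STATISTICS TIME ON;
-- execute the test query here; the first run compiles and caches the plan,
-- subsequent runs report the CPU and elapsed time of the cached plan
SET STATISTICS TIME OFF;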

The Results for CPU time on both servers:

Old server:

  • CPU time = 93 ms, elapsed time = 65 ms.
  • CPU time = 94 ms, elapsed time = 99 ms.
  • CPU time = 94 ms, elapsed time = 70 ms.
  • CPU time = 93 ms, elapsed time = 63 ms.
  • CPU time = 93 ms, elapsed time = 67 ms.

New server:

  • CPU time = 125 ms, elapsed time = 309 ms.
  • CPU time = 141 ms, elapsed time = 304 ms.
  • CPU time = 139 ms, elapsed time = 288 ms.
  • CPU time = 156 ms, elapsed time = 277 ms.
  • CPU time = 142 ms, elapsed time = 323 ms.

Note: we have also checked that CONVERT_IMPLICIT is not affecting performance in this case.
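
One way to rule such a conversion out in this predicate (a sketch, assuming BeginDate and EndDate are of type date) is to pass the search value with a matching data type:

DECLARE @d date = '20140101';  -- same type as the columns, so no implicit conversion in the predicate

SELECT a.t1_t3ID, a.BeginDate, a.EndDate
FROM table1 a
WHERE @d BETWEEN a.BeginDate AND a.EndDate;

Note that switching from a literal to a variable can change cardinality estimation, so the plans should be compared as well.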

We also observed that if we force the query to use only one processor core on the new server, CPU time stays roughly the same. If we do the same on the old server, the execution plan switches to table scans and CPU time increases to about 800 ms.
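
For reference, a single statement can be limited to one core with a query hint rather than an instance-wide setting; a sketch based on the test query above:

SELECT a.t1_t3ID, c.field1, c.field2, b.field1, a.BeginDate, a.EndDate
FROM table1 a
INNER JOIN table3 c ON a.t1_t3ID = c.t3_PKID
INNER JOIN table2 b ON a.t1_t2ID = b.t2_PKID
WHERE '20140101' BETWEEN a.BeginDate AND a.EndDate
OPTION (MAXDOP 1);  -- limit this statement to a single scheduler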

Does anybody have an idea of what else can be checked or tested, or at least an explanation for this observation? Or is it simply the virtualization platform (VMware vs. Hyper-V) that causes such a difference in CPU time for databases?

Best Answer

If you have not done so, I would highly recommend:

1. Install the correct version of Hyper-V Integration Services on the host and guest (the equivalent of VMware Tools). This alone can be a performance boost.

2. Within Hyper-V Manager, disable processor compatibility mode.

3. Consider using a SCSI controller for the data volumes.

4. Consider fixed-size virtual disks rather than dynamically expanding ones.
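
Independent of the hypervisor-side settings, it is worth confirming what the guest actually exposes to SQL Server. A sketch using standard DMVs (virtual_machine_type_desc is available from SQL Server 2008 R2 onwards; this shows what the engine sees, not the hypervisor configuration itself):

-- core count, hyperthread ratio, schedulers and whether a hypervisor is detected
SELECT cpu_count,
       hyperthread_ratio,
       scheduler_count,
       virtual_machine_type_desc
FROM sys.dm_os_sys_info;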

Should you implement any of these techniques, I'd be interested in knowing if any performance improvement was gained.