Does anyone have any links to performance benchmarks of the database engine, or any anecdotal reports of the performance improvements in SQL Server 2012? We are only using the DB engine, not Reporting Services or BI. Any advice/gotchas from an upgrade point of view?
Sql-server – SQL Server 2012 performance v SQL Server 2008 R2
sql-server-2008-r2, sql-server-2012
Related Solutions
Business Intelligence Edition
Business Intelligence edition has some useful features, like Master Data Services and non-additive aggregations (i.e. anything other than sum/count). EE has partitioning and the rest of the large-database features, which are mostly relevant to users with large data volumes. If you have less than (say) 100GB of data then you can probably get by with BI edition. B.I. edition also has a limit on the number of CPU cores and the amount of memory that can be used by the database server, although this does not appear to apply to Analysis Services or Reporting Services.
A more detailed breakdown of the S.E., B.I. and E.E. features can be found here.
Some things that will be fine with B.I. edition
Most OLAP applications - B.I. edition seems to give you the clever aggregates (last non-empty etc.) and other features that SE doesn't on the OLAP server. By the look of the link, all SSAS features present in EE are present in B.I. edition, which makes it a bit more of a contender for data marts.
MDM applications - B.I. edition comes with Master Data Services.
Moderate data volumes. You can probably get away with (say) 100GB or so on BIE by applying brute force at the hardware level (fast disks).
B.I. edition supports distributed partitioned views, which gives you a basic read-only sharding capability. However, the additional hardware and licensing may be no cheaper than biting the bullet and getting EE.
SSRS seems to be the same across B.I. and Enterprise editions.
Memory and CPU core limits do not apply to SSAS and SSRS servers.
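The distributed partitioned views mentioned above can be sketched roughly like this (server, database and table names are hypothetical; each member table holds a disjoint range enforced by a CHECK constraint so the optimizer can skip remote branches):

```sql
-- Hypothetical read-only distributed partitioned view across two linked servers.
-- Sales_2010 and Sales_2011 each carry a CHECK constraint on SaleYear
-- (e.g. CHECK (SaleYear = 2010)) so queries filtering on SaleYear only
-- touch the relevant member table.
CREATE VIEW dbo.AllSales
AS
SELECT SaleID, SaleYear, Amount FROM Server1.SalesDB.dbo.Sales_2010
UNION ALL
SELECT SaleID, SaleYear, Amount FROM Server2.SalesDB.dbo.Sales_2011;
```

A query such as `SELECT SUM(Amount) FROM dbo.AllSales WHERE SaleYear = 2011` would then be routed to only one member server.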
Some things you will need Enterprise Edition for
If you have compliance requirements for physically secure data then the encryption and audit facilities of EE may be desirable. Note that this is new in 2012.
Table partitioning is an EE only feature. If you want to use table partitions to manage large data volumes you will need EE.
Star join transformations are only supported in EE. If you have an application with lots of highly selective (<1% coverage) queries on a very large fact table you might get a win from star transformations. This feature isn't really well documented in SQL Server circles, though, so it's hard to tell how well it works in practice.
Columnstore index - if you want to use this for fast ROLAP applications (using report builder or a 3rd party ROLAP tool such as Business Objects) then you may get significant mileage from this feature on EE.
Table compression may be useful for archiving old data.
B.I. edition only supports servers of a certain size - 64GB RAM, 4 sockets or 16 cores for the database server. If you want to scale above a two socket machine then you will probably need EE.
B.I. edition is only licensed on a 'Server + CAL' basis.
Parallel DBCC and index builds are only supported in EE. If you want to drop and recreate indexes for ETL loads this may reduce your run-times, particularly on incremental loads onto large incumbent data sets.
EE has a query re-write facility (called 'automatic use of indexed views by the query optimiser'). If you want to use indexed views to boost ROLAP performance you may want EE. However, although this is quite a mature feature on Oracle, I can't really vouch for how well it works on SQL Server in practice - although SQL Server does have a CUBE operator on GROUP BY, which is mostly intended for this application.
EE has fast Oracle and Teradata adaptors for SSIS, and adaptors for various other 'enterprisey' sources such as SAP BW.
Some of the MDM-ish features of SSIS - e.g. fuzzy lookups - are only available in EE.
Change Data Capture is an Enterprise Edition only feature.
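Two of the EE-only items above (columnstore indexes and parallel index operations) look like this in practice; table and index names are hypothetical:

```sql
-- Nonclustered columnstore index (EE only in SQL Server 2012) for
-- scan-heavy ROLAP queries. Note: in the 2012 release the base table
-- becomes read-only while this index exists.
CREATE NONCLUSTERED COLUMNSTORE INDEX IX_FactSales_CS
ON dbo.FactSales (DateKey, ProductKey, StoreKey, SalesAmount);

-- Parallel index rebuild; parallelism for index operations is EE only.
ALTER INDEX PK_FactSales ON dbo.FactSales REBUILD WITH (MAXDOP = 8);
```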
Really Short Answer - In Place is okay. You can review your configuration afterwards and implement the best practices for SQL Server 2012.
A Longer Answer on SQL Server Upgrades/Migrations
So this is an opinion thing and there isn't necessarily a wrong or right answer, but I prefer migration-style upgrades over in-place for a lot of reasons. That being said, some of my clients have had no choice for various reasons but to do an in-place upgrade, and really, since SQL Server 2005, in-place upgrades haven't been as bad as they used to be.
Why I Prefer A Migration to an In-Place Upgrade
- Easier Rollback - If something goes wrong you can roll back by simply saying "we aborted the upgrade; please change connection strings to the old server while we resolve this". With an in-place upgrade you are fixing it or you are down.
- Refresh Hardware - Hardware changes rapidly. You can easily get stuck on hardware that was right for your company 4 years ago but not right for today and the next four years with an in-place upgrade. You will likely have to do a migration at some point anyway for the new hardware.
- Feel Better - Sure... This one is subjective, but it feels good knowing you are starting with a new OS installation, a new SQL installation with no cobwebs from the person on the job before you (or you before you knew what you knew today) that could possibly cause you headaches in the future.
- New OS - A migration gives you a chance to start with a new OS version if you aren't on the latest and greatest today.
- You Can Test It - Ever want to get a set of baselines on a new machine before you install SQL and clutter it up with databases and usage? You can do that now.
- Sometimes it is easier to sneak in best practices - Maybe the SQL Server service account was a local admin. Maybe Builtin Administrators is in the SA server role. Maybe things have been sort of hacked together to make it work before. You can fix that all and start fresh.
- Free test environment and extra sleep - It is a great benefit to have an environment you can work in ahead of the actual cutover day when you make this new environment live. Doing a migration to a new environment means you can build it during business hours, well ahead of your actual cutover day and test it in many ways ahead of time. You can run full regression testing on all applications and systems for days and have a great peace of mind before you actually do the final set of restores/attaches and cutover all applications and access to the new environment.
- You don't have to do it all at once - A very common situation I bump into is an environment that is trying to consolidate to just a few instances. Perhaps one per version, perhaps one per "tier" and version. A lot of these projects have different timelines for various applications and databases based on testing, project plans and vendor certification timelines. Doing the migration means you can move the databases that are ready, when they are ready, and still handle requests for those databases that can't move for one reason or another.
Mind you, I am not saying you have to do this as a migration. In-place works, and it works well if new hardware isn't in the budget for this upgrade. The support in the upgrade process is so much better than it was in the 6.5 days, so you are not putting yourself in a bad position doing this.
If you do plan on doing in-place for dev/test but want to do a migration for production you might consider doing at least one migration before production. This way you can work out your checklist ahead of time and deal with any potential issues that you weren't thinking of.
Attach/Detach vs. Backup/Restore for Migrations
If you decide to go with the migration approach, there is still one more decision you may debate: how you move your database to the new environment. You can either detach your database from the old server and attach it to the new one, or back it up and restore it there.
I prefer backup/restore. The biggest advantage I hear about detach/attach is that it saves some time. For me backup/restore wins for a few reasons:
- Keep the old accessible - This allows you to still have an accessible database on the source server. detach/attach should do the same, but it will require a few steps and there is room for human error with detach/attach that could complicate this.
- You are guaranteeing that you have a backup - Instead of just taking a database from a detach and potentially forgetting a backup step, you've made sure that you've taken that backup.
- Human error - If you delete the wrong file, forget where you are sending something or otherwise mess up your steps, you risk a lot by moving the data and log files around for your database. You can mitigate this by copying instead of cutting (and if you do detach, you should get out of the cut-and-paste habit), but you could still mess up. SQL Server is no longer locking those files and it's just too easy to delete a file accidentally for me to risk it.
- It isn't really that much slower - Taking a backup and copying it takes a bit more time, but not so much that I am willing to pay the extra risk to avoid it. In fact, using the full recovery model and log backups, you can bring the downtime for cutovers even lower, as described below in "How to Make the Migration Approach Faster".
If you do decide to do the backup/restore, that means your old source database will still be online. I like to bring that database offline after taking the backup. I sometimes go a step further and take the whole SQL instance offline after I've scripted out security, jobs, linked servers, certificates, database mail settings, and other instance-wide information. This avoids an issue during testing where someone says "Everything looks great!" only to realize a day or two later that they have been talking to the old database on the old server. Taking those databases offline, or the whole instance offline, allows you to prevent those false positives and the mess they make.
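Taking the old database offline is one statement (database name hypothetical):

```sql
-- After the final backup, take the source database offline so nobody
-- accidentally connects to it during testing of the new environment.
-- ROLLBACK IMMEDIATE kicks out any open sessions.
ALTER DATABASE OldAppDB SET OFFLINE WITH ROLLBACK IMMEDIATE;

-- If you need to roll back to the old server:
-- ALTER DATABASE OldAppDB SET ONLINE;
```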
How to Make the Migration Approach Faster
You can minimize the downtime required for the cutover from an old to a new environment, even for a busy production system, by utilizing the full recovery model. Basically: stage the environment you are migrating to by restoring the latest full backup, any differential backups and any already-taken log backups specifying NORECOVERY; then all you have to do for the final cutover is restore the log backups that were not yet restored, and the final log backup, specifying WITH RECOVERY. This way, for a large database, the actual cutover downtime window can be drastically minimized by paying the cost of the full, differential and most log restores ahead of the downtime window. Thanks to Tao for pointing this out in the comments!
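The staged restore sequence looks like this (file paths and database name are hypothetical):

```sql
-- Stage the new server well ahead of the downtime window.
-- NORECOVERY leaves the database in RESTORING state, ready for more logs.
RESTORE DATABASE AppDB FROM DISK = N'\\backupshare\AppDB_full.bak'  WITH NORECOVERY;
RESTORE DATABASE AppDB FROM DISK = N'\\backupshare\AppDB_diff.bak'  WITH NORECOVERY;
RESTORE LOG      AppDB FROM DISK = N'\\backupshare\AppDB_log1.trn'  WITH NORECOVERY;

-- During the cutover window, only the remaining log backups and the
-- final tail-log backup are left to restore:
RESTORE LOG      AppDB FROM DISK = N'\\backupshare\AppDB_tail.trn'  WITH RECOVERY;
```

Everything before the last statement can happen during business hours; only the tail-log restore sits inside the outage.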
How to Make the In-Place Upgrade Safer
A few things you can do to improve your experience and results when choosing the in-place approach.
- Backup - Take appropriate backups of all user and system databases in your environment ahead of time and ensure they are good (I'm paranoid: I would actually restore them somewhere first to really know they are good. That may be wasting your time, but you may thank yourself in the event of a disaster). Script out any configuration information about the SQL and OS installation in that environment.
- Test things well before you start - Verify that you have a good environment and good databases. You should be doing things like looking at the error logs and running DBCC CHECKDB on a regular basis, but before doing an in-place upgrade is a great time to start. Fix any issues ahead of time.
- Ensure OS health - Don't just make sure that SQL is healthy, make sure your server is healthy. Any gnarly errors in your system or application error event logs? How is your free space?
- Prepare for the worst - I had a blog post series a while ago built on the premise that if you aren't preparing for failure, you are actually preparing to fail. I still believe that. So think through the issues you may have and deal with them accordingly ahead of time. Get yourself into the "failure" mindset and you will think of things that you wouldn't have otherwise.
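A couple of the health checks above can be sketched in T-SQL (database name hypothetical):

```sql
-- Verify database integrity before the upgrade; fix any errors first.
DBCC CHECKDB (N'AppDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- Scan the current SQL Server error log for entries containing 'error';
-- repeat for the Windows event logs on the OS side.
EXEC sys.sp_readerrorlog 0, 1, N'error';
```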
The Importance of Upgrade or Migration Checklists
If you do decide to do an upgrade (whether in place or migration) you should seriously consider creating a checklist and using this checklist in each environment. You should include a bunch of things in this checklist, not the least of which:
- At the Start - Do some things like perform a test upgrade, test your applications on the latest Database Compatibility level and consider running a tool like the SQL Server Upgrade Advisor ahead of time to see what sort of tasks you need to complete before doing the SQL Server upgrade or migration.
- Pre Steps - Cleanup, OS tasks, patching ahead of time, preparing applications for the upgrade (clean shutdowns, connection string work), backups, etc.
- Upgrade/Migration Steps - Everything you have to do for the upgrade or migration to succeed, in the right order: installation, compatibility level changes to databases (or not, depending on your testing and approach), etc.
- Post Migration/Upgrade Steps - Various tests, post new version or new server configuration options, best practice implementation, security changes, etc.
- Rollback Steps - All along the way you should have rollback steps and milestones. If you get this far and this happens, what will you do? What are "do a full rollback" criteria? And how do you do that rollback (reverse connection string changes, change back settings, go back to the old version, reinstall if an in place, point back to the old server if a migration, etc.)
And then have the person who will be doing the production upgrade follow the checklist in some environment other than production, especially one that closely resembles production if possible ("South of prod", as I say...), and note any issues or points where they had to divert from the checklist or improvise because of a gap in the checklist. Then get the changes merged in and have fun with your production change.
I cannot overstress the importance of testing thoroughly after your migration or upgrade, and ahead of it. Making a rollback decision in the midst of an upgrade should be an easy one, especially during a migration: if something feels wrong and you can't troubleshoot it effectively and reliably in the heat of the migration, roll back and figure it out. Once you are live on the new environment and users connect, rollback becomes a difficult task. You can't restore a SQL Server database to an earlier version; that means manual work and data migrations. I always wait a couple of weeks to kill the old environment, but you should do all you can to avoid needing that old environment by finding all of your issues before your live users ever touch the new environment - preferably before you even kick off the upgrade/migration.
Quick note about SQL Server Reporting Services migration/upgrade: migrating an SSRS installation isn't quite the herculean task that many think it is. This TechNet/Books Online article is actually quite handy. One of the most important admonitions in that article is "Back up the encryption keys", especially if you have a lot of saved sensitive information like scheduled report e-mail recipient addresses, connection information for a multitude of connections, etc. You can ask one of my clients from a while back how important that is. They know because I messed that step up and spent quite a lot of time modifying report schedules and connection string permissions.
Best Answer
There are some underlying performance enhancements in filestream and full-text search. I don't have the details handy. I am also sure there are other underlying enhancements that aren't publicized, as well as fixes from 2008/R2 that made it into service packs and cumulative updates on those platforms and were ported forward into SQL Server 2012. Those would be a bit quieter because, while 2012 was in development, there wasn't a need for a public KB article for every change.
Some of the new windowing functionality will make calculations like running totals perform much better, and OFFSET can be used to make child's play out of things like calculating a median. FORCESEEK also now allows you to actually specify the index you want used, instead of taking whatever SQL Server thought you meant. But those require code changes.
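To make those two features concrete (table and column names hypothetical):

```sql
-- Running total using the 2012 window-frame syntax; far cheaper than the
-- old self-join or correlated-subquery approaches.
SELECT OrderDate, Amount,
       SUM(Amount) OVER (ORDER BY OrderDate
                         ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
           AS RunningTotal
FROM dbo.Orders;

-- Median via OFFSET/FETCH (shown for an odd number of rows); OFFSET
-- accepts a constant scalar subquery, so we can skip to the middle row.
SELECT Amount AS Median
FROM dbo.Orders
ORDER BY Amount
OFFSET (SELECT (COUNT(*) - 1) / 2 FROM dbo.Orders) ROWS
FETCH NEXT 1 ROWS ONLY;
```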
There is also going to be more predictable performance with regard to memory use - the memory manager now handles more than just single-page allocations, so "Max server memory" has a much more accurate meaning than it has in prior versions. Resource Governor memory settings also take advantage of this, and can be used for scheduler/NUMA affinity as well.
Troubleshooting performance is also a lot easier. For example there is a lot more information included in execution plans, and extended events has almost full parity with trace (and adds much more powerful causality tracking).
I was surprised that they didn't publish TPC stuff well in advance of the release date (not that those tests really represent real-world stuff, but it's fun to watch the "my dad can beat up your dad, at least today" arguments unfold between the vendors). Glenn Berry blogged about one of the SQL Server 2012 TPC-E results. I apologize but I don't have time to investigate whether there is an easy way to compare this apples to apples against similar benchmarks using 2008 or 2008 R2. The problem with aged benchmarks is that they're never on the same hardware, so while the new benchmarks are always faster (or else they won't get released), it's impossible to tell how much of the improvement was based on the hardware and how much on the software.
Your best bet is to install SQL Server 2012, restore a copy of your database, update the compatibility level, update statistics, and run your workload on your hardware using your data.
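Those last steps, after the restore, are short (database name hypothetical):

```sql
-- Put the restored copy on the new engine's behavior; 110 = SQL Server 2012.
ALTER DATABASE AppDB SET COMPATIBILITY_LEVEL = 110;
GO
USE AppDB;
GO
-- Refresh statistics before replaying your workload so the new optimizer
-- works from current data distributions, not stale ones.
EXEC sys.sp_updatestats;
```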