If you're talking SQL Server, I have some specific advice about making database changes backward compatible. The ideas and concepts are quite similar to other platforms, but the code samples won't help much.
The basic idea is this: make your changes in such a way that they don't require downtime and don't break existing code. For example, if you add a new parameter to a stored procedure, don't make it required. Start out by making it optional; that way you can change the application to support the new parameter later, and nothing breaks in the meantime (the code in the procedure can decide what to do depending on whether the parameter was supplied). This lets you update multiple applications in a staggered way instead of all at once, and it also lets you revert the application to the old behavior without having to roll back the database changes. Once all the applications have been updated to support the parameter, you can make it required.
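A minimal T-SQL sketch of that pattern (the procedure, table, and column names here are hypothetical):

```sql
-- The new @Region parameter defaults to NULL, so existing callers
-- that don't pass it are unaffected.
ALTER PROCEDURE dbo.GetOrders
    @CustomerId int,
    @Region     varchar(10) = NULL  -- new, optional for now
AS
BEGIN
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
      AND (@Region IS NULL OR Region = @Region);  -- old behavior when omitted
END;
```

Once every caller supplies `@Region`, the default can be removed to make the parameter required.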
A big "it depends" here, because I have no idea what kind of scope your database changes encompass, but here are some ideas about how I accomplished this for specific kinds of changes:
Make your SQL Server database changes backward compatible when:
- adding a new column
- dropping a column
- renaming an entity
- changing a relationship
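For the first case, for example, a column added as NULLable (or with a default) doesn't break existing INSERT statements that don't name it (table and column names below are made up):

```sql
-- Step 1: add the column as NULLable; old code keeps working unchanged.
ALTER TABLE dbo.Customers
    ADD PreferredName nvarchar(100) NULL;

-- Step 2 (later, once every application writes the column):
-- backfill existing rows, then tighten the constraint.
UPDATE dbo.Customers
SET PreferredName = FirstName
WHERE PreferredName IS NULL;

ALTER TABLE dbo.Customers
    ALTER COLUMN PreferredName nvarchar(100) NOT NULL;
```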
That all said, your deploy scripts for databases should follow the same methodology as your code: repeatable, verifiable scripts stored in source control, as opposed to hand-written instructions about where to point and click in some UI to make a change. Why should your mentality about how to deploy database changes be any different from how to deploy code changes?
... was hoping I could get ... a rough rough estimate of what we should be running.
Without more information about your queries and data sizes, it's really difficult to give you any kind of estimate, let alone an accurate estimate.
Database: SQL Server 2008 R2 Enterprise
Windows: Windows Server 2008 R2 Enterprise 64-bit, pretty sure running on VMware.
Processor: Intel(R) Xeon(R) CPU E7-4860 @ 2.27 GHz (2 processors)
Installed memory: 4 GB
Two processors (I'm assuming this is exposed in the VM as 2 cores) may or may not be under-provisioned. The cores assigned to a VM aren't necessarily mapped directly to physical cores (or even allowed to use 100% of a single core when it's needed!), so you may find this is a more flexible resource than memory. Without any more information about your workload or hardware/virtualization configuration, I would say increasing this to 4 would be nice-to-have.
Memory allocation. Oh boy. This is grossly under-provisioned for the workload. Windows itself needs a bare minimum of 2-3 GB to stay happy, and each of the 2 users running BIDS on the box will require at least 500 MB. With that alone, the box is already maxed out, and I haven't even started figuring out how much the database is going to need.
Majority of users interact with database through asp.net website and Report server website.
You didn't say, but if these are running on the same box, memory requirements for them need to be taken into account as well.
Finally, we have a fairly involved data warehousing operation that probably brings in 3 million records per day by way of SSIS packages that also run on the server.
Assuming this runs at night when there are no live users on the system, I don't see this as a problem unless it's taking too long to run. This part of things is the least of your worries; live users are more important.
Our previous requests for additional memory has been denied with the common response that we need to perform more query optimizations.
As I demonstrated above, the current amount of memory that's been provisioned is completely inadequate. At the same time, though, at the other end of the spectrum, it's exceedingly unlikely you'll be able to get enough memory provisioned to be able to keep the entire database in memory at once.
Even though you got a blanket response like that (which, by the way, probably had more to do with how persuasive your justification for additional resources was than with the actual resource usage itself), it's highly likely the efficiency of the database could be improved. Yet no amount of tuning alone can fix the issues you're experiencing now; suggesting that it can is a complete non-starter to me.
I would take the overall approach that the amount of memory currently provisioned is below the minimum required (which should be corrected ASAP), and additional resources may be required to improve the user experience to a usable level while improvements are made to increase the efficiency of the systems.
Here are a few thoughts (in order of attack):
You will win if you can prove how much performance improves every time you get more resources provisioned. Keep track of performance metrics using Performance Monitor logging (note: the logging part is very important), including website response times if you can. Start doing this now, before doing anything else. When you do finally get to the minimum amount of memory (you aren't going to get 32 GB right away), you now have evidence that the added memory improved things... which means adding even more would probably help, too! If you don't collect a baseline on the current configuration, you're going to miss the boat when things are bumped up to the minimum recommended level.
Analyze your server's wait statistics. This will tell you what the biggest bottleneck in the system is. You'll probably see PAGEIOLATCH_XX as the most common/highest wait, which indicates too much I/O is being done to fetch pages from disk. This can be alleviated by adding memory, so physical I/Os become less frequent because the needed data is already in memory. While this analysis is pretty much a foregone conclusion, the fact that you've gathered these stats at all gives you more ammo when justifying the need for resources.
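As a rough triage query against the wait-stats DMV (the excluded wait types below are a small illustrative subset of benign system waits, not a complete list):

```sql
-- Top waits accumulated since the last restart (or stats clear).
SELECT TOP (10)
       wait_type,
       wait_time_ms / 1000.0        AS wait_time_s,
       signal_wait_time_ms / 1000.0 AS signal_wait_s,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('LAZYWRITER_SLEEP', 'SLEEP_TASK', 'CHECKPOINT_QUEUE',
                        'REQUEST_FOR_DEADLOCK_SEARCH', 'BROKER_TASK_STOP',
                        'XE_TIMER_EVENT', 'WAITFOR')
ORDER BY wait_time_ms DESC;
```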
As I mentioned above, the bare minimum requirement for memory is not being met. Collect the set of recommended hardware requirements for all the software you're running, and maybe also grab screenshots of Task Manager. This alone should be enough to justify at least 4-8 GB more, on the spot. If they still refuse, try to convince them to allow you to try it out for a week, and give it back after that (you're collecting performance stats, so you won't need to give it back because mid-week you'll be able to prove how much it's improved the situation). If they still refuse, you're being set up to fail; URLT.
If you can offload some of the workload (in particular, avoid remoting in if at all possible), this will increase the amount of memory available for the database, which is more critical.
You won't be able to fit the entire database in memory at once, which means you need to set SQL Server's max memory setting very carefully to prevent memory over-commit, which kills performance like nothing else. Over-commit is actually even worse than simply not being able to fit all the data in memory. It's highly likely you're in this scenario right now simply because there's just no memory available at all, and it's probable that the max memory setting is set to the default (unlimited).
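For example, on a hypothetical 4 GB box you might cap SQL Server well below the total to leave room for Windows and the other processes (the 2560 MB figure is only an illustration, not a recommendation for your workload):

```sql
-- 'max server memory (MB)' is an advanced option, so expose it first.
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap the buffer pool so the OS and other processes aren't squeezed out.
EXEC sys.sp_configure 'max server memory (MB)', 2560;
RECONFIGURE;
```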
Since you're running SQL Server Enterprise Edition, and memory is at a premium, I would strongly consider implementing data compression. This will trade off an increase in CPU usage for space-savings of memory (and hence reduced disk accesses, which are comparatively very slow).
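A sketch of how that might look (the table name is made up); it's worth estimating the savings before committing to a rebuild:

```sql
-- Preview how much space PAGE compression would save on this table.
EXEC sys.sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'FactSales',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';

-- If the estimate looks good, rebuild with compression enabled.
ALTER TABLE dbo.FactSales REBUILD WITH (DATA_COMPRESSION = PAGE);
```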
Tune the database. It's likely the structures and queries could use improvements as far as indexing and access patterns go. Also, if a lot of data is being frequently scanned and aggregated, creating indexed views, summary tables, or precomputed reports may be very helpful.
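As a sketch of the indexed-view idea (all names are hypothetical): the view must be created WITH SCHEMABINDING and include COUNT_BIG(*) when it aggregates, and it becomes materialized once a unique clustered index is added.

```sql
-- Pre-aggregate a frequently scanned table into a materialized summary.
CREATE VIEW dbo.v_DailySales
WITH SCHEMABINDING
AS
SELECT SaleDate,
       SUM(Amount)  AS TotalAmount,
       COUNT_BIG(*) AS RowCnt        -- required for indexed views with GROUP BY
FROM dbo.FactSales
GROUP BY SaleDate;
GO

-- The unique clustered index is what persists the view's result set.
CREATE UNIQUE CLUSTERED INDEX IX_v_DailySales
    ON dbo.v_DailySales (SaleDate);
```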
This might be a longshot because it probably means more hardware provisioning, but implement a caching solution. The fastest query is the one you never make.
Those are just a few ideas. The bottom line is that tuning alone will not solve the problems here, nor will hardware alone, even though the latter probably will alleviate the majority of the immediate issues. That's really how it goes: throw hardware at the problem in the short-term to put out the fire, and throw tuning at the problem in the long-term to fix the root cause as best you can.
Demonstrating ROI on hiring additional DBAs is hard, since DBAs don't generate revenue by themselves. They do, however, allow the organization to function and generate revenue, just like any other supporting staff. A lack of staff in any department makes the organization function less optimally and lose money or opportunity. So there is a cost from a lack of investment rather than a return on investment.
If you feel you need additional DBAs, you must know what tasks you are unable to perform because of a lack of time or skill. If it's a lack of skill (such as the SSAS stuff you mentioned), that can be trained; if it's a lack of time, it could be temporary or structural. I've been in situations where you just know your life would be a lot easier if you could automate this and that, or set up some alerting, but you never get around to it because of too much reactive or repetitive work and general firefighting.
Once you have determined why your team isn't functioning as well as it should, it's time to document it.
First of all, document what you do now and why that takes up all your time, and make clear up front to your manager that the existing problems that need solving are absolutely none of your team's fault. Make sure you don't give the impression that the problem would be solved by hiring 'better' DBAs.
Make an Excel sheet showing your workload for the coming weeks if you can; charts impress, and one can show your workload in a single slide.
Once you are clear on that, document all additional tasks and everything that needs improving, but especially document what the business risks and possible costs are, and what the savings could be. Also document how many man-hours you think would be needed to make those improvements and perform those tasks.
Things that go on the investment side of the equation are:
Things that go on the savings side of the equation are:
Once you have compiled the list as completely as you can, it should be clear what the best price/reward option is.