SQL Server 2000 is incompatible with Windows Server 2008. That said, I have only ever tried to install it on Windows 7 (which shares the same code base as Windows Server 2008), and it outright rejected the install.
You might be able to install it, but it will depend on the exact features you select. For example, I know that you cannot install Reporting Services, because it depends on IIS 6, which is not available on Windows Server 2008. There is a short thread on SQL Server Central where someone tried to install SQL Server 2000 on Windows Server 2008 and ran into problems.
If you must use SQL Server 2000, I would recommend putting it on a VM running a known-compatible version of Windows Server - though to be frank, I'd rather move the databases to a newer version of SQL Server and run them in compatibility mode (though this may not be an option for you).
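For the compatibility-mode route, a migrated database can be pinned to SQL Server 2000 behaviour after you attach or restore it on the newer instance. A minimal sketch, assuming a hypothetical database name; level 80 (the SQL Server 2000 level) is supported up through SQL Server 2008 R2 and was removed in SQL Server 2012:

```sql
-- 'LegacyDb' is a placeholder name. Level 80 = SQL Server 2000 behaviour;
-- available on SQL Server 2005-2008 R2, removed in SQL Server 2012.
ALTER DATABASE LegacyDb SET COMPATIBILITY_LEVEL = 80;

-- Verify the setting:
SELECT name, compatibility_level
FROM sys.databases
WHERE name = 'LegacyDb';
```

Compatibility level only changes language/query behaviour, not the on-disk format, so test the application against it before decommissioning the old server.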
I hope this helps you.
Sorry, not to disparage Thomas' advice, but please take "general rules" with a grain of salt, or just throw them out the window altogether.
Baseline.
What is normal for your system? Is the system currently responding ok?
If there is no performance issue, don't compare your system to some number that someone plucked out of the air, or that was based on a very specific system and workload years ago, and drop everything to try to "fix" it.
Specifically, batch requests and compilations don't have a very nice and handy correlation in ALL scenarios. You need to understand your workload before you start panicking because your counters hit some threshold someone put in a post somewhere. If all of your batches consist of exactly one statement, then yes, having more compilations/sec than batch requests/sec might seem out of the ordinary (but still might not indicate a problem). In most cases, you are sending more than one statement in a batch. If this is the case - and particularly if you are using things like ORMs or a lot of highly variable dynamic SQL, where you will be suffering from a high number of compilations - I would really not be surprised to see one counter higher than the other.
Whether you need to do something about that, in that case, is a completely different problem.
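If you want to see where your own system stands, the same counters perfmon exposes can be read from inside SQL Server. A sketch; the DMV values are cumulative since the last restart, so take two samples a known interval apart and subtract them to get true per-second rates:

```sql
-- Cumulative since startup: sample twice and diff for per-second rates.
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%SQL Statistics%'
  AND counter_name IN ('Batch Requests/sec',
                       'SQL Compilations/sec',
                       'SQL Re-Compilations/sec');
```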
Looking at the first two alerts, it seems SQL Server might be facing a memory crunch. It would be better to confirm that using perfmon counters. I use the counters below to check how much memory SQL Server needs.
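All of the counters discussed below can also be read from inside SQL Server via `sys.dm_os_performance_counters`, which is handy when you don't have perfmon access. A sketch (the `LIKE` on object_name avoids hard-coding the `SQLServer:` vs. `MSSQL$instance:` prefix):

```sql
SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Page life expectancy',
                       'Checkpoint pages/sec',
                       'Memory Grants Pending',
                       'Target Server Memory (KB)',
                       'Total Server Memory (KB)',
                       'Free pages');
```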
If your BCHR is high (90 to 100), it points to the fact that you don't have memory pressure. Keep in mind that if somebody runs a query which requests a large number of pages, BCHR might momentarily drop to 60 or 70, maybe less, but that does not mean there is memory pressure; it means the query requires a lot of memory and will take it. After that query completes you will see BCHR rising again.
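Note that if you read BCHR from `sys.dm_os_performance_counters` rather than perfmon, the raw value has to be divided by its companion "base" counter to get the percentage. A sketch:

```sql
-- The ratio counter is only meaningful relative to its base counter.
SELECT CAST(100.0 * a.cntr_value / b.cntr_value AS decimal(5, 2)) AS [BCHR %]
FROM sys.dm_os_performance_counters AS a
JOIN sys.dm_os_performance_counters AS b
    ON a.object_name = b.object_name
WHERE a.object_name LIKE '%Buffer Manager%'
  AND a.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base';
```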
PLE shows how long a page remains in the buffer pool; the longer it stays, the better. It's a common misconception to take 300 as a baseline for PLE, but it is not valid any more. I read in Jonathan Kehayias' book (Troubleshooting SQL Server) that this value was a baseline back when SQL Server 2000 was current and the most RAM you would see was 4-6 GB. Now, with 200 GB of RAM in the picture, this value is no longer correct. He also gave a (tentative) formula for calculating it: take the base value of 300 presented by most resources, then scale it by the configured buffer cache size - the 'max server memory' sp_configure option in SQL Server - divided by 4 GB. So, for a server with 32 GB allocated to the buffer pool, the PLE value should be at least (32/4)*300 = 2400. So far this has worked well for me, so I would recommend you use it.
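That formula is easy to compute directly from the configured 'max server memory'. A sketch; note that if max server memory was never set, value_in_use is the default 2147483647 MB and the result is meaningless:

```sql
-- Suggested minimum PLE = (max server memory in GB / 4) * 300
SELECT CAST(value_in_use AS bigint) / 1024          AS [max server memory (GB)],
       CAST(value_in_use AS bigint) / 1024 / 4 * 300 AS [suggested minimum PLE]
FROM sys.configurations
WHERE name = 'max server memory (MB)';
```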
The Checkpoint pages/sec counter is important for spotting memory pressure, because if the buffer cache is small, lots of new pages need to be brought into and flushed out of the buffer pool; under that load the checkpoint's work increases and it starts flushing dirty pages very frequently. If this counter is high, your SQL Server buffer pool is not able to cope with the incoming requests, and you need to grow it by increasing buffer pool memory, or by adding physical RAM and then making the corresponding change to the buffer pool size. This value should be low; if you are looking at a line graph in perfmon, it should hug the baseline on a stable system.
This value should not be low; you always want to see a high value for it.
If you see memory grants pending, your server is facing a SQL Server memory crunch, and increasing memory would be a good idea. For memory grants, please read this article: http://blogs.msdn.com/b/sqlqueryprocessing/archive/2010/02/16/understanding-sql-server-memory-grant.aspx.
Target Server Memory (KB): this is the amount of memory SQL Server is trying to acquire.
Total Server Memory (KB): this is the amount of memory SQL Server has currently acquired.
A few points
1. If Target Server Memory is greater than Total Server Memory, there can be memory pressure. Let me put emphasis on the words can be; it is not a sure-shot signal. Please refer to this MSDN forum thread, where the OP had Target Server Memory greater than Total Server Memory, but because there were no memory grants pending and page life expectancy was high, there was no memory pressure.
Generally, on a stable system, these two values are equal.
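A quick way to put the two values side by side for comparison, as a sketch:

```sql
-- Pivot the two memory counters into one row for easy comparison.
SELECT MAX(CASE WHEN counter_name = 'Target Server Memory (KB)'
                THEN cntr_value END) AS target_server_memory_kb,
       MAX(CASE WHEN counter_name = 'Total Server Memory (KB)'
                THEN cntr_value END) AS total_server_memory_kb
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Target Server Memory (KB)',
                       'Total Server Memory (KB)');
```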
2. The Free Pages counter was removed in SQL Server 2012, and in any case its value does not hold as much importance as BCHR, PLE, Target Server Memory, and Total Server Memory.
The last one is the number of latches that could not be granted immediately. Are you seeing queries going into a suspended state? Latches are common and necessary; they are a lighter form of locks. Did you check your top waits - do they include LATCH_XX among the top wait types?
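To answer that last question yourself, the cumulative wait statistics are in `sys.dm_os_wait_stats`. A sketch; the excluded wait types are only a small sample of the benign background waits you would normally filter out:

```sql
-- Top waits since the last restart; look for LATCH_% / PAGELATCH_% near the top.
SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
                        'SQLTRACE_BUFFER_FLUSH', 'CHECKPOINT_QUEUE',
                        'REQUEST_FOR_DEADLOCK_SEARCH')
ORDER BY wait_time_ms DESC;
```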