a) Pretty much anything takes a schema stability lock. You don't want something else changing the structure of the table while you are updating its statistics. According to this, UPDATE STATISTICS takes schema stability and modification locks.
b) If something tries to change the table's structure, it will be blocked. IIRC, UPDATE STATISTICS does dirty reads, so it shouldn't block connections that are merely reading or writing.
c) If you use FULLSCAN, it will read the entire table, because that is what you told it to do. I don't see how that can be seen as anything but 'causing heavy I/O'. Normally the default sampling works well enough, but I have seen it cause problems with data that has non-homogeneous distributions. Often it's also easier to just reindex the whole table (especially if you can do it online), because reindexing is parallelizable whereas UPDATE STATISTICS isn't. (AFAIK, MS did not fix that in SQL Server 2008.)
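To make the options concrete, here is roughly how the three alternatives look (the table name is just a placeholder):

```sql
-- Default sampling: usually good enough, much less I/O
UPDATE STATISTICS dbo.YourBigTable;

-- Full scan: reads every row; accurate, but heavy I/O on a large table
UPDATE STATISTICS dbo.YourBigTable WITH FULLSCAN;

-- Rebuilding the indexes instead also refreshes their statistics with
-- the equivalent of a full scan, and the rebuild can run in parallel
-- (ONLINE = ON requires Enterprise Edition)
ALTER INDEX ALL ON dbo.YourBigTable REBUILD WITH (ONLINE = ON);
```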
Less than 5 percent of RAM is available or less than 64 MB of RAM is available
Less than 500 MB of free disk space
(Total Latch Wait Time) / (Latch Waits/sec) < 10
Looking at the first two alerts, it seems SQL Server might be facing a memory crunch. It would be better to confirm this using perfmon counters. I use the counters below to check how much memory SQL Server needs. Taken from Here
SQLServer:Buffer Manager--Buffer Cache hit ratio(BCHR):
If your BCHR is high (90 to 100), it points to the fact that you don't have memory pressure. Keep in mind that if somebody runs a query which requests a large number of pages, the BCHR might momentarily come down to 60 or 70, maybe less, but that does not mean there is memory pressure; it means the query requires a lot of pages and will take them. After that query completes you will see the BCHR rising again.
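If you prefer querying this from T-SQL rather than watching perfmon, BCHR is a ratio counter, so it has to be divided by its base counter:

```sql
-- Buffer cache hit ratio as a percentage
SELECT CAST(a.cntr_value AS float) / b.cntr_value * 100 AS [BCHR %]
FROM sys.dm_os_performance_counters AS a
JOIN sys.dm_os_performance_counters AS b
  ON b.object_name = a.object_name
WHERE a.object_name LIKE '%Buffer Manager%'
  AND a.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base';
```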
SQLServer:Buffer Manager--Page Life Expectancy(PLE):
PLE shows how long a page remains in the buffer pool. The longer it stays, the better. It's a common misconception to take 300 as a baseline for PLE, but it is not. I read in Jonathan Kehayias' book (Troubleshooting SQL Server) that this value was a baseline back when SQL Server 2000 was current and the most RAM you would see was 4-6 GB. Now, with 200 GB of RAM in the picture, this value is no longer correct. He also gave a (tentative) formula for how to calculate it: take the base value of 300 presented by most resources, and then determine a multiple of this value based on the configured buffer cache size, which is the 'max server memory' sp_configure option in SQL Server, divided by 4 GB. So, for a server with 32 GB allocated to the buffer pool, the PLE value should be at least (32/4)*300 = 2400. So far this has served me well, so I would recommend you use it.
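That rule of thumb can be computed straight from the instance's configuration (a sketch; it assumes 'max server memory' has been set explicitly rather than left at the unlimited default):

```sql
-- Suggested PLE baseline = (max server memory in GB / 4) * 300
SELECT CAST(value_in_use AS int) / 1024 AS [max server memory (GB)],
       CAST(value_in_use AS int) / 1024 / 4 * 300 AS [suggested PLE baseline]
FROM sys.configurations
WHERE name = 'max server memory (MB)';
```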
SQLServer:Buffer Manager--Checkpoint Pages/sec:
The Checkpoint pages/sec counter is important for spotting memory pressure, because if the buffer cache is small, lots of new pages need to be brought in and flushed out of the buffer pool; under that load the checkpoint's work increases and it starts flushing out dirty pages very frequently. If this counter is high, your SQL Server buffer pool is not able to cope with the incoming requests, and we need to relieve it by increasing buffer pool memory, or by increasing physical RAM and then making adequate changes to the buffer pool size. This value should be low; if you are looking at a line graph in perfmon, it should stay near the baseline on a stable system.
SQLServer:Buffer Manager--Free pages:
This value should not be low; you always want to see a high value for it.
SQLServer:Memory Manager--Memory Grants Pending:
If you see pending memory grants, your server is facing a SQL Server memory crunch and increasing memory would be a good idea. For memory grants, please read this article: http://blogs.msdn.com/b/sqlqueryprocessing/archive/2010/02/16/understanding-sql-server-memory-grant.aspx.
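Besides the perfmon counter, you can see which sessions are actually stuck waiting for a grant via a DMV:

```sql
-- Sessions currently waiting for a workspace memory grant
-- (grant_time IS NULL means the grant is still pending)
SELECT session_id, requested_memory_kb, wait_time_ms
FROM sys.dm_exec_query_memory_grants
WHERE grant_time IS NULL;
```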
SQLServer:Memory Manager--Target Server Memory:
This is the amount of memory SQL Server is trying to acquire.
SQLServer:Memory Manager--Total Server Memory:
This is the amount of memory SQL Server has currently acquired.
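Both counters can be read side by side from the performance counters DMV (values are reported in KB):

```sql
SELECT counter_name, cntr_value AS [KB]
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Memory Manager%'
  AND counter_name IN ('Target Server Memory (KB)', 'Total Server Memory (KB)');
```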
Few Points
1. If Target Server Memory is greater than Total Server Memory, there can be memory pressure. Let me put emphasis on the words can be; it is not a sure-shot signal. Please refer to this MSDN forum thread, where the OP had Target Server Memory greater than Total Server Memory, but because there were no memory grants pending and page life expectancy was high, there was no memory pressure.
Generally, on a stable system, these two values are equal.
The Free pages counter was removed in SQL Server 2012. Its value also does not hold as much importance as BCHR, PLE, Target Server Memory and Total Server Memory.
The last alert is about the number of latches which could not be granted immediately. Are you seeing queries going into a suspended state? Latches are common and necessary; they are a lighter form of lock. Did you check your top waits; do they include LATCH_XX among the top wait types?
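Checking the top waits is a quick DMV query (cumulative since the last restart, so interpret with that in mind):

```sql
-- Top wait types since the last restart; look for LATCH_% and PAGELATCH_%
SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0
ORDER BY wait_time_ms DESC;
```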
Best Answer
I don't really understand what you don't understand in the quote you included (strangely, as a screenshot), as it explains the difference quite clearly, in my opinion.
Locks and latches have different scopes and lifecycles. Locks apply to what you might call database physical model elements -- tables, rows, index entries. Latches protect various memory structures the database server uses when executing SQL statements or performing its housekeeping tasks.
A transaction might hold one, multiple, or no locks at all on the objects it is processing, which signals to other transactions what access they can have to those objects. An object protected by a lock doesn't have to be "in memory"; for example, a table protected by a table-level lock may not even have any of its pages present in the bufferpool.
Worker threads acquire and release latches to strictly prevent other concurrently running threads (that may be executing tasks within the same transaction, other transactions, or on behalf of some server background process) from simultaneously accessing certain memory areas. For example, two transactions may hold locks for different rows on the same bufferpool page, which would not prevent them from concurrently accessing their respective rows, if it weren't for the page latch that ensures the entire page remains consistent for all readers and writers. And then there is a lazy writer process, which couldn't care less about any of those locks but still must acquire a latch before it can write a consistent page out to disk.
In other words, locks are a transaction synchronisation mechanism while latches help synchronise processes or threads.
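You can see the distinction in the DMVs, too: locks are exposed per session against named resources, while latches only surface as aggregate wait statistics per latch class, with no notion of an owning transaction (a sketch, not a complete diagnostic):

```sql
-- Locks: tied to sessions/transactions and to database objects
SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks;

-- Latches: no owner list, just cumulative waits per latch class
SELECT latch_class, waiting_requests_count, wait_time_ms
FROM sys.dm_os_latch_stats
ORDER BY wait_time_ms DESC;
```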