would it even care to go through this memory grant queue or right away spill to the disk?
It doesn't work that way. Once a plan that requires a memory grant has been chosen, the query must obtain the grant, so it goes through the queue. Spilling, if any, occurs much later in the execution cycle. A query cannot decide to skip the grant and 'spill' instead: it must get the grant and only then start executing. If the grant turns out to be insufficient (because of a bad estimate, or because a much lower grant than requested was obtained), the query will be forced to spill.
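The grant-then-maybe-spill sequence can be sketched with a toy sort operator. This is a hypothetical Python model, not SQL Server code: `granted_rows` stands in for the workspace memory grant, and a "spill" writes sorted runs to temp files that get merged at the end.

```python
import heapq
import os
import tempfile

def sort_with_grant(rows, granted_rows):
    """Toy model of a sort operator with a fixed memory grant.

    The operator must start with whatever it was granted; if the input
    exceeds the grant, the overflow is spilled to disk as sorted runs
    and merged at the end (mimicking a sort spill to tempdb).
    """
    runs = []        # paths of spilled, sorted run files
    buffer = []      # in-memory workspace, capped at the grant
    spilled = False

    def spill():
        buffer.sort()
        f = tempfile.NamedTemporaryFile("w", delete=False, suffix=".run")
        f.write("\n".join(map(str, buffer)))
        f.close()
        runs.append(f.name)
        buffer.clear()

    for row in rows:
        if len(buffer) >= granted_rows:   # grant exhausted -> forced spill
            spilled = True
            spill()
        buffer.append(row)

    if not runs:                          # everything fit in the grant
        return sorted(buffer), spilled

    spill()                               # flush the last partial run
    iters = []
    for path in runs:
        with open(path) as f:
            iters.append([int(x) for x in f.read().split()])
        os.remove(path)
    return list(heapq.merge(*iters)), spilled
```

Note the shape of the logic: the grant is fixed before the first row arrives, and spilling is purely a runtime consequence of the input not fitting in it.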
if the query optimizer realizes that due to bad statistics or missing indexes
Strictly speaking it's not the optimizer, it's the query execution. The query optimizer only has a say in deciding the plan; once the plan is chosen and launched into execution, the optimizer is out of the picture. Also, 'missing indexes' play no role here. A missing index can force a bad plan, but it cannot influence how that 'bad' plan is executed, since the plan was built considering exactly which indexes actually exist, so it knows exactly what it can and cannot do. Spills occur almost always due to bad estimates, i.e. bad stats or bad cardinality estimation algorithms in the optimizer and in the execution (it gets really complex if you dig into the details, so I'll stop here).
Even if the system has plenty of workspace memory available during execution, once the query exceeds its granted workspace memory it will have to spill to disk
Unfortunately, yes. However, the optimizer should come up with a plan that leverages the available RAM.
I recommend reading Understanding SQL Server Memory Grant, which is the best information on the subject by a wide margin.
This isn't a complete answer, as I admit I don't know the complete answer, but it won't fit into a comment.
A short detour regarding SQLOS memory: there is a wealth of information from someone who worked on the internals of the SQL Server storage team, Slava Oks. He has lots of articles on the MSDN blogs about the SQLOS memory manager, and they are all worth a read. I know we're on SQL Server 2012 now, but the info is valuable nonetheless.
Now, regarding your first question (whether there is a limit on the memory taken by the database page cache before it is cleared out, and how it is controlled), I'll quote his words on it.
From article SQLOS's memory manager and SQL Server's Buffer Pool:
Buffer Pool commits pages on demand. Depending on internal memory requirements and external memory state, it calculates its target, amount of memory it thinks it should commit before it can get into memory pressure. To keep system out of paging target is constantly recalculated. Target memory can't exceed max memory that represents max server memory settings. Even if you set min server memory equal to max server memory Buffer Pool will only commit its memory on demand. You can observe this behavior by monitoring corresponding profiler event.
From this article's comments I understand that SQL Server uses an LRU algorithm (LRU = least recently used). If you really need to go into depth, SQL Server's LRU algorithm in particular is described in the following research paper: The LRU-K Page Replacement Algorithm for Database Disk Buffering.
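As a rough illustration of what that paper proposes, here is a minimal Python sketch of LRU-2, the K=2 case. The class and method names are mine, and real buffer pool eviction is far more involved; this only shows the core idea of evicting by the second-most-recent reference.

```python
import math

class LRU2Cache:
    """Minimal sketch of LRU-2 (the K=2 case of the LRU-K paper).

    On eviction it drops the page whose second-most-recent access is
    oldest; pages touched only once have an infinite "backward
    K-distance" and are evicted first (ties broken by plain LRU).
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.hist = {}    # page -> (last access time, second-to-last)
        self.clock = 0

    def access(self, page):
        self.clock += 1
        if page in self.hist:
            last, _ = self.hist[page]
            self.hist[page] = (self.clock, last)
            return "hit"
        if len(self.hist) >= self.capacity:
            # Evict the smallest second-to-last timestamp; -inf marks
            # pages accessed only once, so they go first.
            victim = min(
                self.hist,
                key=lambda p: (self.hist[p][1], self.hist[p][0]),
            )
            del self.hist[victim]
        self.hist[page] = (self.clock, -math.inf)
        return "miss"
```

The point of the second timestamp is scan resistance: a page read once by a large scan never acquires a second reference, so it is preferred for eviction over a page that has been touched twice, even if the scanned page was touched more recently.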
Now, for your second question: the log caches are not really exposed, so they can't be changed or tuned, as they are part of SQLOS's internal memory structures. Quoting from an answer on the MSDN forums:
Essentially each database has a log buffer/cache for its transaction log, which is a small contiguous memory allocation, 60K in size, used by active transactions and flushed when a transaction commits or when the buffer space fills up. Multiple active transactions can be interleaved in the log buffer, and if any one transaction commits, the buffer memory is written to the transaction log file by the log writer. If the space fills up, the contents get flushed to disk, allowing the buffer/cache memory to be reused. This is why log IO can range from 512 bytes to 64K (it's actually 60K), but it is always sequential, so varying write sizes don't matter that much. The format of the log records is different from data pages, as documented in BOL (http://msdn.microsoft.com/en-us/library/ms190925.aspx), so the size of a log record is variable, but the size of the log buffer/cache is fixed.
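The flush-on-commit-or-when-full behavior described in that quote can be modeled with a toy Python sketch. The names and structure are mine; only the 60K size and the two flush triggers come from the quote.

```python
class LogBuffer:
    """Toy model of the per-database log cache described above.

    Records from interleaved transactions append to one 60K buffer;
    the buffer is flushed to the log file when any transaction commits
    or when the buffer fills up.
    """
    SIZE = 60 * 1024   # 60K, per the quote above

    def __init__(self):
        self.buffer = []    # pending (txn_id, payload) records
        self.used = 0
        self.flushes = []   # each flush = one batch of records written

    def append(self, txn_id, payload):
        if self.used + len(payload) > self.SIZE:
            self._flush()   # buffer full: write it out, reuse the space
        self.buffer.append((txn_id, payload))
        self.used += len(payload)

    def commit(self, txn_id):
        self.append(txn_id, b"COMMIT")
        self._flush()       # a commit forces the buffer to disk

    def _flush(self):
        if self.buffer:
            self.flushes.append(self.buffer)
            self.buffer = []
            self.used = 0
```

Note that a commit by one transaction flushes everything in the buffer, including records from other, still-active transactions, which matches the interleaving the quote describes.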
Best Answer
In this context, 'multi-page allocations' means SQL Server needs more than 8K of memory in a single allocation, so it allocates multiple 8K pages to hold that data.
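As a back-of-the-envelope illustration (hypothetical helper names, not an actual SQL Server API), the distinction is just whether a request fits in one 8K page:

```python
PAGE = 8 * 1024   # SQL Server's 8K page size

def pages_needed(nbytes):
    """Number of 8K pages a single allocation of `nbytes` consumes."""
    return max(1, -(-nbytes // PAGE))   # ceiling division

def is_multi_page(nbytes):
    # A request larger than one 8K page becomes a multi-page allocation.
    return pages_needed(nbytes) > 1
```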