If you want to size innodb_buffer_pool_size appropriately, run this:
SELECT CEILING(SUM(data_length+index_length)/POWER(1024,2)) RIBPS
FROM information_schema.tables WHERE engine='InnoDB';
This will give you the maximum amount of InnoDB Buffer Pool space needed for your current data set, in MB.
Given the amount of data and index pages in your dataset, an 8MB Buffer Pool would just cause accessed data and index pages to rotate out as new pages are loaded. With the value from that RIBPS query (RIBPS stands for Recommended InnoDB Buffer Pool Size), all of InnoDB can be accessed from RAM. Making the Buffer Pool bigger than that would just waste RAM.
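If you expect the data set to grow, it is common to pad that number. A sketch of a padded version, reported in GB (the 60% headroom factor here is an illustrative assumption; adjust it to your own growth rate):
SELECT CEILING(Total_InnoDB_Bytes * 1.6 / POWER(1024,3)) RIBPS_GB
FROM (SELECT SUM(data_length + index_length) Total_InnoDB_Bytes
      FROM information_schema.tables WHERE engine='InnoDB') A;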
Let's say that you ran the first query and got the following:
mysql> SELECT CEILING(SUM(data_length+index_length)/POWER(1024,2)) RIBPS
-> FROM information_schema.tables WHERE engine='InnoDB';
+-------+
| RIBPS |
+-------+
|  1493 |
+-------+
Take that number and place it in /etc/my.cnf
[mysqld]
innodb_buffer_pool_size=1493M
and restart mysql.
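After the restart, you can verify that the new value took effect (a quick sanity check; innodb_buffer_pool_size is reported in bytes):
mysql> SHOW VARIABLES LIKE 'innodb_buffer_pool_size';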
I have discussed this before.
Give it a try!!!
Some details to clear things up
Based on that MSDN page on server memory settings, it's a good practice to keep the default values for SQL Server's min and max memory to keep it dynamic.
No, it is not written that it is a good practice;
it says it is recommended.
And I am sure Microsoft Books Online cannot call it a good practice, because environments vary and what is good for one environment might not suit another. It is generally good to have a restriction on something that is heavily consumed, even when it is managed efficiently. I consider it a good practice to define max server memory on a system, because it restricts the buffer pool and tells SQL Server how large the buffer pool can grow at most, although SQL Server can consume memory outside the buffer pool/max server memory setting if it makes heavy use of third-party DLLs, extended stored procedures, and linked servers.
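As a sketch of how you would set that cap with sp_configure (the 8192 MB figure is just an illustrative assumption; size it to your server):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- cap max server memory at 8 GB (illustrative value)
EXEC sp_configure 'max server memory (MB)', 8192;
RECONFIGURE;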
The page file is used by Windows to hold temporary data that is swapped in and out of physical memory in order to provide a larger virtual memory set. Page file usage largely depends on how much memory the OS is committing, and it changes according to the min and max values set. If you want to monitor the page file, you should rely on Perfmon counters. Please read this MSDN blog:
Memory: Committed Bytes -- Number of bytes of virtual memory that have been committed. This does not necessarily represent page file usage; it represents the amount of page file space that would be used if the process were completely made nonresident.
Memory: Commit Limit -- Number of bytes of virtual memory that can be committed without having to extend the paging files.
Paging File: % Usage -- Percentage of the paging file committed.
Paging File: % Usage Peak -- Highest percentage of the paging file committed.
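From inside SQL Server itself you can also get a quick view of process memory and low-memory signals via the sys.dm_os_process_memory DMV (a minimal sketch, trimmed to the relevant columns):
SELECT physical_memory_in_use_kb,
       locked_page_allocations_kb,
       page_fault_count,
       memory_utilization_percentage,
       process_physical_memory_low,
       process_virtual_memory_low
FROM sys.dm_os_process_memory;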
Also, why should you have a min value? I know that too low a value can prevent SQL Server from starting, and I know that SQL Server keeps stuff in memory, so it doesn't try to release its cache.
Min server memory is the minimum amount of memory available to the SQL Server memory manager for an instance of SQL Server. It comes into play when Windows is facing memory pressure: in that case Windows raises the low memory notification flag, SQLOS responds by asking SQL Server to trim its memory consumption and free its caches, and SQL Server non-preemptively trims down to its min server memory value. If the pressure is too high, SQL Server might not be able to react fast enough, and the SQL Server process would be paged out. Remember, min server memory does not mean that SQL Server will take at least that much memory when it starts; if it is not required, SQL Server will start up consuming much less than min server memory.
Another use of min server memory is when you do not want SQL Server to be paged out to disk because of a misbehaving OS or third-party driver. You set max server memory and min server memory to almost the same value and grant the SQL Server service account the Locked Pages in Memory privilege. In this case, even if the OS faces a memory crunch, SQL Server will trim down at most to min server memory, and if the OS memory crunch persists, other OS processes will be paged out and become extremely slow. But this is a bad thing to do and I would not recommend it.
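For completeness, a sketch of that (deliberately not recommended) configuration, with illustrative numbers for a box where you want to pin SQL Server at roughly 16 GB:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- pin min and max close together; only effective against paging if the
-- service account also holds the Locked Pages in Memory privilege
EXEC sp_configure 'min server memory (MB)', 15360;
EXEC sp_configure 'max server memory (MB)', 16384;
RECONFIGURE;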
From my research it doesn't sound like anything terrible can really happen from adjusting that property, but it also generally doesn't need to be adjusted, because 30 seconds is typically a good value for an Extended Events session. You can read more about the property here, but basically its purpose is to set a threshold for the maximum amount of time captured events stay in the in-memory buffer. Oftentimes the captured events are written to their destination before hitting that threshold, so it's more of a catch-all limit. Aside from that threshold, the buffer also gets flushed to the destination once the buffer is full.
Ultimately, the reasoning behind having the buffer work this way, rather than instantly writing to the destination every time, is that I/O is less performant to write to than memory. Constant writes from the buffer to disk (e.g. a file, in your case) could be bottlenecked by and/or cause bottlenecks on the disk. Doing batch writes to disk is most performant when there's a lot of data, and when there's not a lot of data the MAX_DISPATCH_LATENCY threshold will handle writing your events to the file.
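For reference, this is where the setting lives in a session definition (a minimal sketch; the session name, event, and file name are made up for illustration):
CREATE EVENT SESSION capture_demo ON SERVER
ADD EVENT sqlserver.sql_statement_completed
ADD TARGET package0.event_file (SET filename = N'capture_demo.xel')
WITH (MAX_DISPATCH_LATENCY = 30 SECONDS); -- flush buffered events at least every 30 seconds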