I believe you are seeing the symptoms of an issue with Windows 2003 requiring contiguous memory, which causes running processes, including Analysis Services, to trim their working set memory. With large amounts of memory allocated, this trim can take a significant amount of time to complete, and while it is running new allocations are blocked, causing the process to stall.
This issue is fixed in Windows 2008 onwards. This support article is for SQL Server but has all the links to the relevant pages: http://support.microsoft.com/kb/918483. Note in particular this quote:
In Windows Server 2008, the allocation of physically contiguous memory
is greatly enhanced. Requests to allocate contiguous memory are much
more likely to succeed because the memory manager now dynamically
replaces pages, typically without trimming the working set or
performing I/O operations. In addition, many more types of pages—such
as kernel stacks and file system metadata pages, among others—are now
candidates for replacement. Consequently, more contiguous memory is
generally available at any given time. In addition, the cost to obtain
such allocations is greatly reduced.
I believe that with this issue you will either have to upgrade to Windows 2008 onwards or test granting Lock Pages In Memory to the Analysis Services account. I have also had issues with memory allocation on Windows 2003 that required a reboot to clear.
This SQLCAT blog compares the changes between Windows Server 2003 & 2008.
This blog mentions some of the issues of LPIM with Analysis Services (near the end).
Also see this knowledge base article on working set trimming causing performance problems, and this article showing how to identify drivers that allocate contiguous memory.
Looking at the my.ini, I have two suggestions.
SUGGESTION #1
I would bump up the following settings in your my.ini
sort_buffer_size=4M
join_buffer_size=4M
This will let some joins and sorts stay in memory. Of course, once a JOIN or an ORDER BY needs more than 4M, it will page to disk as a MyISAM table.
If you cannot log in as root@localhost, then restart mysql with
C:\> net stop mysql
C:\> net start mysql
If you can log in as root@localhost, you do not have to restart mysql to use these settings.
Just run this in the MySQL client:
SET @FourMegs = 1024 * 1024 * 4;
SET GLOBAL sort_buffer_size = @FourMegs;
SET GLOBAL join_buffer_size = @FourMegs;
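One caveat worth keeping in mind: sort_buffer_size and join_buffer_size are allocated per session (and a join buffer can be allocated per joined table in a query), so raising them multiplies across connections. A quick back-of-envelope sketch; the connection count here is a purely hypothetical assumption:

```python
# Rough worst-case memory from per-session buffers.
# Assumes max_connections = 200 (hypothetical -- check SHOW VARIABLES)
# and one sort buffer plus one join buffer in use per connection.
MB = 1024 * 1024

sort_buffer_size = 4 * MB
join_buffer_size = 4 * MB
max_connections = 200  # hypothetical value

worst_case = max_connections * (sort_buffer_size + join_buffer_size)
print(f"Worst case: {worst_case / MB:.0f} MB")  # prints "Worst case: 1600 MB"
```

Most connections will not hold both buffers at once, so the real number is far lower, but the multiplier is why these buffers should not be raised carelessly on a busy server.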
SUGGESTION #2
Since your data is on Drive D:, you may still have disk I/O on Drive C:.
Please run this query:
mysql> show variables like 'tmpdir';
+---------------+-----------------+
| Variable_name | Value |
+---------------+-----------------+
| tmpdir | C:\Windows\TEMP |
+---------------+-----------------+
1 row in set (0.00 sec)
Since I run mysql on my Desktop with defaults, my temp tables are being written to Drive C:. If Drive D: is a better disk than Drive C:, perhaps you can map temp tables to Drive D: by setting tmpdir in my.ini as follows:
tmpdir="D:/DBs/"
You will have to restart mysql since tmpdir is not a dynamic variable.
Give it a Try !!!
UPDATE 2013-11-29 10:09 EST
SUGGESTION #3
Given the fact that MySQL is running in Windows and you cannot touch the queries in the core package, I have two ideas that must be done together.
IDEA #1 : Move the Database to a Linux Machine
You should be able to
- Setup a Linux machine
- Install MySQL on the Linux machine
- Enable Binary Logging for MySQL in Windows
- mysqldump the database to a text SQL file
- Load SQL file to MySQL running in Linux
- Setup replication from MySQL/Windows to MySQL/Linux
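A minimal sketch of the config needed for the binary-logging and replication steps above; the server IDs and log name are assumptions, so adjust them to your environment:

```ini
# Windows master (my.ini): enable binary logging with a unique server-id
[mysqld]
log-bin=mysql-bin
server-id=1

# Linux slave (my.cnf): any server-id distinct from the master
[mysqld]
server-id=2
```

When taking the dump, something like `mysqldump --all-databases --master-data=2` will record the master's binlog coordinates in the dump file, which you will need for the CHANGE MASTER TO statement on the Linux side.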
IDEA #2 : Reconfigure Moodle to point to the Linux Machine
Moodle was designed for LAMP in the first place. Just change the config files to point to the Linux machine instead of localhost.
Here is a link to an old Moodle 2.3 doc on setting up MySQL : http://docs.moodle.org/23/en/Installing_Moodle#Create_an_empty_database
I am sure the latest docs are available as well.
What is the Point of Moving the Database to Linux ???
How does this help the temp table situation ???
I would then suggest setting up a RAM disk as the target folder for your temp tables.
Temp table creation will still happen, but it will be written to RAM rather than disk, reducing disk I/O.
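On the Linux machine, a tmpfs mount can serve as that RAM disk; the mount point and size below are assumptions, not recommendations:

```ini
# /etc/fstab: a RAM-backed filesystem for MySQL temp tables
# tmpfs  /var/mysqltmp  tmpfs  rw,size=2G,uid=mysql,gid=mysql,mode=0750  0  0

# my.cnf on the Linux machine
[mysqld]
tmpdir=/var/mysqltmp
```

Size the tmpfs generously: a temp table that outgrows tmpdir will fail the query with a "table is full" error rather than spill elsewhere.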
UPDATE 2013-11-29 11:24 EST
SUGGESTION #4
I would suggest revisiting SUGGESTION #2 with a fast RAID-0 disk (32+ GB), configuring it as Drive T: (T for Temp). After installing such a disk, add this to my.ini:
[mysqld]
tmpdir="T:\"
A MySQL restart would be required, using
net stop mysql
net start mysql
BTW I said RAID-0 on purpose so that you can get better write performance than a RAID-1 or RAID-10. A tmp table disk is not something I would make redundant.
Without optimizing the queries as @RaymondNijland has been commenting on, you cannot reduce the temp table creation count in any way. SUGGESTION #3 and SUGGESTION #4 offer faster temp table creation and temp table I/O as the only alternatives.
Best Answer
As others have said, there is no magic formula that will tell you how your database server will respond to a sudden "increase the volume by double the current one" (volume of what exactly, by the way).
Two things can help you get a general idea (neither of which you have, apparently):
Historical (over many months) data that show you the correlation between your server load, the volume of data, and the application workload (concurrent connections, user sessions, registered users, daily orders, or whatever makes sense in your case). "[P]erformance data for last 15 days" is completely useless.
A performance testing environment configured to be meaningfully similar to your production. In this day and age, spinning up a virtualized test environment on one of the cloud providers and running it for a week costs peanuts compared to the price you will pay for a production outage caused by unexpected performance problems. Performance burn-down tests should be part of your release process.
If you start preparing both of these tomorrow, in half a year you will be able to answer your own question with some degree of confidence.
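To illustrate what the first item (months of historical data) buys you, here is a toy extrapolation with entirely invented numbers: fit a least-squares line of server load against daily order volume and project the load at double today's volume. Real capacity planning needs far more rigor than this, but even a crude estimate like it is impossible without the historical data.

```python
# Toy extrapolation: given months of (daily_orders, avg_cpu_pct) samples,
# fit a least-squares line and project CPU at double today's order volume.
# All numbers are invented for illustration.
samples = [(1000, 22.0), (1500, 30.5), (2000, 41.0),
           (2500, 50.5), (3000, 61.0)]  # (orders/day, avg CPU %)

n = len(samples)
sx = sum(x for x, _ in samples)
sy = sum(y for _, y in samples)
sxx = sum(x * x for x, _ in samples)
sxy = sum(x * y for x, y in samples)

slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

projected = slope * (2 * 3000) + intercept
print(f"Projected CPU at 6000 orders/day: {projected:.1f}%")  # prints 119.4%
```

Note that the projection here lands above 100%, i.e. with this (made-up) trend, doubling the volume would saturate the box. That is exactly the kind of answer the question is asking for, and it only falls out of having the data.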