Hopefully, I'm missing or misinterpreting something, because I don't see an explicit SELECT ... LOCK IN SHARE MODE
as a viable way to approach this.
Using SELECT ... LOCK IN SHARE MODE
explicitly sets a shared (S) lock on the row(s) matched, but other threads can also take S locks on the same row(s) at the same time, with no feedback from the server indicating whether anyone else already holds such a lock on the row.
So, let's say two threads each take a shared lock on the row, then both of them issue an UPDATE.
One unlucky thread sees this:
ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction
It's a deadlock because two threads are asking for mutually exclusive things. Thread #1 is asking for its S lock to be escalated to an X (exclusive) lock so it can update the row... but this can't be granted, because thread #2 still holds an S lock on the row, which prevents #1 from getting its X lock... and thread #2 is likewise trying to escalate to an X lock, which is blocked by thread #1's S lock.
You might assume that the first thread to request its lock would be the one that gets to update the row, but that's not the case, either. My testing shows that in this case the winning thread is the one whose UPDATE is noticed first, not the one that acquired its shared lock first. This is sensible, since that thread has done slightly more work than the other, and, after all, both threads had previously only locked the row in share mode; neither had yet been granted anything exclusive.
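The shape of this conflict can be modeled abstractly. Below is a toy Python model of the shared-lock upgrade deadlock (this is not how InnoDB implements locking or deadlock detection; the per-thread timeout merely stands in for InnoDB picking a deadlock victim, and the names are made up):

```python
import threading

# Toy lock that tracks shared holders; upgrading to exclusive succeeds
# only once you are the sole remaining shared holder.
class SharedLock:
    def __init__(self):
        self.cv = threading.Condition()
        self.holders = set()

    def acquire_shared(self, who):
        with self.cv:
            self.holders.add(who)

    def try_upgrade(self, who, timeout):
        # Block until every *other* shared holder is gone, or give up
        # (the timeout stands in for deadlock detection).
        with self.cv:
            return self.cv.wait_for(lambda: self.holders == {who}, timeout)

    def release(self, who):
        with self.cv:
            self.holders.discard(who)
            self.cv.notify_all()

lock = SharedLock()
both_hold_shared = threading.Barrier(2)
results = {}

def txn(name, patience):
    lock.acquire_shared(name)          # SELECT ... LOCK IN SHARE MODE
    both_hold_shared.wait()            # both now hold shared locks
    if lock.try_upgrade(name, timeout=patience):   # UPDATE wants exclusive
        results[name] = "updated"
    else:
        results[name] = "deadlock victim; rolled back"
    lock.release(name)

# Different "patience" values make the victim deterministic here;
# real InnoDB chooses its victim by its own heuristics.
t1 = threading.Thread(target=txn, args=("thread1", 0.2))
t2 = threading.Thread(target=txn, args=("thread2", 2.0))
t1.start(); t2.start(); t1.join(); t2.join()
print(results["thread1"])  # → deadlock victim; rolled back
print(results["thread2"])  # → updated
```

Once thread1 gives up and releases its shared lock, thread2's upgrade can finally be granted, which mirrors InnoDB rolling back one transaction so the other can proceed.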
But... with all of that said, it seems to me that there's a much simpler approach to this that would be far easier on your server.
If visitors to your site are trying to claim one specific timeslot, identified by a unique timeslot_id, then I'm not sure why this approach wouldn't work:
START TRANSACTION;
UPDATE timeslot_guest_map SET guest_id = ? WHERE timeslot_id = ? AND guest_id IS NULL;
SELECT ROW_COUNT();
COMMIT;
If ROW_COUNT() returns 1, then you got the time slot. If ROW_COUNT() returns 0, then sorry, someone else got there first. In this setup, InnoDB implicitly handles the row locking, and thread #2 will block until thread #1 commits, at which point guest_id will no longer be null on that row, so it will not be updated.
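To see the pattern outside MySQL, here is a toy sketch using Python's standard sqlite3 module (the in-memory table and the timeslot id 42 are made up for the illustration; cursor.rowcount plays the role of ROW_COUNT()):

```python
import sqlite3

# Toy schema standing in for timeslot_guest_map.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE timeslot_guest_map"
           " (timeslot_id INTEGER PRIMARY KEY, guest_id INTEGER)")
db.execute("INSERT INTO timeslot_guest_map (timeslot_id, guest_id)"
           " VALUES (42, NULL)")

def claim(guest_id, timeslot_id):
    # The atomic claim: only succeeds while the slot is still unclaimed.
    cur = db.execute(
        "UPDATE timeslot_guest_map SET guest_id = ? "
        "WHERE timeslot_id = ? AND guest_id IS NULL",
        (guest_id, timeslot_id))
    db.commit()
    return cur.rowcount == 1   # sqlite3's analogue of ROW_COUNT()

print(claim(guest_id=7, timeslot_id=42))   # → True  (first guest wins)
print(claim(guest_id=8, timeslot_id=42))   # → False (slot already taken)
```

The second caller loses not because of any explicit locking in the application, but because the WHERE clause no longer matches once the first UPDATE has committed.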
On the other hand, if the "timeslots" are actually identical, such as might be the case when 150 "general admission" seats were available in an auditorium, the approach could be like this:
START TRANSACTION;
UPDATE timeslot SET user_id = ? WHERE user_id IS NULL LIMIT 1;
SELECT ROW_COUNT();
COMMIT;
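The general-admission variant can be sketched the same way. Since stock SQLite builds lack UPDATE ... LIMIT, this toy version (names again made up) picks one free row with a subquery, which expresses the same "claim the first available row" idea:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE timeslot (id INTEGER PRIMARY KEY, user_id INTEGER)")
# Two identical "general admission" seats.
db.execute("INSERT INTO timeslot (user_id) VALUES (NULL)")
db.execute("INSERT INTO timeslot (user_id) VALUES (NULL)")

def claim_any(user_id):
    # Claim whichever free row the subquery finds first.
    cur = db.execute(
        "UPDATE timeslot SET user_id = ? "
        "WHERE id = (SELECT id FROM timeslot WHERE user_id IS NULL LIMIT 1)",
        (user_id,))
    db.commit()
    return cur.rowcount == 1

print([claim_any(u) for u in (1, 2, 3)])   # → [True, True, False]
```

With two seats available, the first two callers succeed and the third finds no row left to match, just as thread #2 in the MySQL description moves on to the next available row or comes up empty.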
Here, thread #1 implicitly gets an X lock on the first available row, updates it, and commits. Thread #2's UPDATE blocks waiting for its own X lock on that row, because of the X lock held by thread #1... but when thread #1 commits, thread #2 gets its lock, examines the row, and finds that it no longer matches... so the next available row gets matched, locked, and updated, if there is one.
The workload involved in either of these scenarios sounds to me like it should be trivial for MySQL, assuming you take care to properly commit (or roll back) your transactions. Otherwise, you'll end up with a backlog of threads waiting on locks held by abandoned transactions.
As @Valor suggested, you're far more likely to have problems providing your web server with the resources it will need in order to handle the concurrent connections... as was likely the case here and here, to cite a couple of recent examples of MySQL being the victim of memory exhaustion on the server, not the perpetrator.
I ran headlong into this one 6 months ago.
The only sane thing you can do is the following two steps:
STEP 01
Add this to my.cnf under the [mysqld] group header:
[mysqld]
sql_mode = ''
STEP 02
Login as root@localhost and run this:
mysql> SET GLOBAL sql_mode = '';
Restarting mysql is not required.
STEP 03 (Optional)
STEP 02 only affects connections made after the change, so you may have to kill all current DB connections; let the app reconnect, and it will begin with sql_mode blank. While restarting mysql is not needed for incoming connections, you could just run service mysql restart
instead of custom scripting the killing of DB connections.
If you want to custom script the killing of your connections, please see some of my past posts for examples on how to kill many connections:
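For illustration, the usual shape of such a script is to generate one KILL statement per connection ID pulled from information_schema.processlist, then feed the statements back into the mysql client. A minimal Python sketch of just the text transformation (the IDs and the user filter are stand-ins; in real use they would come from a live processlist query):

```python
# Stand-in connection IDs; in practice, fetch them with something like:
#   SELECT id FROM information_schema.processlist WHERE user = 'app'
connection_ids = [12, 15, 21]

# Turn each ID into a KILL statement for the mysql client.
kill_statements = [f"KILL {conn_id};" for conn_id in connection_ids]
print("\n".join(kill_statements))
# prints:
#   KILL 12;
#   KILL 15;
#   KILL 21;
```

The generated statements would then be piped back into mysql as root.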
STEP 04 (Optional, Last Resort)
If you would like to make code changes, the only change would be to run
SET sql_mode = '';
as the first command for the connection. Then, you could run your regular queries thereafter.
GIVE IT A TRY !!!
NOTE #1 : The first two steps are more than enough for your app if you close your connection right after running your query. If your connections are persistent, then you will need STEP 03.
NOTE #2 : If you are not allowed to change the configuration, you could skip STEP 01 and STEP 02 and just run STEP 03 and STEP 04.
You just asked
The answer is yes, because of a bottleneck. Where is this bottleneck?
Please have a look at this InnoDB architecture diagram (courtesy of Percona CTO Vadim Tkachenko).
Please note the lower right-hand corner of the Memory Side of InnoDB: it's the Log Buffer. Where is log information flushed? Look at the lower left-hand corner of the Disk Side of InnoDB: it's the Redo Logs.
In order to improve InnoDB's write performance, please note the following suggestions:
SUGGESTION #1
By default, the Log Buffer (sized by innodb_log_buffer_size) is 8M (8388608). Please increase innodb_log_buffer_size to 64M.
SUGGESTION #2
By default, each redo log (sized by innodb_log_file_size) is 48M (50331648). Please increase innodb_log_file_size to 1G.
HOW TO IMPLEMENT
Step 01 : Add these options to /etc/my.cnf (or my.ini for Windows)
Step 02 : Login to MySQL and run
Step 03 : Shutdown MySQL (or from the Windows Command Line as Administrator)
Step 04 : Rename the redo log files (or from the Windows Command Line as Administrator)
Step 05 : Start MySQL (or from the Windows Command Line as Administrator)
NOTE: This last step may take 2-3 minutes because it creates two new log files of 1G each.
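Taken together, the steps above can be sketched as follows. This is a sketch under assumptions, not a verbatim procedure: it assumes a Linux host where the service is named mysql, the datadir is /var/lib/mysql, and the redo logs are the default ib_logfile0 and ib_logfile1; adjust names and paths to your installation.

```
# Step 01 : add to /etc/my.cnf (my.ini on Windows) under [mysqld]
#     innodb_log_buffer_size = 64M
#     innodb_log_file_size = 1G
# Step 02 : login to MySQL and request a full flush on the next shutdown
mysql -uroot -p -e "SET GLOBAL innodb_fast_shutdown = 0;"
# Step 03 : shutdown MySQL  (Windows: net stop mysql, as Administrator)
service mysql stop
# Step 04 : rename the redo log files  (Windows: ren, as Administrator)
mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile0.bak
mv /var/lib/mysql/ib_logfile1 /var/lib/mysql/ib_logfile1.bak
# Step 05 : start MySQL; new 1G redo logs are created on startup
service mysql start
```

Once the instance comes up clean with the new logs, the .bak files can be deleted.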
WHY CHANGE THESE SETTINGS?
When SSDs perform random writes into small redo logs, the wear and tear is localized to one section of the disk. Making the redo logs bigger spreads out the writes from the log buffer. Making the log buffer bigger reduces the frequency of writes, in exchange for an increased amount of data to write per flush.
If an entire MySQL instance lives on an SSD, you must either do this kind of tuning or go with a hybrid disk layout (see my old post MySQL on SSD - what are the disadvantages?).
MORE INFORMATION
Please read MySQL Documentation on Optimizing InnoDB Disk I/O for more InnoDB Tuning Options.