In MySQL, you can use the --safe-updates option (also known as --i-am-a-dummy) to limit the number of rows updated per query.
http://dev.mysql.com/doc/refman/5.0/en/mysql-tips.html#safe-updates
Is there such a thing in MS SQL Server?
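For reference, on the MySQL side this mode can also be toggled per session from SQL, without restarting the client (a minimal sketch):

```sql
-- MySQL: enable safe-updates for the current session.
-- UPDATE/DELETE statements that have no key-based WHERE clause
-- and no LIMIT are then rejected with an error.
SET SESSION sql_safe_updates = 1;
```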
Tags: mysql, sql-server-2008
While I don't know of any tool that provides this information specifically, Percona Server does have a setting (log_slow_verbosity=full) that enables extended statistics in the slow log, along with microsecond granularity. The data does not explicitly include CPU utilization, but it does provide a number of additional statistics about the internals of MySQL and InnoDB that may help you determine whether the problem is CPU-related or the result of poor optimization.
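As a sketch, enabling this on Percona Server might look like the following my.cnf fragment (the log path and long_query_time threshold are assumptions, not part of the original answer):

```ini
# my.cnf (Percona Server) -- enable the extended slow query log
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time     = 0.5    # seconds; supports sub-second values
log_slow_verbosity  = full   # Percona-only: extended per-query statistics
```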
This is an example of the additional information in the slow log:
# Time: 120114 6:34:33
# User@Host: user[user] @ [10.10.10.10]
# Thread_id: 28313080 Schema: mydb Last_errno: 0 Killed: 0
# Query_time: 0.588882 Lock_time: 0.000068 Rows_sent: 3 Rows_examined: 183839 Rows_affected: 0 Rows_read: 100
# Bytes_sent: 121 Tmp_tables: 0 Tmp_disk_tables: 0 Tmp_table_sizes: 0
# InnoDB_trx_id: 9903E4DB1
# QC_Hit: No Full_scan: No Full_join: No Tmp_table: No Tmp_table_on_disk: No
# Filesort: No Filesort_on_disk: No Merge_passes: 0
# InnoDB_IO_r_ops: 0 InnoDB_IO_r_bytes: 0 InnoDB_IO_r_wait: 0.000000
# InnoDB_rec_lock_wait: 0.000000 InnoDB_queue_wait: 0.000000
# InnoDB_pages_distinct: 11359
Also, pt-query-digest from the Percona Toolkit understands this additional information and will summarize the slow query log for you and produce output, per query, that looks like this:
# Query 1: 0.09 QPS, 0.07x concurrency, ID 0x20112A1CCDBDBA4E at byte 4337645
# Scores: Apdex = 1.00 [1.0], V/M = 0.01
# Query_time sparkline: | ^ |
# Time range: 2012-01-14 06:34:33 to 08:51:57
# Attribute pct total min max avg 95% stddev median
# ============ === ======= ======= ======= ======= ======= ======= =======
# Count 63 775
# Exec time 59 539s 522ms 855ms 695ms 816ms 87ms 640ms
# Lock time 18 60ms 43us 498us 76us 103us 36us 66us
# Rows sent 1 3.15k 0 38 4.17 12.54 4.90 2.90
# Rows examine 22 135.93M 179.38k 179.90k 179.60k 174.27k 0 174.27k
# Rows affecte 0 0 0 0 0 0 0 0
# Rows read 0 3.43k 0 118 4.53 20.43 11.93 0.99
# Bytes sent 1 103.07k 82 576 136.18 246.02 63.85 118.34
# Merge passes 0 0 0 0 0 0 0 0
# Tmp tables 0 0 0 0 0 0 0 0
# Tmp disk tbl 0 0 0 0 0 0 0 0
# Tmp tbl size 0 0 0 0 0 0 0 0
# Query size 5 222.51k 294 294 294 294 0 294
# InnoDB:
# IO r bytes 0 0 0 0 0 0 0 0
# IO r ops 0 0 0 0 0 0 0 0
# IO r wait 0 0 0 0 0 0 0 0
# pages distin 63 8.44M 10.66k 12.72k 11.15k 11.34k 357.67 10.80k
# queue wait 0 0 0 0 0 0 0 0
# rec lock wai 0 0 0 0 0 0 0 0
# String:
# Databases mydb
# Hosts
# InnoDB trxID 9903E4DB1 (1/0%), 9903E4E09 (1/0%)... 773 more
# Last errno 0
# Users user
# Query_time distribution
# 1us
# 10us
# 100us
# 1ms
# 10ms
# 100ms ################################################################
# 1s
# 10s+
# Tables
# SHOW TABLE STATUS FROM `mydb` LIKE 'table'\G
# SHOW CREATE TABLE `mydb`.`table`\G
# EXPLAIN /*!50100 PARTITIONS*/
...followed by the query itself.
I have blogged a bit about using pt-query-digest and other tools for finding performance bottlenecks (and promised more blogging, but haven't gotten around to it yet).
Two other tools that may help you are pt-stalk and pt-collect. They allow you to set a threshold on a running server variable, such as the number of concurrently running threads jumping up (Threads_running), and then collect a wealth of data about the running system (CPU, I/O, MySQL statistics, the running processlist, and so on) so you can perform a thorough post-mortem.
The additional statistics are only available in Percona Server, but these tools will all work with MySQL GA.
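As an illustrative sketch of the workflow described above (the log path and the Threads_running threshold are assumptions):

```shell
# Summarize the slow query log, worst queries first
pt-query-digest /var/log/mysql/mysql-slow.log > digest.txt

# Watch the server and collect diagnostics whenever
# Threads_running stays above the threshold
pt-stalk --variable Threads_running --threshold 25
```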
While @ohlin's answer has a decent workaround, it still requires setting up an entire MySQL instance with the undo logs configured the way you want. Then you have to fail over the application to the new MySQL instance. That still introduces a few minutes of logical downtime from the application's perspective, because you must change the IP addresses in the code or update DNS.
Can I move the undo log outside of ibdata1 in MySQL 5.6 on an existing server?
The answer is definitely no. Why? Consider the possible locations for the undo logs.
By default, the undo logs reside in the system tablespace file known as ibdata1. Undo logs have been the cause of the greatest amount of uncontrolled growth in the face of many transactions (see my post How can Innodb ibdata1 file grows by 5X even with innodb_file_per_table set?).
To relocate ibdata1, you must configure innodb_data_home_dir and innodb_data_file_path. They are global variables, but they are not dynamic. Since a MySQL restart would be required to change these settings, the undo logs buried inside ibdata1 cannot be moved for a live MySQL instance.
There are options to configure the undo logs on different disks:
Both innodb_undo_tablespaces and innodb_undo_directory are global variables, but they are not dynamic either. Since a MySQL restart would be required to change these settings, the undo logs outside ibdata1 cannot be moved for a live MySQL instance.
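A hedged sketch of what such a configuration would look like (the directory path and tablespace count below are illustrative assumptions; in MySQL 5.6 these must be set before the instance is first initialized):

```ini
# my.cnf -- must be in place BEFORE the data directory is initialized;
# neither variable is dynamic, so they cannot be changed on a live server
[mysqld]
innodb_undo_directory   = /ssd/mysql-undo   # separate (faster) disk
innodb_undo_tablespaces = 4                 # number of undo tablespace files
```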
When I read this question, it reminded me of a blog post from a Facebook DB engineer about configuring InnoDB on different disks.
I referred to his blog in my post How do I determine how much data is being written per day through insert, update and delete operations?
Even with this idea, it requires configuring the undo logs in a different disk location from the very beginning. Doing this after MySQL is already running is simply not possible.
Best Answer
There may or may not be such a thing in MS SQL Server, but why would you want it?

There is already a way to limit what an UPDATE affects (the WHERE clause), and there is already a mechanism to safeguard your data when making changes (transactions).

Combining the two gives the general-case solution: do your updates in a transaction, with appropriate WHERE clauses to limit what they touch, and make sure the results look right before you commit.
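A minimal T-SQL sketch of that pattern (the table, column, and values are hypothetical):

```sql
BEGIN TRANSACTION;

UPDATE dbo.Accounts
SET    Balance = Balance - 100
WHERE  AccountId = 42;          -- the WHERE clause limits what is touched

-- Sanity check: how many rows did the UPDATE affect, and do they look right?
SELECT @@ROWCOUNT AS RowsAffected;
SELECT * FROM dbo.Accounts WHERE AccountId = 42;

-- If the results look right:
COMMIT TRANSACTION;
-- Otherwise, undo everything:
-- ROLLBACK TRANSACTION;
```

Until you issue COMMIT, nothing is permanent; a ROLLBACK restores the table exactly as it was, which is a stronger safety net than any row-count cap.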