No: the local instance can't know when remote tables are updated. Also, I suggest you use the SPIDER storage engine instead of CONNECT.
You cannot have "concurrent" updates of a single row. The best you can do is to make them fast.
If you don't need the value, do it in a single statement:
UPDATE user SET files = files + 1 WHERE id = 1
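The reason the single statement works is that the read-modify-write happens inside the storage engine under a row lock, so concurrent callers serialize correctly and no increment is lost. A sketch, assuming a `user` table like the one above:

```sql
-- Racy version: two sessions can both read files = 5 and both write 6.
-- SELECT files FROM user WHERE id = 1;    -- read in the application
-- UPDATE user SET files = 6 WHERE id = 1; -- write back; one increment lost

-- Atomic version: InnoDB locks the row for the duration of the statement.
UPDATE user SET files = files + 1 WHERE id = 1;

-- If you later decide you DO need the new value, MySQL has a documented
-- trick: pass the expression through LAST_INSERT_ID(), then read it back
-- in the same session without re-querying the row.
UPDATE user SET files = LAST_INSERT_ID(files + 1) WHERE id = 1;
SELECT LAST_INSERT_ID();
```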
Also, be sure that autocommit=ON is configured.
This will easily handle 100 increments per second on a spinning drive. If you need even more speed, then set
innodb_flush_log_at_trx_commit = 2
This will cut back significantly on disk hits, at the potential loss of data for up to a second (in the case of a power failure).
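In my.cnf terms this is a one-line change (section name and file location vary by distribution; this is a sketch):

```ini
[mysqld]
# Flush the InnoDB redo log to disk once per second instead of at every
# commit. The server process remains crash-safe, but a power failure or
# OS crash can lose up to ~1 second of committed transactions.
innodb_flush_log_at_trx_commit = 2
```

It can also be changed at runtime with `SET GLOBAL innodb_flush_log_at_trx_commit = 2;`.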
max_connections = 25000
is grossly unreasonable; if you get more than a few dozen connections stumbling over each other, the system will appear to 'hang'. At that point, more and more connections will be started, while the running connections run slower and slower. This is because the OS is sharing the CPU, RAM, I/O, etc., among too many threads for any of them to actually get anything completed.
If you need thousands/second, I'll provide you with some other ideas. (Note: I say "per second", not "concurrently".)
Best Answer
First of all, you should create different database users for the different access roles of your applications, even if they use the same data. That will make it easier to monitor each individual app. Even if you have to share the user name, you can create separate accounts with different source IPs (or IP ranges) to differentiate user activity.
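For example (hypothetical app names, IP ranges, and grants; adjust to what each application actually needs):

```sql
-- Separate identities per app: each one shows up distinctly in
-- SHOW PROCESSLIST, the slow query log, and performance_schema.
CREATE USER 'billing_app'@'10.0.1.%' IDENTIFIED BY '...';
CREATE USER 'report_app'@'10.0.2.%'  IDENTIFIED BY '...';

GRANT SELECT, INSERT, UPDATE ON shop.* TO 'billing_app'@'10.0.1.%';
GRANT SELECT                 ON shop.* TO 'report_app'@'10.0.2.%';
```

In MySQL an account is the combination `'user'@'host'`, so even a shared user name with different host parts counts as distinct accounts for auditing purposes.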
You have several options for query auditing that work better, run faster, and are more fine-grained than the general query log, for example:
Use the slow query log. You can change the long_query_time threshold to log only queries that take more than a certain number of seconds, or (on MariaDB and Percona Server) log only a certain percentage of all queries. The log contains the user name. This gives you more control over exactly what you want to log, and then you can use tools like pt-query-digest to summarize what was going on.
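A sketch of the relevant settings (they can also be set dynamically with SET GLOBAL; `log_slow_rate_limit` exists only in MariaDB/Percona Server, and the file path is an assumption):

```ini
[mysqld]
slow_query_log      = ON
slow_query_log_file = /var/log/mysql/slow.log
long_query_time     = 0.5    # log statements slower than 0.5 seconds
# log_slow_rate_limit = 100  # MariaDB/Percona: sample ~1 in 100 sessions
```

Then summarize with something like `pt-query-digest /var/log/mysql/slow.log`.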
Use tcpdump. In extreme cases, you can sniff your own traffic (if it is not encrypted) to identify the queries happening between two servers. As with any network capture, you can choose where you sniff and which traffic/port you capture, so it can quickly tell you what is going on: https://www.percona.com/blog/2008/11/07/poor-mans-query-logging/ There are utilities to convert the captured traffic into a more usable format, including the above-mentioned pt-query-digest tool.
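The usual pattern from that post looks roughly like this (run on the database server; the interface name `eth0` is an assumption, and tcpdump needs root):

```
# Capture MySQL traffic on port 3306 and summarize it with pt-query-digest.
sudo tcpdump -i eth0 port 3306 -s 65535 -x -nn -q -tttt \
  | pt-query-digest --type tcpdump
```

This decodes the MySQL wire protocol from the raw capture and produces the same kind of per-query report you would get from digesting the slow log.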