At first glance, it seems like the most appropriate thing to do would be to make the column storing the test result value nullable, and declare an additional column, perhaps an ENUM, for the out-of-range condition. For example:
CREATE TABLE ...
...
result_value DECIMAL(6,3) DEFAULT NULL,
out_of_range ENUM('low','high') DEFAULT NULL,
...
When a test comes back with an out-of-range result, it would seem like you wouldn't want to store anything in the `result_value` column, because when it comes to averages or other statistics the low or high threshold value would completely skew your average... and conveniently, when you take the AVG() of a data set that includes NULLs, the denominator used to calculate the average is the number of non-NULL values only... so the average of {5, NULL, NULL, 10, NULL} is 7.5, not 3.
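A quick way to see that behavior, using a throwaway table (the names here are purely illustrative):

CREATE TABLE avg_demo (result_value DECIMAL(6,3) DEFAULT NULL);
INSERT INTO avg_demo (result_value) VALUES (5), (NULL), (NULL), (10), (NULL);

SELECT AVG(result_value)   AS avg_result,     -- 7.5000000 (15 / 2, not 15 / 5)
       COUNT(*)            AS total_rows,     -- 5
       COUNT(result_value) AS non_null_rows   -- 2: NULLs are not counted
FROM avg_demo;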
Setting a flag that indicates that your result was, instead, out of range would allow you to easily tally those values separately, and an ENUM column with a small number of possible values requires only 1 byte of storage per row... the "low" or "high" would actually be stored as the byte 0x01 or 0x02, with the corresponding labels stored in the table definition only.
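If you ever want to see that underlying index rather than the label, adding zero to an ENUM converts it to its numeric value; the table name below is just a stand-in for your actual results table:

SELECT out_of_range, out_of_range + 0 AS stored_index
FROM results                          -- hypothetical table name
WHERE out_of_range IS NOT NULL;
-- 'low' comes back as 1 and 'high' as 2; the labels themselves live only in the table definition.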
When the minimum threshold is 5, storing the number 5 as the result is not meaningful for analysis, so I would think you wouldn't want to store it in `result_value` either. If you did, your analytic queries would have to take extra steps to exclude those rows from calculations.
I would suggest that you do need a table that specifies the high and low boundaries for each test, because such a table could be used to constrain the data going into the results table via triggers, blocking the insertion of out-of-range data, in addition to being valuable information when doing analysis.
"But we already know those values." The database also needs to "know" those values, because subsequent analysis will be more straightforward if those values can be joined and understood by a query.
Also, I would suggest that if there is any possibility that the sensitivity, selectivity, granularity, precision, range, or whatever the proper term might be, of any test could change (due to improved technology or any other reason), you consider each test to actually be 1 of N possible subtypes of a type of test (where initially, N is 1). If the possible range (or other property) of a result is different, then that is technically, arguably, a different test, even though it is measuring the same thing... and these are the kinds of things that are a small hassle now, and a much bigger hassle later.
"needing to add new ones easily isn't a concern."
Famous last words. Speaking of bigger hassles later, that's generally not a good position to hold, even if it seems unshakable now.
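To make the subtype idea concrete, here is a minimal sketch (again, hypothetical names) that separates "what is being measured" from "the specific version of the test that produced a result"; the high/low limits from the threshold table above would naturally live at this level:

CREATE TABLE test_type (
  test_type_id INT UNSIGNED NOT NULL PRIMARY KEY,
  name         VARCHAR(64) NOT NULL            -- what is being measured
);

CREATE TABLE test_subtype (
  test_subtype_id INT UNSIGNED NOT NULL PRIMARY KEY,
  test_type_id    INT UNSIGNED NOT NULL,
  low_limit       DECIMAL(6,3) NOT NULL,
  high_limit      DECIMAL(6,3) NOT NULL,
  FOREIGN KEY (test_type_id) REFERENCES test_type (test_type_id)
);

Result rows would then reference test_subtype_id, so a change in range, precision, or methodology becomes a new subtype rather than a silent redefinition of an existing test.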
A final word about a common design error I sometimes encounter: if you are using the FLOAT or DOUBLE data types, reconsider that decision, because they are imprecise. A DECIMAL column has a fixed precision and stores exactly the value you insert, while FLOAT and DOUBLE columns store approximate values, which can lead to problems with comparisons. This limitation is one of floating-point arithmetic in general, not of MySQL in particular; it is something you may already be familiar with, but I thought it worth mentioning.
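A tiny demonstration of the kind of surprise this causes (hypothetical table; the exact outcome can vary, but the FLOAT comparison commonly fails):

CREATE TABLE precision_demo (f FLOAT, d DECIMAL(6,3));
INSERT INTO precision_demo (f, d) VALUES (0.1, 0.1);

SELECT COUNT(*) FROM precision_demo WHERE f = 0.1;   -- often 0: the FLOAT holds only an approximation of 0.1
SELECT COUNT(*) FROM precision_demo WHERE d = 0.1;   -- 1: the DECIMAL stores exactly 0.100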
Sometimes, running FLUSH HOSTS; will clear up the host cache when you can no longer connect. Why?
According to the MySQL Documentation on FLUSH HOSTS:
Empties the host cache. You should flush the host cache if some of your hosts change IP address or if the error message Host 'host_name' is blocked occurs. (See Section B.5.2.6, “Host 'host_name' is blocked”.) When more than max_connect_errors errors occur successively for a given host while connecting to the MySQL server, MySQL assumes that something is wrong and blocks the host from further connection requests. Flushing the host cache enables further connection attempts from the host. The default value of max_connect_errors is 10. To avoid this error message, start the server with max_connect_errors set to a large value.
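Before reaching for FLUSH HOSTS, on MySQL 5.6 and later you can usually inspect the host cache directly (assuming the performance_schema.host_cache table is available on your build) to see which host is accumulating errors:

SELECT IP, HOST, SUM_CONNECT_ERRORS, COUNT_HOST_BLOCKED_ERRORS
FROM performance_schema.host_cache
ORDER BY SUM_CONNECT_ERRORS DESC;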
Restarting mysql also clears the host cache, but that is a very heavy-handed way to do it. Therefore, as a preventative measure for this problem, set max_connect_errors to 1000000000000 (yes, that's a trillion). Since max_connect_errors is dynamic, a mysql restart is not needed. Just run
mysql> SET GLOBAL max_connect_errors = 1000000000000;
then add the variable to my.cnf so the setting survives a restart, like this:
[mysqld]
max_connect_errors = 1000000000000
Give it a Try !!!
Best Answer
Those are counters; they are useless without dividing by Uptime or Connections to get a rate. If the resulting rates look high, I would say something is wrong; look into max_connections (but not too much) and/or wait_timeout.
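As a rough sketch of what "dividing by Uptime" looks like (status variable names assumed; on MySQL 5.7+ the table is performance_schema.global_status, on 5.6 it is information_schema.GLOBAL_STATUS):

SELECT s.VARIABLE_VALUE / u.VARIABLE_VALUE AS aborted_clients_per_second
FROM performance_schema.global_status AS s
JOIN performance_schema.global_status AS u ON u.VARIABLE_NAME = 'Uptime'
WHERE s.VARIABLE_NAME = 'Aborted_clients';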
"gone away" comes from network glitches and/or trying to hang onto a connection "too long". See the various
%wait_timeout
settings.