I tried something similar just now. Here is the MySQL version info from my PC:
mysql> select * from information_schema.global_variables where variable_name='datadir' or variable_name like 'versio%';
+-------------------------+------------------------------+
| VARIABLE_NAME           | VARIABLE_VALUE               |
+-------------------------+------------------------------+
| VERSION_COMMENT         | MySQL Community Server (GPL) |
| VERSION                 | 5.5.12-log                   |
| VERSION_COMPILE_MACHINE | x86                          |
| DATADIR                 | C:\MySQL_5.5.12\data\        |
| VERSION_COMPILE_OS      | Win64                        |
+-------------------------+------------------------------+
5 rows in set (0.00 sec)
I will run this using MyISAM
- Step 01) create a table called 'rolando'
- Step 02) insert 'dominique' and 'diamond'
- Step 03) copy the table structure to 'pamela'
- Step 04) alter 'pamela' to not have auto_increment
- Step 05) In DOS, copy rolando.MYD to pamela.MYD
- Step 06) run
REPAIR TABLE pamela;
(Rebuild pamela.MYI)
- Step 07) run
SELECT COUNT(1) FROM pamela;
- Step 08) run
SHOW CREATE TABLE pamela\G
- Step 09) run
SELECT * FROM pamela;
- Step 10) insert 'carlik' into pamela
- Step 11) run
SELECT * FROM pamela;
Let's see if these steps are kosher.
Here are Steps 1-4
mysql> drop table if exists rolando;
Query OK, 0 rows affected (0.02 sec)
mysql> drop table if exists pamela;
Query OK, 0 rows affected (0.00 sec)
mysql> create table rolando
-> (
-> name varchar(20),
-> id int not null auto_increment,
-> primary key (id)
-> ) ENGINE=MyISAM;
Query OK, 0 rows affected (0.05 sec)
mysql> insert into rolando (name) values ('dominique'),('diamond');
Query OK, 2 rows affected (0.00 sec)
Records: 2 Duplicates: 0 Warnings: 0
mysql> select * from rolando;
+-----------+----+
| name      | id |
+-----------+----+
| dominique |  1 |
| diamond   |  2 |
+-----------+----+
2 rows in set (0.00 sec)
mysql> create table pamela like rolando;
Query OK, 0 rows affected (0.05 sec)
mysql> show create table rolando\G
*************************** 1. row ***************************
Table: rolando
Create Table: CREATE TABLE `rolando` (
`name` varchar(20) DEFAULT NULL,
`id` int(11) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=3 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
mysql> show create table pamela\G
*************************** 1. row ***************************
Table: pamela
Create Table: CREATE TABLE `pamela` (
`name` varchar(20) DEFAULT NULL,
`id` int(11) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1
1 row in set (0.02 sec)
mysql> alter table pamela modify id int(11) unsigned not null;
Query OK, 0 rows affected (0.11 sec)
Records: 0 Duplicates: 0 Warnings: 0
mysql> show create table pamela\G
*************************** 1. row ***************************
Table: pamela
Create Table: CREATE TABLE `pamela` (
`name` varchar(20) DEFAULT NULL,
`id` int(11) unsigned NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
mysql> select count(1) from pamela;
+----------+
| count(1) |
+----------+
|        0 |
+----------+
1 row in set (0.01 sec)
mysql>
Here is Step 5
C:\>copy C:\MySQL_5.5.12\data\test\rolando.MYD C:\MySQL_5.5.12\data\test\pamela.MYD
1 file(s) copied.
C:\>
Here are the rest of the Steps starting at Step 6
mysql> repair table pamela;
+-------------+--------+----------+------------------------------------+
| Table       | Op     | Msg_type | Msg_text                           |
+-------------+--------+----------+------------------------------------+
| test.pamela | repair | warning  | Number of rows changed from 0 to 2 |
| test.pamela | repair | status   | OK                                 |
+-------------+--------+----------+------------------------------------+
2 rows in set (0.03 sec)
mysql> select count(1) from pamela;
+----------+
| count(1) |
+----------+
|        2 |
+----------+
1 row in set (0.00 sec)
mysql> insert into pamela (name,id) values ('carlik',3);
Query OK, 1 row affected (0.00 sec)
mysql> select * from pamela;
+-----------+----+
| name      | id |
+-----------+----+
| dominique |  1 |
| diamond   |  2 |
| carlik    |  3 |
+-----------+----+
3 rows in set (0.00 sec)
mysql>
Dangerous game, isn't it ???
Guess what? Stuff like this is actually published in "High Performance MySQL: Optimization, Backups, Replication, and More", Pages 146-148, under the subheading "Speeding Up ALTER TABLE". Page 147, Paragraph 1 says:

The technique we are about to demonstrate is unsupported, undocumented, and may not work. Use it at your own risk. We advise you to back up your data first!

I also wrote an earlier answer when someone asked a similar question: Can I rename the values in a MySQL ENUM column in one query?
You got guts, @atxdba !!!
Since the table is InnoDB:
- Bulk Insert Buffer (bulk_insert_buffer_size) is no good for you because it handles bulk inserts for MyISAM tables only
- DISABLE KEYS / ENABLE KEYS only works for MyISAM. InnoDB handles secondary index processing in the system tablespace ibdata1.
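For reference, the MyISAM-only pattern looks like this (the table name is hypothetical); run against an InnoDB table, the statements are simply ignored with a warning:

```sql
-- MyISAM only: defer non-unique index maintenance during a bulk load.
ALTER TABLE mytable DISABLE KEYS;   -- hypothetical table name

-- ... bulk INSERTs / LOAD DATA INFILE go here ...

-- Rebuild the non-unique indexes in one pass at the end.
ALTER TABLE mytable ENABLE KEYS;
```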
You may have to resort to altering the InnoDB buffer protocol to handle only INSERTs; see Mysql load from infile stuck waiting on hard drive.
Unless you plan to store values in MLSNUMBER bigger than 2147483647, you may want to consider making MLSNUMBER an INT (4 bytes) instead of a BIGINT (8 bytes) to save space on secondary index creation. If your values for MLSNUMBER are less than 4294967296, maybe MLSNUMBER should be INT UNSIGNED.
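As a sketch of that change (the table name and NOT NULL constraint are assumptions):

```sql
-- Hypothetical: values stay below 2147483647, so a 4-byte signed INT suffices.
ALTER TABLE listings MODIFY MLSNUMBER INT NOT NULL;

-- Or, if values may range up to 4294967295, use a 4-byte unsigned INT instead:
-- ALTER TABLE listings MODIFY MLSNUMBER INT UNSIGNED NOT NULL;
```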
Best Answer
Does readingID need to be ZEROFILL? Does birthdate need to be DATETIME?

That is, have only 2 tables -- One for Users, one for Readings.
If you graph things, please use dates for the x-axis. I saw one clinic that evenly spaced the readings across the x-axis. That's just plain bad.
For weight you could pick FLOAT or DECIMAL(4,1) or SMALLINT UNSIGNED (4/3/2 bytes, sufficient for pounds or kg -- but be consistent on what you store).

For Gender, if using CHAR, do CHAR(1) CHARACTER SET ascii and allow more than the usual choices (these days!).

If you want to disallow two sets of measurements in a single day, get rid of readingID and have PRIMARY KEY(memberID, date). This will have the side effect of making looking up all the info about a person more efficient.

2M rows/year is "medium sized"; not a problem. By getting rid of stat, you are down to 200K rows/year, and the entire dataset will be closer to "tiny". The database would be perhaps 8MB/year. (Versus perhaps 100MB/year for EAV with stats.) The insert rate would be 200K/year. MySQL can easily handle 200K/day, so the server would be very idle.

With my suggestions, you should be able to handle even a million rows without further tweaking. (With EAV, you would be struggling long before that.)
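The two-table layout being suggested could look roughly like this (table and column names are assumptions drawn from the discussion; adjust types to your actual data):

```sql
-- Hypothetical schema: fixed columns per reading, no EAV.
CREATE TABLE Users (
    memberID  INT UNSIGNED NOT NULL AUTO_INCREMENT,
    name      VARCHAR(50) NOT NULL,
    birthdate DATE NOT NULL,                      -- DATE, not DATETIME
    gender    CHAR(1) CHARACTER SET ascii NOT NULL,
    PRIMARY KEY (memberID)
) ENGINE=InnoDB;

CREATE TABLE Readings (
    memberID  INT UNSIGNED NOT NULL,
    date      DATE NOT NULL,
    pulse     SMALLINT UNSIGNED NULL,             -- NULL = test not performed
    weight    SMALLINT UNSIGNED NULL,             -- 2 bytes; pounds OR kg, be consistent
    PRIMARY KEY (memberID, date)                  -- at most one reading set per member per day
) ENGINE=InnoDB;
```

The composite primary key both enforces the one-reading-per-day rule and clusters each member's readings together, which is what makes "all readings for one person" lookups cheap.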
Why or why not EAV
EAV shines when there is a lot of variation in attributes. Your application has a fixed set of attributes -- everyone has a pulse rate; at worst you might store NULL if that test was not performed.

If, on the other hand, you choose to store the hundred(?) readings you can get from blood samples, ... Usually blood work focuses on a small number of possible tests. That is, the data is very sparse.

For blood work, I would recommend having a column of JSON text. (Depending on whether you are using a recent MySQL or MariaDB, there might be a JSON datatype, or just a TEXT datatype; the effect is similar.) In this column you would list only the readings that you have information for. Meanwhile, the 'regular' readings (pulse, temp, etc) would have their own columns. You might have a 'real' column that says "there is some bloodwork reading that is out-of-range" as a clue to look into it.

Yes, JSON has hassles when rummaging inside it. But I claim that EAV is even worse -- for size, for speed, for coding.
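To make the JSON-column idea concrete, here is a hedged sketch assuming a Readings table with a bloodwork column (the column name and JSON keys are illustrative; JSON_EXTRACT requires MySQL 5.7+ or a MariaDB with JSON functions):

```sql
-- Store only the blood tests actually performed; regular readings keep their own columns.
INSERT INTO Readings (memberID, date, pulse, bloodwork)
VALUES (123, '2020-01-15', 72, '{"hdl": 61, "ldl": 130}');

-- Rummaging inside the JSON (the hassle mentioned above):
SELECT memberID, date
FROM Readings
WHERE JSON_EXTRACT(bloodwork, '$.hdl') > 60;
```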
In your Comment you mentioned adding 10-20 new readings. The necessary ALTER TABLE .. ADD COLUMN .. NULL, ADD COLUMN .. NULL; would be a one-time nuisance. But I still prefer it to either EAV or JSON.

Another thing to note: INDEXes are not always useful. When you have values that are mostly missing (columns, EAV, or JSON), they may be costly to search. If you would like to show us some of the queries, we can discuss further.

For the simple query of SELECT * FROM Readings WHERE userID = 123 to build graphs, the optimal schema is a single Readings table with or without JSON, and without EAV. And I gave the optimal PRIMARY KEY.