Supertype/Subtype
How about looking into the supertype/subtype pattern? Common columns go in a parent table. Each distinct type gets its own table, with the parent's ID as its own PK, containing the columns unique to that subtype. You can include a type column in both the parent and child tables to ensure each device can't be more than one subtype: make an FK between the children and the parent on (ItemID, ItemTypeID). You can use FKs to either the supertype or subtype tables to maintain the desired integrity elsewhere. For example, if an ItemID of any type is allowed, create the FK to the parent table; if only SubItemType1 can be referenced, create the FK to that table. I would leave the TypeID out of referencing tables.
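Here's a minimal sketch of that structure (all object names are hypothetical, and the Doohickey subtype is assumed to have ItemTypeID = 1):
CREATE TABLE dbo.Item (
    ItemID     int NOT NULL,
    ItemTypeID int NOT NULL, --discriminator: which subtype this row is
    Name       nvarchar(100) NOT NULL,
    CONSTRAINT PK_Item PRIMARY KEY (ItemID),
    --Extra unique key so subtype FKs can include the type column
    CONSTRAINT UQ_Item_ItemID_ItemTypeID UNIQUE (ItemID, ItemTypeID)
);

CREATE TABLE dbo.Doohickey (
    DoohickeyID   int NOT NULL,
    ItemTypeID    int NOT NULL
        CONSTRAINT CK_Doohickey_ItemTypeID CHECK (ItemTypeID = 1),
    SprocketCount int NULL, --a column unique to this subtype
    CONSTRAINT PK_Doohickey PRIMARY KEY (DoohickeyID),
    CONSTRAINT FK_Doohickey_Item FOREIGN KEY (DoohickeyID, ItemTypeID)
        REFERENCES dbo.Item (ItemID, ItemTypeID)
);
Because the CHECK pins the ItemTypeID and the composite FK carries it back to the parent, a given Item can satisfy at most one subtype table's constraint.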
Naming
When it comes to naming, you have two choices as I see it (since the third choice of just "ID" is in my mind a strong anti-pattern). Either call the subtype key ItemID, as it is in the parent table, or name it after the subtype, such as DoohickeyID. After some thought and some experience with this, I advocate calling it DoohickeyID. The reason is that even though it could cause confusion (the Doohickey table is, in disguise, really a table of Items rather than Doohickeys), that is a small negative compared to creating an FK to the Doohickey table and having the column names not match!
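That's why the sketch above names the key DoohickeyID: a referencing table then lines up naturally (again, hypothetical names):
CREATE TABLE dbo.DoohickeyRepair (
    RepairID    int NOT NULL PRIMARY KEY,
    --FK column and referenced PK share a name, so the intent is obvious
    DoohickeyID int NOT NULL
        CONSTRAINT FK_DoohickeyRepair_Doohickey
        REFERENCES dbo.Doohickey (DoohickeyID),
    RepairedOn  date NOT NULL
);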
To EAV or not to EAV - My experience with an EAV database
If EAV is what you truly have to do, then it's what you have to do. But what if it weren't what you had to do?
I built an EAV database that is in use in a business. Thank God the data set is small (though there are dozens of item types), so the performance is not bad. But it would be bad if the database held more than a few thousand items! Additionally, the tables are so HARD to query. This experience has left me determined to avoid EAV databases in the future if at all possible.
Now, in my database I created a stored procedure that automatically builds PIVOTed views for each and every subtype that exists, so I can just query from AutoDoohickey. My metadata about the subtypes has a "ShortName" column containing an object-safe name suitable for use in view names. I even made the views updateable! Unfortunately, you cannot update them across the join, but you CAN insert an already-existing row into them, which gets converted to an UPDATE. Also unfortunately, you cannot update only a few columns, because there is no way to tell the view which columns the INSERT-to-UPDATE conversion should touch: a NULL value looks like "update this column to NULL" even when you meant "don't update this column at all."
Despite all this decoration to make the EAV database easier to use, I still don't use these views in most normal querying because it is SLOW. Query conditions are not pushed down as predicates all the way to the Value table, so the view has to build an intermediate result set of all the items of its type before filtering. Ouch. So instead I have many, many queries with many, many joins, each one going out to get a different value. They perform relatively well, but ouch! Here's an example. The SP that creates this (and its update trigger) is one giant beast, and I'm proud of it, but it is not something you ever want to try to maintain.
CREATE VIEW [dbo].[AutoModule]
AS
--This view is automatically generated by the stored procedure AutoViewCreate
SELECT
ElementID,
ElementTypeID,
Convert(nvarchar(160), [3]) [FullName],
Convert(nvarchar(1024), [435]) [Descr],
Convert(nvarchar(255), [439]) [Comment],
Convert(bit, [438]) [MissionCritical],
Convert(int, [464]) [SupportGroup],
Convert(int, [461]) [SupportHours],
Convert(nvarchar(40), [4]) [Ver],
Convert(bit, [28744]) [UsesJava],
Convert(nvarchar(256), [28745]) [JavaVersions],
Convert(bit, [28746]) [UsesIE],
Convert(nvarchar(256), [28747]) [IEVersions],
Convert(bit, [28748]) [UsesAcrobat],
Convert(nvarchar(256), [28749]) [AcrobatVersions],
Convert(bit, [28794]) [UsesDotNet],
Convert(nvarchar(256), [28795]) [DotNetVersions],
Convert(bit, [512]) [WebApplication],
Convert(nvarchar(10), [433]) [IFAbbrev],
Convert(int, [437]) [DataID],
Convert(nvarchar(1000), [463]) [Notes],
Convert(nvarchar(512), [523]) [DataDescription],
Convert(nvarchar(256), [27991]) [SpecialNote],
Convert(bit, [28932]) [Inactive],
Convert(int, [29992]) [PatchTestedBy]
FROM (
SELECT
E.ElementID + 0 ElementID,
E.ElementTypeID,
V.AttrID,
V.Value
FROM
dbo.Element E
LEFT JOIN dbo.Value V ON E.ElementID = V.ElementID
WHERE
EXISTS (
SELECT *
FROM dbo.LayoutUsage L
WHERE
E.ElementTypeID = L.ElementTypeID
AND L.AttrLayoutID = 7
)
) X
PIVOT (
Max(Value)
FOR AttrID IN ([3], [435], [439], [438], [464], [461], [4], [28744], [28745], [28746], [28747], [28748], [28749], [28794], [28795], [512], [433], [437], [463], [523], [27991], [28932], [29992])
) P;
Here's another type of automatically-generated view created by another stored procedure from special metadata to help find relationships between items that can have multiple paths between them (Specifically: Module->Server, Module->Cluster->Server, Module->DBMS->Server, Module->DBMS->Cluster->Server):
CREATE VIEW [dbo].[Link_Module_Server]
AS
-- This view is automatically generated by the stored procedure LinkViewCreate
SELECT
ModuleID = A.ElementID,
ServerID = B.ElementID
FROM
Element A
INNER JOIN Element B
ON EXISTS (
SELECT *
FROM
dbo.Element R1
WHERE
A.ElementID = R1.ElementID1
AND B.ElementID = R1.ElementID2
AND R1.ElementTypeID = 38
) OR EXISTS (
SELECT *
FROM
dbo.Element R1
INNER JOIN dbo.Element R2 ON R1.ElementID2 = R2.ElementID1
WHERE
A.ElementID = R1.ElementID1
AND R1.ElementTypeID = 40
AND B.ElementID = R2.ElementID2
AND R2.ElementTypeID = 38
) OR EXISTS (
SELECT *
FROM
dbo.Element R1
INNER JOIN dbo.Element R2 ON R1.ElementID2 = R2.ElementID1
WHERE
A.ElementID = R1.ElementID1
AND R1.ElementTypeID = 38
AND B.ElementID = R2.ElementID2
AND R2.ElementTypeID = 3122
) OR EXISTS (
SELECT *
FROM
dbo.Element R1
INNER JOIN dbo.Element R2 ON R1.ElementID2 = R2.ElementID1
INNER JOIN dbo.Element C2 ON R2.ElementID2 = C2.ElementID
INNER JOIN dbo.Element R3 ON R2.ElementID2 = R3.ElementID1
WHERE
A.ElementID = R1.ElementID1
AND R1.ElementTypeID = 40
AND C2.ElementTypeID = 3080
AND R2.ElementTypeID = 38
AND B.ElementID = R3.ElementID2
AND R3.ElementTypeID = 3122
)
WHERE
A.ElementTypeID = 9
AND B.ElementTypeID = 17
The Hybrid Approach
If you MUST have some of the dynamic aspects of an EAV database, you could consider creating the metadata as if you had such a database, but actually using the supertype/subtype design pattern underneath. Yes, you would have to create new tables and add, remove, and modify columns. But with the proper pre-processing (like I did with my EAV database's Auto views) you could have real table-like objects to work with. Only they wouldn't be as gnarly as mine, and the query optimizer could push predicates down to the base tables (read: perform well with them). There would be just one join between the supertype table and the subtype table. Your application could read the metadata to discover what it is supposed to do (or it can use the auto-generated views in some cases). This protects your application code from having to be touched extensively just to add or modify things.
Or, if you had a multi-level set of subtypes, just a few joins. By multi-level I mean that when some subtypes, but not all, share common columns, you could have a subtype table for those that is itself a supertype of a few other tables. For example, if you are storing information about Servers, Routers, and Printers, an intermediate subtype of "IP Device" could make sense.
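To make the contrast concrete, here's roughly what a generated view looks like in the hybrid design, using the hypothetical Item/Doohickey tables from earlier. It's an ordinary join, so the optimizer can push predicates on any column straight down to the base tables:
CREATE VIEW dbo.AutoDoohickey
AS
--In the hybrid design this would be generated from metadata,
--but it compiles to a plain supertype/subtype join.
SELECT
    I.ItemID,
    I.ItemTypeID,
    I.Name,
    D.SprocketCount
FROM dbo.Item I
INNER JOIN dbo.Doohickey D ON I.ItemID = D.DoohickeyID;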
I will give the caveat that I haven't yet built the hybrid supertype/subtype, EAV-metadata-decorated database I'm suggesting here to try it out in the real world. But the problems I've experienced with EAV are not small, and doing something about them is probably a must if your database is going to be large and you want good performance without crazy-expensive, gigantic hardware.
In my opinion, the time spent automating the use/creation/modification of real subtype tables would ultimately be the best investment. The flexibility of being driven by data makes EAV sound so attractive (and believe me, I love that when someone asks me for a new attribute on an element type, I can add it in about 18 seconds and they can immediately start entering data on the web site). But flexibility can be accomplished in more than one way! Pre-processing is another way to do it: a powerful method that few people use, giving the benefits of being totally data-driven with the performance of being hard-coded.
(Note: Yes those views really are formatted like that and the PIVOT ones really do have update triggers. :) If someone is really that interested in the awful painful details of the long and complicated UPDATE trigger, let me know and I'll post a sample for you.)
And One More Idea
Put all your data in one table. Give columns generic names and then reuse/abuse them for multiple purposes. Create views over these to give the columns sensible names. Add columns when no unused column of a suitable data type is available, and update your views. Despite the length at which I've gone on about subtype/supertype, this may be the best way.
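A contrived sketch of that approach (the column-to-meaning assignments are arbitrary):
CREATE TABLE dbo.Thing (
    ThingID int NOT NULL PRIMARY KEY,
    TypeID  int NOT NULL,
    Str1    nvarchar(256) NULL,
    Str2    nvarchar(256) NULL,
    Int1    int NULL,
    Bit1    bit NULL
);
GO
--One view per type gives the generic columns sensible names.
CREATE VIEW dbo.Widget
AS
SELECT
    ThingID AS WidgetID,
    Str1    AS FullName,
    Int1    AS PortCount,
    Bit1    AS Inactive
FROM dbo.Thing
WHERE TypeID = 2; --hypothetical type code for Widgets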
By default, your usage of mysqldump will lock each table as it dumps it, and there will not be any consistency among tables. You will also not get any triggers, stored procedures, stored functions, or events backed up, and if any of your view definitions are invalid (referencing nonexistent tables or columns that you may have dropped or renamed since the view was defined), the dump process will terminate when that error is encountered.
If all of your tables are InnoDB, you can get a consistent snapshot by adding this:
--single-transaction
This option disables the default --lock-tables behavior, which locks each table as it is dumped and then unlocks it when done, and instead issues the following statements to the server at the beginning of the dump (which you can observe by enabling the general query log):
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION WITH CONSISTENT SNAPSHOT;
Of course, this only helps with InnoDB. If your tables are MyISAM or any other non-transactional engine, you only have one option:
--lock-all-tables
...which locks all of the tables for the entire dump, since that's the only way to ensure consistency without transactions to assist.
To back up your stored procedures, events, and triggers, add:
--routines --events --triggers
To prevent mysqldump from failing when there's an invalid view definition, add:
--force
To capture your binlog position and store it in the backup file, add:
--master-data=2
This option requires binary logging to be enabled on the server being dumped; if it isn't, the dump fails with an error.
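Putting those options together, an invocation might look something like this (the user, database name, and output file are placeholders):
mysqldump --single-transaction --routines --events --triggers --master-data=2 --force -u backupuser -p mydatabase > mydatabase.sql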
Your question was about backing up the database "with rsnapshot," though to put a fine point on it, what you're really proposing is backing up the database with mysqldump and then backing up the dump file with rsnapshot. It's hard to tell from a brief look at their site whether that's going to be a good choice, because rsnapshot appears to have file-level granularity. It doesn't appear to store diffs, but instead stores a full copy of each file that changes between backup cycles:
"The amount of space required is roughly the size of one full backup, plus a copy of each additional file that is changed."
-- http://www.rsnapshot.org/rsnapshot.html
I welcome correction if I'm wrong on this point. In this case, you only have one file, your dump file from mysqldump, which will of course change every time.
Whatever you do, don't entertain the thought of trying to back up the actual files (.frm, .ibd, .MYD, etc.) underlying your tables in MySQL. That does not work while the server is running, even though under rare conditions you might get a backup that seems to be intact. It isn't.
Update:
There's an important distinction between what mylvmbackup is doing and "backing up the actual files." To use mylvmbackup, you have to be using LVM, which can be visualized as a layer of abstraction between your filesystem and your hard drive. With LVM, you have the ability to freeze the file system and make a point-in-time snapshot of the entire filesystem.
It looks like mylvmbackup does a FLUSH TABLES followed by a FLUSH TABLES WITH READ LOCK prior to looking up the replication coordinates (if specified in the configuration), which is the same thing mysqldump does when --single-transaction and --master-data are both used. This stabilizes MySQL, but it does not fully quiesce InnoDB, so when a backup like this is restored, InnoDB will think it crashed and will do crash recovery... but it should be a clean recovery.
If your web site's data files were in the same filesystem, then an LVM snapshot would contain everything consistent to a single point in time, but whether having your other web site files in the same filesystem as the database is good practice (if, indeed running your web server on the same server is good practice) is another discussion.
Whatever the approach, it's vitally important in any backup strategy to periodically verify that your backups can actually be restored (on a different machine). The truly safest method would be to be sure innodb_fast_shutdown = 0 and actually shut down MySQL, but that's probably not practical.
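If you do go that route, the setting can be changed at runtime before the shutdown:
SET GLOBAL innodb_fast_shutdown = 0;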
Of course, if you want to go with something like rsnapshot, you could always snapshot the web site files, then snapshot the database, then snapshot the web site files again. If nothing changes in the site files while the database is being backed up, rsnapshot will use very little space for that second run. If things do change, it should be fairly trivial to understand the differences if you need to recover.
It's possible to use the JSON type:
https://dev.mysql.com/doc/refman/8.0/en/json.html
This negates the need for a table of custom fields; however, if you need to do joins or lookups on this data, it will be difficult.
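A minimal sketch of that approach in MySQL (table and column names are just for illustration):
CREATE TABLE item (
    item_id INT NOT NULL PRIMARY KEY,
    name    VARCHAR(100) NOT NULL,
    attrs   JSON NULL  -- custom fields live here instead of an EAV table
);

INSERT INTO item VALUES
    (1, 'Doohickey A', '{"color": "red", "sprockets": 4}');

-- Extracting a field is easy; joining or indexing on it is the hard part.
SELECT name, attrs->>'$.color' AS color
FROM item
WHERE attrs->>'$.sprockets' = '4';
If you do need to search on one of these fields regularly, MySQL can index a generated column extracted from the JSON, which recovers some of what a real column would give you.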