Supertype/Subtype
How about looking into the supertype/subtype pattern? Common columns go in a parent table. Each distinct type has its own table whose PK is the ID of the parent, and it contains only the columns not common to all subtypes. You can include a type column in both the parent and children tables to ensure each device can't be more than one subtype: make an FK from the children to the parent on (ItemID, ItemTypeID). Elsewhere in the schema, you can create FKs to either the supertype or the subtype tables to maintain the desired integrity. For example, if an ItemID of any type is allowed, create the FK to the parent table; if only SubItemType1 can be referenced, create the FK to that table. I would leave the TypeID out of the referencing tables.
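As a minimal sketch of the pattern (SQL Server syntax; the Doohickey table, WidgetCount column, and type value 1 are made-up for illustration, not from any real schema):

CREATE TABLE dbo.Item
(
    ItemID INT NOT NULL PRIMARY KEY,
    ItemTypeID INT NOT NULL, -- discriminator: which subtype this row is
    Name NVARCHAR(100) NOT NULL, -- columns common to all subtypes
    UNIQUE (ItemID, ItemTypeID) -- target for composite FKs from the subtype tables
);

CREATE TABLE dbo.Doohickey
(
    DoohickeyID INT NOT NULL PRIMARY KEY,
    ItemTypeID AS 1 PERSISTED, -- computed constant pins every row to the Doohickey type
    WidgetCount INT NULL, -- columns unique to this subtype
    FOREIGN KEY (DoohickeyID, ItemTypeID) REFERENCES dbo.Item (ItemID, ItemTypeID)
);

The composite FK through the persisted computed column is what guarantees a given Item cannot simultaneously be a row in two different subtype tables.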
Naming
When it comes to naming, you have two choices as I see it (the third choice of just "ID" is in my mind a strong anti-pattern): either call the subtype key ItemID as it is in the parent table, or give it the subtype's name, such as DoohickeyID. After some thought and some experience with this, I advocate calling it DoohickeyID. Even though this could create confusion (the subtype table really contains Items in disguise, not just Doohickeys), that is a small negative compared to creating an FK to the Doohickey table and having the column names not match!
To EAV or not to EAV - My experience with an EAV database
If EAV is what you truly have to do, then it's what you have to do. But what if it weren't what you had to do?
I built an EAV database that is in use in a business. Thank God the data set is small (though there are dozens of item types), so the performance is not bad. But it would be bad if the database had more than a few thousand items in it! Additionally, the tables are so HARD to query. This experience has made me really want to avoid EAV databases in the future if at all possible.
Now, in my database I created a stored procedure that automatically builds PIVOTed views for every subtype that exists, so I can just query from a view such as AutoDoohickey. My metadata about the subtypes has a "ShortName" column containing an object-safe name suitable for use in view names. I even made the views updateable! Unfortunately, you cannot update them through a join, but you CAN insert an already-existing row into them, which gets converted to an UPDATE. You also cannot update only a few columns, because there is no way to tell the view which columns you want the INSERT-to-UPDATE conversion to touch: a NULL value looks like "update this column to NULL" even if you meant "don't update this column at all."
Despite all this decoration to make the EAV database easier to use, I still don't use these views in most normal querying because they are SLOW. Query conditions are not pushed down as predicates all the way to the Value table, so the engine has to build an intermediate result set of all the items of that view's type before filtering. Ouch. So instead I have many, many queries with many, many joins, each one going out to get a different value. They perform relatively well, but ouch! Here's an example. The SP that creates this (and its update trigger) is one giant beast, and I'm proud of it, but it is not something you ever want to try to maintain.
CREATE VIEW [dbo].[AutoModule]
AS
--This view is automatically generated by the stored procedure AutoViewCreate
SELECT
ElementID,
ElementTypeID,
Convert(nvarchar(160), [3]) [FullName],
Convert(nvarchar(1024), [435]) [Descr],
Convert(nvarchar(255), [439]) [Comment],
Convert(bit, [438]) [MissionCritical],
Convert(int, [464]) [SupportGroup],
Convert(int, [461]) [SupportHours],
Convert(nvarchar(40), [4]) [Ver],
Convert(bit, [28744]) [UsesJava],
Convert(nvarchar(256), [28745]) [JavaVersions],
Convert(bit, [28746]) [UsesIE],
Convert(nvarchar(256), [28747]) [IEVersions],
Convert(bit, [28748]) [UsesAcrobat],
Convert(nvarchar(256), [28749]) [AcrobatVersions],
Convert(bit, [28794]) [UsesDotNet],
Convert(nvarchar(256), [28795]) [DotNetVersions],
Convert(bit, [512]) [WebApplication],
Convert(nvarchar(10), [433]) [IFAbbrev],
Convert(int, [437]) [DataID],
Convert(nvarchar(1000), [463]) [Notes],
Convert(nvarchar(512), [523]) [DataDescription],
Convert(nvarchar(256), [27991]) [SpecialNote],
Convert(bit, [28932]) [Inactive],
Convert(int, [29992]) [PatchTestedBy]
FROM (
SELECT
E.ElementID + 0 ElementID,
E.ElementTypeID,
V.AttrID,
V.Value
FROM
dbo.Element E
LEFT JOIN dbo.Value V ON E.ElementID = V.ElementID
WHERE
EXISTS (
SELECT *
FROM dbo.LayoutUsage L
WHERE
E.ElementTypeID = L.ElementTypeID
AND L.AttrLayoutID = 7
)
) X
PIVOT (
Max(Value)
FOR AttrID IN ([3], [435], [439], [438], [464], [461], [4], [28744], [28745], [28746], [28747], [28748], [28749], [28794], [28795], [512], [433], [437], [463], [523], [27991], [28932], [29992])
) P;
Here's another type of automatically-generated view created by another stored procedure from special metadata to help find relationships between items that can have multiple paths between them (Specifically: Module->Server, Module->Cluster->Server, Module->DBMS->Server, Module->DBMS->Cluster->Server):
CREATE VIEW [dbo].[Link_Module_Server]
AS
-- This view is automatically generated by the stored procedure LinkViewCreate
SELECT
ModuleID = A.ElementID,
ServerID = B.ElementID
FROM
Element A
INNER JOIN Element B
ON EXISTS (
SELECT *
FROM
dbo.Element R1
WHERE
A.ElementID = R1.ElementID1
AND B.ElementID = R1.ElementID2
AND R1.ElementTypeID = 38
) OR EXISTS (
SELECT *
FROM
dbo.Element R1
INNER JOIN dbo.Element R2 ON R1.ElementID2 = R2.ElementID1
WHERE
A.ElementID = R1.ElementID1
AND R1.ElementTypeID = 40
AND B.ElementID = R2.ElementID2
AND R2.ElementTypeID = 38
) OR EXISTS (
SELECT *
FROM
dbo.Element R1
INNER JOIN dbo.Element R2 ON R1.ElementID2 = R2.ElementID1
WHERE
A.ElementID = R1.ElementID1
AND R1.ElementTypeID = 38
AND B.ElementID = R2.ElementID2
AND R2.ElementTypeID = 3122
) OR EXISTS (
SELECT *
FROM
dbo.Element R1
INNER JOIN dbo.Element R2 ON R1.ElementID2 = R2.ElementID1
INNER JOIN dbo.Element C2 ON R2.ElementID2 = C2.ElementID
INNER JOIN dbo.Element R3 ON R2.ElementID2 = R3.ElementID1
WHERE
A.ElementID = R1.ElementID1
AND R1.ElementTypeID = 40
AND C2.ElementTypeID = 3080
AND R2.ElementTypeID = 38
AND B.ElementID = R3.ElementID2
AND R3.ElementTypeID = 3122
)
WHERE
A.ElementTypeID = 9
AND B.ElementTypeID = 17
The Hybrid Approach
If you MUST have some of the dynamic aspects of an EAV database, consider creating the metadata as if you had such a database, but actually using the supertype/subtype design pattern instead. Yes, you would have to create new tables and add, remove, and modify columns. But with the proper pre-processing (like I did with my EAV database's Auto views) you could have real table-like objects to work with. Only they wouldn't be as gnarly as mine, and the query optimizer could push predicates down to the base tables (read: perform well). There would be just one join between the supertype table and the subtype table. Your application could read the metadata to discover what it is supposed to do (or it can use the auto-generated views in some cases). This protects your application code from having to be touched extensively just to add or modify things.
Or, if you had a multi-level set of subtypes, just a few joins. By multi-level I mean that when some subtypes share common columns, but not all do, you could have a subtype table for those that is itself a supertype of a few other tables. For example, if you are storing information about Servers, Routers, and Printers, an intermediate subtype of "IP Device" could make sense.
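For example (a sketch with made-up column names), the intermediate level is just another subtype table that the leaf subtype tables reference:

CREATE TABLE dbo.IPDevice
(
    ItemID INT NOT NULL PRIMARY KEY
        REFERENCES dbo.Item (ItemID), -- subtype of Item
    IPAddress VARCHAR(45) NOT NULL, -- shared by Servers, Routers, and Printers
    MACAddress CHAR(17) NULL
);

CREATE TABLE dbo.Printer
(
    ItemID INT NOT NULL PRIMARY KEY
        REFERENCES dbo.IPDevice (ItemID), -- subtype of IPDevice
    SupportsDuplex BIT NOT NULL
);

Retrieving a Printer with all of its inherited attributes is then a join of Item, IPDevice, and Printer: still just a few indexed PK joins.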
I will give the caveat that I haven't yet built such a hybrid supertype/subtype, EAV-metadata-decorated database like I'm suggesting here to try out in the real world. But the problems I've experienced with EAV are not small, and doing something about them is probably a must if your database is going to be large and you want good performance without some crazily expensive, gigantic hardware.
In my opinion, the time spent automating the use/creation/modification of real subtype tables would ultimately be time well spent. Flexibility driven by data is what makes EAV sound so attractive (and believe me, I love how when someone asks me for a new attribute on an element type I can add it in about 18 seconds and they can immediately start entering data on the web site). But flexibility can be accomplished in more than one way! Pre-processing is another way to do it: a powerful method that few people use, giving the benefits of being totally data-driven with the performance of being hard-coded.
(Note: Yes those views really are formatted like that and the PIVOT ones really do have update triggers. :) If someone is really that interested in the awful painful details of the long and complicated UPDATE trigger, let me know and I'll post a sample for you.)
And One More Idea
Put all your data in one table. Give the columns generic names, then reuse/abuse them for multiple purposes. Create views over them to give the columns sensible names. Add columns when no unused column of a suitable data type is available, and update your views. Despite my going on at length about supertype/subtype, this may be the best way.
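A sketch of that idea (all table, column, and type names here are hypothetical):

CREATE TABLE dbo.Thing
(
    ThingID INT NOT NULL PRIMARY KEY,
    ThingTypeID INT NOT NULL,
    Int1 INT NULL, -- generic slots, reused for a different purpose per type
    Str1 NVARCHAR(256) NULL,
    Str2 NVARCHAR(256) NULL,
    Bit1 BIT NULL
);
GO
CREATE VIEW dbo.Doohickey
AS
SELECT
    ThingID DoohickeyID,
    Int1 WidgetCount, -- the view gives the generic slots sensible names
    Str1 FullName,
    Bit1 Inactive
FROM dbo.Thing
WHERE ThingTypeID = 1;

When a new type needs a column and no unused slot of the right data type exists, you add (say) Int2 to the table and redefine the affected views.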
I'd say it depends on how many parameters you have, and whether this list is expected to be largely static or if it's likely to change or expand.
I'm still a little fuzzy on exactly what you're doing, so I'm going to answer a slightly different question. Let's say you're writing a query engine, searching for users that meet different requirements, and you want to store a set of predicates so it can be reused. In a simple world, your table might look like this:
CREATE TABLE Searches
(
ID INT NOT NULL PRIMARY KEY,
CreatedBy INT NOT NULL, -- Prob also CreatedDate, LastModified, etc.
FirstName VARCHAR(100),
LastName VARCHAR(100),
Sex CHAR(1),
EMailAddress VARCHAR(100),
Country INT
)
Some day you'll need to add a Region field when you start storing more precise geographic data. Then you'll add PhoneNumber, and then BloodType, and then... eventually, you'll need something more flexible. You need a table for searches and a table to store key-value pairs for each search:
CREATE TABLE Searches
(
ID INT NOT NULL PRIMARY KEY,
CreatedBy INT NOT NULL
)
CREATE TABLE SearchTerms
(
ID INT NOT NULL PRIMARY KEY,
SearchID INT NOT NULL FOREIGN KEY REFERENCES Searches (ID),
FieldName VARCHAR(100) NOT NULL,
Value VARCHAR(100) NOT NULL,
UNIQUE (SearchID, FieldName)
)
The UNIQUE constraint may not be appropriate: you might legitimately want a search with two terms on the same field, such as { FieldName = "FirstName", Value = "Bo%" } and { FieldName = "FirstName", Value = "ob%" }.
This more flexible approach loses type checking; searching for "NumArms = 2" now requires storing "2" in a VARCHAR field. It also implies using dynamic SQL to construct your searches.
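A sketch of that dynamic construction (hypothetical; dbo.Users is an assumed target table, and a real version should whitelist FieldName against the allowed search columns and parameterize the values rather than concatenating them, to avoid SQL injection):

DECLARE @SearchID INT = 1;
DECLARE @sql NVARCHAR(MAX) = N'SELECT * FROM dbo.Users WHERE 1 = 1';

-- Append one predicate per stored term for this search
SELECT @sql += N' AND ' + QUOTENAME(FieldName)
             + N' LIKE N''' + REPLACE(Value, '''', '''''') + N''''
FROM dbo.SearchTerms
WHERE SearchID = @SearchID;

EXEC sys.sp_executesql @sql;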
Best Answer
There is nothing wrong with nullable columns, even a lot of them, if that is what your data domain calls for.
On the other hand, the fact that you say these columns can be grouped logically implies to me that something else might be going on.
If they can be grouped logically because different sets of columns apply to different sets of rows, then you might have an entity-subtype situation.
Conversely, if columns can be grouped because they apply at different times, then you may have a normalization issue. For example, if your columns are something like "January Sales", "February Sales", etc., these should be rows in a child table.
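In the child-table form, each period becomes a row instead of a column (a sketch; the Account table and its key are assumed):

CREATE TABLE dbo.MonthlySales
(
    AccountID INT NOT NULL
        REFERENCES dbo.Account (AccountID),
    SalesMonth DATE NOT NULL, -- e.g. the first day of the month
    SalesAmount DECIMAL(18, 2) NOT NULL,
    PRIMARY KEY (AccountID, SalesMonth)
);

Adding a new month is then an INSERT, not an ALTER TABLE.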
While there is nothing innately wrong with nullable columns, neither is there anything wrong with joining. It's what an RDBMS does for a living.
UPDATE:
Given additional information about the logical groups of columns:
There are two kinds of sub-typing that can be represented in a database using 1:1 relationships. If the logical groups are mutually exclusive, then the parent entity can have what is known as a partitioning attribute that tells you which one of the subtypes is applicable. Without a partitioning attribute, it is possible to have zero, one, or even multiple subtypes applicable at the same time.
The same fundamental question applies then to what you do with this situation.
A good way to resolve it would be to look at the logical groups of columns. Are the columns in logical group A the same as those in logical group B, or are they totally different? If they are different, they might be best modeled in the single table with nullable fields. If they are the same, that might be a clue that they should be multiple child rows instead.
Another clue to look at is whether it makes sense that a logical group of columns could take on a life of its own and start attracting relationships from other tables. If logical group B might sometime soon find itself with multiple child records from another table, then it might be a sign that it makes sense to promote that group to its own subtype style table.
One last thing to consider is physical implementation. If a logical subgroup is very sparsely populated, you might be able to make a case for segregating these columns into another table to optimize physical storage. This step shouldn't be done proactively. This kind of optimization should be done when performance testing proves it is necessary.
If none of these things are true, then you are probably best off leaving the nullable columns in the original table.