Supertype/Subtype
How about looking into the supertype/subtype pattern? Common columns go in a parent table. Each distinct type gets its own table whose PK is the ID of the parent row, and it contains only the columns unique to that subtype. Include a type column in both parent and child tables to ensure each device can't be more than one subtype, and make an FK from the children to the parent on (ItemID, ItemTypeID). Elsewhere, you can create FKs to either the supertype or subtype tables to maintain the desired integrity: if an ItemID of any type is allowed, create the FK to the parent table; if only SubItemType1 can be referenced, create the FK to that table. I would leave the TypeID out of referencing tables.
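As a minimal sketch of that layout (table names, column names, and the type code are all invented placeholders, not from any particular schema):

```sql
CREATE TABLE Item (
ItemID int not null,
ItemTypeID int not null,
Name nvarchar(100) not null,
CONSTRAINT PK_Item PRIMARY KEY (ItemID),
-- the composite unique key is what lets subtype tables reference both columns
CONSTRAINT UQ_Item_ID_Type UNIQUE (ItemID, ItemTypeID)
)

CREATE TABLE Doohickey (
ItemID int not null,
ItemTypeID int not null
CONSTRAINT CK_Doohickey_Type CHECK (ItemTypeID = 1), -- 1 = Doohickey (invented code)
WidgetCount int not null,
CONSTRAINT PK_Doohickey PRIMARY KEY (ItemID),
-- composite FK: the parent row must carry the Doohickey type,
-- so one ItemID can never belong to two different subtype tables
CONSTRAINT FK_Doohickey_Item FOREIGN KEY (ItemID, ItemTypeID)
REFERENCES Item (ItemID, ItemTypeID)
)
```

The CHECK plus the composite FK is the trick: together they guarantee a subtype row can only attach to a parent row of the matching type.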
Naming
When it comes to naming, you have two choices as I see it (the third choice of just "ID" being, in my mind, a strong anti-pattern). Either call the subtype key ItemID, as it is in the parent table, or give it the subtype's name, such as DoohickeyID. After some thought and some experience with this, I advocate calling it DoohickeyID. Even though this could cause confusion about the subtype table really containing Items in disguise (rather than Doohickeys), that's a small negative compared to creating an FK to the Doohickey table and having the column names not match!
To EAV or not to EAV - My experience with an EAV database
If EAV is what you truly have to do, then it's what you have to do. But what if it weren't what you had to do?
I built an EAV database that is in use in a business. Thank God, the set of data is small (though there are dozens of item types) so the performance is not bad. But it would be bad if the database had more than a few thousand items in it! Additionally, the tables are so HARD to query. This experience has led me to really desire to avoid EAV databases in the future if at all possible.
Now, in my database I created a stored procedure that automatically builds PIVOTed views for each and every subtype that exists, so I can just query from AutoDoohickey. My metadata about the subtypes has a "ShortName" column containing an object-safe name suitable for use in view names. I even made the views updateable! Unfortunately, you cannot update them through a join, but you CAN insert an already-existing row into them, which is converted to an UPDATE. You also cannot update only a few columns, because there is no way to tell the view which columns the INSERT-to-UPDATE conversion should touch: a NULL value looks like "update this column to NULL" even if you meant "don't update this column at all."
Despite all this decoration to make the EAV database easier to use, I still don't use these views in most normal querying because it is SLOW. Query conditions are not predicate-pushed all the way down to the Value table, so it has to build an intermediate result set of all the items of that view's type before filtering. Ouch. So I have many, many queries with many, many joins, each one going out to get a different value, and so on. They perform relatively well, but ouch! Here's an example. The SP that creates this (and its update trigger) is one giant beast, and I'm proud of it, but it is not something you ever want to try to maintain.
CREATE VIEW [dbo].[AutoModule]
AS
--This view is automatically generated by the stored procedure AutoViewCreate
SELECT
ElementID,
ElementTypeID,
Convert(nvarchar(160), [3]) [FullName],
Convert(nvarchar(1024), [435]) [Descr],
Convert(nvarchar(255), [439]) [Comment],
Convert(bit, [438]) [MissionCritical],
Convert(int, [464]) [SupportGroup],
Convert(int, [461]) [SupportHours],
Convert(nvarchar(40), [4]) [Ver],
Convert(bit, [28744]) [UsesJava],
Convert(nvarchar(256), [28745]) [JavaVersions],
Convert(bit, [28746]) [UsesIE],
Convert(nvarchar(256), [28747]) [IEVersions],
Convert(bit, [28748]) [UsesAcrobat],
Convert(nvarchar(256), [28749]) [AcrobatVersions],
Convert(bit, [28794]) [UsesDotNet],
Convert(nvarchar(256), [28795]) [DotNetVersions],
Convert(bit, [512]) [WebApplication],
Convert(nvarchar(10), [433]) [IFAbbrev],
Convert(int, [437]) [DataID],
Convert(nvarchar(1000), [463]) [Notes],
Convert(nvarchar(512), [523]) [DataDescription],
Convert(nvarchar(256), [27991]) [SpecialNote],
Convert(bit, [28932]) [Inactive],
Convert(int, [29992]) [PatchTestedBy]
FROM (
SELECT
E.ElementID + 0 ElementID,
E.ElementTypeID,
V.AttrID,
V.Value
FROM
dbo.Element E
LEFT JOIN dbo.Value V ON E.ElementID = V.ElementID
WHERE
EXISTS (
SELECT *
FROM dbo.LayoutUsage L
WHERE
E.ElementTypeID = L.ElementTypeID
AND L.AttrLayoutID = 7
)
) X
PIVOT (
Max(Value)
FOR AttrID IN ([3], [435], [439], [438], [464], [461], [4], [28744], [28745], [28746], [28747], [28748], [28749], [28794], [28795], [512], [433], [437], [463], [523], [27991], [28932], [29992])
) P;
Here's another type of automatically-generated view created by another stored procedure from special metadata to help find relationships between items that can have multiple paths between them (Specifically: Module->Server, Module->Cluster->Server, Module->DBMS->Server, Module->DBMS->Cluster->Server):
CREATE VIEW [dbo].[Link_Module_Server]
AS
-- This view is automatically generated by the stored procedure LinkViewCreate
SELECT
ModuleID = A.ElementID,
ServerID = B.ElementID
FROM
Element A
INNER JOIN Element B
ON EXISTS (
SELECT *
FROM
dbo.Element R1
WHERE
A.ElementID = R1.ElementID1
AND B.ElementID = R1.ElementID2
AND R1.ElementTypeID = 38
) OR EXISTS (
SELECT *
FROM
dbo.Element R1
INNER JOIN dbo.Element R2 ON R1.ElementID2 = R2.ElementID1
WHERE
A.ElementID = R1.ElementID1
AND R1.ElementTypeID = 40
AND B.ElementID = R2.ElementID2
AND R2.ElementTypeID = 38
) OR EXISTS (
SELECT *
FROM
dbo.Element R1
INNER JOIN dbo.Element R2 ON R1.ElementID2 = R2.ElementID1
WHERE
A.ElementID = R1.ElementID1
AND R1.ElementTypeID = 38
AND B.ElementID = R2.ElementID2
AND R2.ElementTypeID = 3122
) OR EXISTS (
SELECT *
FROM
dbo.Element R1
INNER JOIN dbo.Element R2 ON R1.ElementID2 = R2.ElementID1
INNER JOIN dbo.Element C2 ON R2.ElementID2 = C2.ElementID
INNER JOIN dbo.Element R3 ON R2.ElementID2 = R3.ElementID1
WHERE
A.ElementID = R1.ElementID1
AND R1.ElementTypeID = 40
AND C2.ElementTypeID = 3080
AND R2.ElementTypeID = 38
AND B.ElementID = R3.ElementID2
AND R3.ElementTypeID = 3122
)
WHERE
A.ElementTypeID = 9
AND B.ElementTypeID = 17
The Hybrid Approach
If you MUST have some of the dynamic aspects of an EAV database, you could consider creating the metadata as if you had such a database, but actually using the supertype/subtype design pattern underneath. Yes, you would have to create new tables and add, remove, and modify columns. But with the proper pre-processing (like I did with my EAV database's Auto views) you could have real table-like objects to work with. Only, they wouldn't be as gnarly as mine, and the query optimizer could push predicates down to the base tables (read: perform well with them). There would be just one join between the supertype table and the subtype table. Your application could read the metadata to discover what it is supposed to do (or it can use the auto-generated views in some cases). This protects your application code from having to be touched extensively just to add or modify things.
Or, if you had a multi-level set of subtypes, just a few joins. By multi-level I mean when some subtypes share common columns, but not all, you could have a subtype table for those that is itself a supertype of a few other tables. For example, if you are storing information about Servers, Routers, and Printers, an intermediate subtype of "IP Device" could make sense.
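A generated view in that hybrid scheme could be nothing more than a straight join over real tables (a sketch only; all names here are invented):

```sql
-- what an auto-generated subtype view might look like when the attribute
-- data lives in a real subtype table instead of an EAV Value table
CREATE VIEW dbo.AutoDoohickey
AS
SELECT
I.ItemID,
I.ItemTypeID,
I.Name,
D.WidgetCount,
D.SprocketSize
FROM dbo.Item I
INNER JOIN dbo.Doohickey D ON I.ItemID = D.ItemID
```

Because this is an ordinary join, WHERE conditions against the view push straight down to indexes on the base tables, which is exactly what the PIVOTed EAV views above cannot do.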
I will give the caveat that I haven't yet built the kind of hybrid supertype/subtype, EAV-metadata-decorated database I'm suggesting here and tried it out in the real world. But the problems I've experienced with EAV are not small, and doing something about them is probably a must if your database is going to be large and you want good performance without crazy-expensive gigantic hardware.
In my opinion, the time spent automating the creation, use, and modification of real subtype tables would ultimately be time well spent. Data-driven flexibility is what makes EAV sound so attractive (and believe me, I love that when someone asks me for a new attribute on an element type, I can add it in about 18 seconds and they can immediately start entering data on the web site). But flexibility can be accomplished in more than one way! Pre-processing is another way to do it. It's a powerful method that few people use, giving the benefits of being totally data-driven with the performance of being hard-coded.
(Note: Yes those views really are formatted like that and the PIVOT ones really do have update triggers. :) If someone is really that interested in the awful painful details of the long and complicated UPDATE trigger, let me know and I'll post a sample for you.)
And One More Idea
Put all your data in one table. Give the columns generic names, then reuse/abuse them for multiple purposes, and create views over them to give them sensible names. Add columns (and update your views) when no unused column of a suitable data type is available. Despite the length I've gone on about subtype/supertype, this may be the best way.
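A minimal sketch of that idea (every name and type code here is invented for illustration):

```sql
-- one wide table with deliberately generic, reusable columns
CREATE TABLE Thing (
ThingID int not null primary key,
ThingTypeID int not null,
Int1 int null,
Int2 int null,
Varchar1 nvarchar(256) null,
Varchar2 nvarchar(256) null,
Date1 datetime null
)

-- one view per type gives the generic columns sensible names
CREATE VIEW dbo.Server_V
AS
SELECT
ThingID ServerID,
Varchar1 HostName,
Int1 CpuCount,
Date1 CommissionedDate
FROM Thing
WHERE ThingTypeID = 1  -- 1 = Server (invented type code)
```

The price is that the base table's column names mean nothing on their own, so the views become mandatory documentation; the benefit is that every "subtype" lives in one indexed table with zero joins.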
Here are four approaches that each have advantages and disadvantages depending on what the current bottleneck is.
Normalize by adding a surrogate key column
--Sql Server syntax (My Oracle knowledge seems to have
-- rusted due to lack of use)
CREATE TABLE orders (
order_id int not null primary key,
order_date DateTime not null,
source_code char(10) not null,
amount decimal(10,2) not null
)
SELECT
order_id,
CASE
WHEN source_code LIKE '%A11' THEN 'Acme 2011'
WHEN source_code LIKE '%A12' THEN 'Acme 2012'
END advertising_campaign
FROM orders
Transforms into:
CREATE TABLE orders (
order_id int not null primary key,
order_date DateTime not null,
source_code_key int not null,
-- source_code char(10) not null,
amount decimal(10,2) not null
)
CREATE TABLE source_codes (
source_code_key int identity(1,1) not null primary key,
advertising_campaign varchar(20) not null )
INSERT INTO source_codes (advertising_campaign)
SELECT DISTINCT
CASE
WHEN source_code LIKE '%A11' THEN 'Acme 2011'
WHEN source_code LIKE '%A12' THEN 'Acme 2012'
END advertising_campaign
FROM orders
SELECT
order_id,
source_codes.advertising_campaign
FROM orders INNER JOIN source_codes
ON orders.source_code_key = source_codes.source_code_key
This approach is the cleanest from a pure dimensional modeling perspective. The logic in your CASE statements would be applied in an ETL process, and the size on disk is reduced. In SQL Server the CPU overhead increases, because a join is typically more CPU-intensive than a CASE statement (I assume this is also true for Oracle). If I/O is the bottleneck, then this is a trade-off worth making and is the preferred approach.
Join on Business Key to Lookup table
CREATE TABLE orders (
order_id int not null primary key,
order_date DateTime not null,
source_code char(10) not null,
amount decimal(10,2) not null
)
CREATE TABLE source_codes (
source_code char(10) not null primary key,
advertising_campaign varchar(20) not null )
INSERT INTO source_codes (source_code, advertising_campaign)
SELECT DISTINCT
source_code,
CASE
WHEN source_code LIKE '%A11' THEN 'Acme 2011'
WHEN source_code LIKE '%A12' THEN 'Acme 2012'
END advertising_campaign
FROM orders
SELECT
order_id,
source_codes.advertising_campaign
FROM orders INNER JOIN source_codes
ON orders.source_code = source_codes.source_code
This approach requires one row in source_codes for each distinct source_code. Size on disk increases due to the source_codes table, and disk I/O stays the same or increases depending on whether the source_codes table fits in memory. It makes sense only when the queries are CPU-bound due to horrendously complex logic in the CASE statement.
Keep Existing Schema
If the bottleneck is the CPU and the cost of computing the CASE statements is less than the cost of joining to another table, then keeping your existing queries may be the best bet.
Keep Existing Schema + Move CASE logic into client application/reporting tool
This approach makes sense if the bottleneck is network bandwidth to the client, or if the CPU load is too high with either the JOIN or the CASE statements. By moving the complex CASE logic off the server you can distribute the CPU load. This would be the most intrusive solution and should be considered a last resort.
Best Answer
You could look at using Entity-Attribute-Value design.
This consists of a table with three columns, one for the user id, one for the type of attribute, and one for the attribute value.
With this design, you can have an unlimited number of attributes without needing to implement new columns for new attributes.
Many architects will say this design cannot scale well, and they'd mostly be right; however, if implemented correctly, without going overboard on the number of attributes, and with proper indexing and efficient column types, it can work well.
For instance:
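A bare-bones version of such a table might look like this (the table and column names are illustrative, not prescribed):

```sql
CREATE TABLE user_attributes (
user_id int not null,
attribute_type varchar(50) not null,   -- e.g. 'favorite_color'
attribute_value varchar(255) not null, -- e.g. 'blue'
PRIMARY KEY (user_id, attribute_type)  -- one value per attribute per user
)

-- adding a brand-new attribute requires no schema change at all:
INSERT INTO user_attributes (user_id, attribute_type, attribute_value)
VALUES (42, 'favorite_color', 'blue')
```

The composite primary key doubles as the index that makes "fetch all attributes for this user" and "fetch this one attribute" both cheap lookups.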