First of all, what type of security model do you plan to implement: Role-Based Access Control (RBAC) or Discretionary Access Control (DAC)?
RBAC: In the Role-Based Access Control (RBAC) model, access to resources is based on the role assigned to a user. In this model, an administrator assigns a user to a role that has certain predetermined rights and privileges. Because of the user's association with the role, the user can access certain resources and perform specific tasks. RBAC is also known as Non-Discretionary Access Control. The roles assigned to users are centrally administered.
DAC: In the Discretionary Access Control (DAC) model, access to resources is based on the user's identity. A user is granted permissions to a resource by being placed on an access control list (ACL) associated with that resource. An entry in a resource's ACL is known as an Access Control Entry (ACE). When a user (or group) owns an object in the DAC model, that user can grant permissions to other users and groups. The DAC model is based on resource ownership.
1) In RBAC: you need the ElementType table to assign rights to roles (users are assigned to roles). RBAC answers the question "What can this role/user do?": an administrator assigns rights and permissions to roles, and assigns users to roles to access resources.
2) In DAC: users and roles have rights to elements via an access control list (ownership). DAC answers the question "Who has access to my data?": the user who owns a resource grants permissions to it.
Anyway, I suggest this data model:
CREATE TABLE ElementType
(
Id (PK)
Name
...
)
CREATE TABLE ElementBase
(
Id (PK)
Type (FK to ElementType)
...
)
(one-to-one relationship)
CREATE TABLE Element_A
(
Id (PK, FK to ElementBase)
...
)
CREATE TABLE Element_B
(
Id (PK, FK to ElementBase)
...
)
1) RBAC (many-to-many relationship)
CREATE TABLE ElementType_To_Role_Rights
(
RightId (PK)
RoleId (FK to Role)
ElementTypeId (FK to ElementType)
...
)
2) DAC (many-to-many relationship)
CREATE TABLE ElementBase_To_Actor_Rights
(
RightId (PK)
ElementBaseId (FK to ElementBase)
ActorId (FK to Actor)
...
)
CREATE TABLE Actor
(
Id (PK)
Name
)
CREATE TABLE User
(
Id (PK, FK to Actor)
Password
...
)
CREATE TABLE Role
(
Id (PK, FK to Actor)
...
)
Some issues to bring up:
For OLTP, inserts will be really slow in a table that wide
You will be wasting a lot of space by repeating redundant information
Columnstore is a non-modifiable index type so you can't use it in an OLTP environment
You greatly complicate referential integrity controls this way; you can't simply add foreign keys to ensure fields receive valid values.
Indexing will be a nightmare
The real issue here is the developers not understanding design.
Keeping the client data in its native format is FINE. I do this kind of thing for a living, and I get tables with 500+ fields all the time. The way to handle it is to separate your RAW data from your BUILT data.
If the client gives you a massively wide table, you need to normalize it yourself to make a usable data set. There's nothing stopping you from creating a process that breaks out that data into appropriate tables.
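As a sketch of that RAW-versus-BUILT split, the following uses SQLite to unpivot a hypothetical wide client table into normalized tables. The raw_clients table, its columns, and the phone "kinds" are all made-up examples standing in for whatever the client actually delivers.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# RAW: the table exactly as the (hypothetical) client delivers it -- wide,
# with one column per repeating value.
cur.execute("""CREATE TABLE raw_clients (
    client_id INTEGER, client_name TEXT,
    phone_home TEXT, phone_work TEXT, phone_mobile TEXT)""")
cur.execute("INSERT INTO raw_clients VALUES (1, 'Acme', '555-1', '555-2', NULL)")

# BUILT: normalized tables with proper keys.
cur.executescript("""
CREATE TABLE client (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE client_phone (
    client_id INTEGER REFERENCES client(id),
    kind TEXT, number TEXT);
""")

# The build process: break the wide row out into rows, skipping NULL columns.
cur.execute("INSERT INTO client SELECT client_id, client_name FROM raw_clients")
for kind in ("home", "work", "mobile"):
    cur.execute(f"""INSERT INTO client_phone
        SELECT client_id, '{kind}', phone_{kind}
        FROM raw_clients WHERE phone_{kind} IS NOT NULL""")

cur.execute("SELECT kind, number FROM client_phone ORDER BY kind")
phones = cur.fetchall()
print(phones)  # [('home', '555-1'), ('work', '555-2')]
```

The RAW table stays untouched as the system of record for what the client sent; queries and constraints live on the BUILT tables.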
Best Answer
In general, a good RDBMS will handle both solutions the same way, and you should not see performance differences. However, you do need to test with your data load on your (R)DBMS. MySQL in particular does not always follow the relational rules/mindset, and your results might therefore vary.
There are two main differences that I can think of:
You cannot declare referential integrity constraints across databases.
If you split the data in several databases, restore from a backup might require a smaller downtime if you can restore the most important database first and bring up the application partially functioning. (This requires the App to be written in a way that it can gracefully handle partial database availability.) You could for example keep one month worth of data in a small active database and move older data to a larger archive database that also could live on cheaper (read: slower) storage. However, depending on your RDBMS, you might be able to achive the same without spreading across multiple databases. With SQL Server for example you can implement a similar solution within a single database by separating tables in different file groups. SQL Server allows a restore to happen one file group at a time. Once a file group is restored you can access its data. Accessing a table living on a not-yet-restored file group will fail with an error comparable to the one you get when accessing a normal database that has not yet finished restoring.