BCNF decomposition exercise

database-design, normalization, relational-theory

I have a relation R(A,B,C,G,H,I) with the set of functional dependencies (FDs) F = {A→B, A→C, CG→H, CG→I, B→H}. I want to decompose it into BCNF.

So AG is the only candidate key. I consider the first FD, A→B. Since A is not a superkey, this FD violates BCNF; I compute the closure A⁺ = ABCH and decompose R into R1(A,B,C,H) and R2(A,G,I).
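The closure computation behind this step can be sketched in a few lines of Python (the `closure` helper and the encoding of F as pairs of sets are illustrative, not part of any library):

```python
def closure(attrs, fds):
    """Attribute closure: repeatedly apply any FD whose left side is covered."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

# F from the question, as (left side, right side) pairs
F = [({'A'}, {'B'}), ({'A'}, {'C'}), ({'C', 'G'}, {'H'}),
     ({'C', 'G'}, {'I'}), ({'B'}, {'H'})]
R = {'A', 'B', 'C', 'G', 'H', 'I'}

print(closure({'A'}, F))       # A+ = {A, B, C, H}: not a superkey
print(closure({'A', 'G'}, F) == R)  # AG+ = R, so AG is a candidate key
```

The decomposition then takes R1 = A⁺ and R2 = R minus (A⁺ − A), which gives exactly R1(A,B,C,H) and R2(A,G,I).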

Now R1 satisfies the FDs A→B and A→C, but B→H still violates BCNF (B is not a superkey of R1), so I decompose it further into R3(B,H) and R4(A,B,C).
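The violation in R1 can be confirmed with the same closure idea: B⁺ restricted to R1's attributes is only {B, H}, so B is not a superkey of R1 (the `closure` helper below is an illustrative sketch):

```python
def closure(attrs, fds):
    """Attribute closure: repeatedly apply any FD whose left side is covered."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

F = [({'A'}, {'B'}), ({'A'}, {'C'}), ({'C', 'G'}, {'H'}),
     ({'C', 'G'}, {'I'}), ({'B'}, {'H'})]
R1 = {'A', 'B', 'C', 'H'}

# B+ restricted to R1 is {B, H} != R1, so B -> H violates BCNF in R1
print(closure({'B'}, F) & R1)
```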

Now, my question is: how do I satisfy the other FDs CG→H and CG→I in R2, given that R2 contains neither C nor H?

Best Answer

In the relation R2(A,G,I), since AG is a candidate key of the original relation, a minimal cover of the dependencies that hold in it is {AG→I} (no other non-trivial dependency holds in R2).

Then, what happened to the dependencies CG→I and CG→H? They are lost in this decomposition (and if you start the decomposition from a dependency other than A→B, you will find that these or other dependencies are lost as well). From a practical point of view, this means that the constraints expressed by the lost dependencies cannot be enforced in the decomposed database.
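This loss can be checked mechanically with the standard dependency-preservation test: grow the closure of the FD's left side, but at each step restrict the intermediate result to one fragment's attributes. A sketch (function names are illustrative):

```python
def closure(attrs, fds):
    """Attribute closure: repeatedly apply any FD whose left side is covered."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def preserved(lhs, rhs, fds, decomposition):
    """Standard preservation test: grow lhs+ one fragment at a time."""
    result = set(lhs)
    changed = True
    while changed:
        changed = False
        for ri in decomposition:
            t = closure(result & ri, fds) & ri
            if not t <= result:
                result |= t
                changed = True
    return set(rhs) <= result

F = [({'A'}, {'B'}), ({'A'}, {'C'}), ({'C', 'G'}, {'H'}),
     ({'C', 'G'}, {'I'}), ({'B'}, {'H'})]
D = [{'B', 'H'}, {'A', 'B', 'C'}, {'A', 'G', 'I'}]  # R3, R4, R2

print(preserved({'A'}, {'B'}, F, D))       # True: A -> B survives
print(preserved({'C', 'G'}, {'I'}, F, D))  # False: CG -> I is lost
```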

On the other hand, it is a well-known fact that the classical decomposition algorithm for BCNF is not guaranteed to preserve dependencies, unlike the synthesis algorithm for 3NF, which is dependency-preserving.