What you describe is a variant of "Materialized Path"; see for example https://communities.bmc.com/docs/DOC-9902. I say variant because you have no separator token, but instead use a fixed number of positions for each level.
In Materialized Path you usually don't have an explicit parent/child relationship represented; instead it is encoded into the "path" string. To find sub-categories, use like as in your example. For ancestor categories you need some procedural part that splits the path string; the direct parent is the nearest ancestor. Direct children can be located with a combination of like and the length() function. Root and leaf predicates are easy to define as well.
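The operations above can be sketched like this, using SQLite from Python. The table, column names, and the two-digits-per-level encoding are assumptions based on the question, not a definitive schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table category (code varchar(8) primary key, name varchar(50))")
con.executemany("insert into category values (?, ?)",
                [("01", "Food"), ("0102", "Fruit"), ("010203", "Apples"),
                 ("010204", "Pears"), ("0105", "Vegetables")])

# All sub-categories (descendants) of '01': a prefix match with LIKE
subs = con.execute(
    "select code from category where code like '01' || '%' and code <> '01' order by code"
).fetchall()
print([c for (c,) in subs])      # ['0102', '010203', '010204', '0105']

# Ancestors of '010203': split the path string procedurally
code = "010203"
ancestors = [code[:i] for i in range(2, len(code), 2)]
print(ancestors)                 # ['01', '0102']; the direct parent is '0102'

# Direct children of '01': LIKE combined with length()
children = con.execute(
    "select code from category where code like '01' || '%' "
    "and length(code) = length('01') + 2 order by code"
).fetchall()
print([c for (c,) in children])  # ['0102', '0105']
```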
As a starting point: why do you want to add a parent-child relationship (if not for the operations described above)? If the answer is referential integrity, that is a bit backward, since you already have the domain at hand (your existing table). Adding a separate relation (table) to represent the parent/child relationship would look something like:
create table ...
( category varchar(...) not null
, constraint ...
foreign key ... references ...
, parent_category varchar(...) not null
, constraint ...
foreign key ... references ...
)
You could have a trigger populate this table after insert of a new "category", but it does not really protect you from anything; it would act more like a report of your encoded parent/child relation.
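Filled in concretely, the separate relation plus the after-insert trigger could look like this (SQLite syntax; every name here is hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
create table category (code varchar(8) primary key);

create table category_tree
( category        varchar(8) not null
, parent_category varchar(8) not null
, constraint category_tree_pk primary key (category)
, constraint category_fk foreign key (category)        references category (code)
, constraint parent_fk   foreign key (parent_category) references category (code)
);

-- Populate the relation whenever a non-root category is inserted;
-- the parent code is everything except the last two positions.
create trigger category_ai after insert on category
when length(new.code) > 2
begin
    insert into category_tree (category, parent_category)
    values (new.code, substr(new.code, 1, length(new.code) - 2));
end;
""")
con.execute("pragma foreign_keys = on")
con.execute("insert into category values ('01')")
con.execute("insert into category values ('0102')")
print(con.execute("select * from category_tree").fetchall())  # [('0102', '01')]
```

As the answer says, this mostly reports what the codes already encode; the foreign keys only guarantee that both codes exist, not that the encoding is consistent.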
The other option is to add a nullable attribute to your existing table. Your "top categories" would have null as a parent, and the others would have substr(code, 1, length(code) - 2) as a parent. This would be a mix between "adjacency list" and "materialized path". You would still have to protect against anomalies like ('01020304', NULL) and ('010203', '0202'); it is possible to add triggers that validate this.
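In SQLite (and most other engines) a check constraint is enough to rule out both anomalies, since the parent is fully determined by the code. A sketch with made-up names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
create table category
( code   varchar(8) primary key
, parent varchar(8) references category (code)
, check ( (parent is null     and length(code) = 2)
       or (parent is not null and parent = substr(code, 1, length(code) - 2)) )
)""")
con.execute("insert into category values ('01', null)")
con.execute("insert into category values ('0102', '01')")

# Both anomalies from the text are rejected by the check constraint:
for row in [("01020304", None), ("010203", "0202")]:
    try:
        con.execute("insert into category values (?, ?)", row)
        print("accepted", row)
    except sqlite3.IntegrityError:
        print("rejected", row)   # both rows are rejected
```

Note the explicit `is null` / `is not null` branches: a bare `parent = substr(...)` comparison against NULL evaluates to NULL, which a check constraint would let through.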
If you want to keep your current table as is, it is probably easiest to add validation code, either in triggers or in your application, that makes sure a "super-category" exists.
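Such a validation trigger could look like this (SQLite syntax; table and trigger names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
create table category (code varchar(8) primary key);

-- Reject a new code whose super-category (all but the last two
-- positions) is not already present in the table.
create trigger category_check before insert on category
when length(new.code) > 2
  and not exists (select 1 from category
                  where code = substr(new.code, 1, length(new.code) - 2))
begin
    select raise(abort, 'missing super-category');
end;
""")
con.execute("insert into category values ('01')")
con.execute("insert into category values ('0102')")       # ok: '01' exists
try:
    con.execute("insert into category values ('020304')")  # '0203' is missing
except sqlite3.IntegrityError as e:
    print(e)
```

A symmetric trigger on delete (rejecting removal of a code that still has sub-categories) would complete the picture.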
If you are willing to split the code into parts, you could have a look at "Nested Sets", described in the link I gave above, or at an additional transitive closure relation; see for example http://karwin.blogspot.se/2009/04/sql-antipatterns-strike-back-slides.html. A slight variation of the same method is described at http://dustbite.se/tree/. Note that both "Nested Sets" and "Transitive Closure" add significant overhead to what is probably the most common operation you do.
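For a feel of the closure approach: the extra relation stores every ancestor/descendant pair, which with your fixed-width codes can even be derived mechanically. A minimal sketch (names hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""create table category_paths
( ancestor   varchar(8) not null
, descendant varchar(8) not null
, primary key (ancestor, descendant)
)""")

def closure_rows(code, step=2):
    # every prefix of the code paired with the code itself,
    # including the self-pair (ancestor = descendant)
    return [(code[:i], code) for i in range(step, len(code) + 1, step)]

for code in ("01", "0102", "010203"):
    con.executemany("insert into category_paths values (?, ?)", closure_rows(code))

# All descendants of '01', without any string matching:
rows = con.execute(
    "select descendant from category_paths where ancestor = '01' order by descendant"
).fetchall()
print([d for (d,) in rows])   # ['01', '0102', '010203']
```

The overhead mentioned above is visible here: every insert, move, or delete must maintain one row per ancestor of the affected node.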
Personally, I would stick with what you have and add validation triggers that check that the codes stay consistent during modifications of the "tree".
Since Oracle XE is mentioned as an alternative DBMS in @Vérace's post, I'll just point out that other commercial vendors offer free versions as well: see Microsoft SQL Server 2014 Express (http://msdn.microsoft.com/en-us/evalcenter/dn434042.aspx) and DB2 Express-C (http://www-01.ibm.com/software/data/db2/express-c/download.html). DB2 does not have a space restriction like Oracle; I'm not familiar with the restrictions of SQL Server.
Do not store dates in a separate table unless a particular date carries additional information that you need to store. In general a date is a simple attribute of some other entity; it isn't an entity in itself unless you have a very peculiar application.
One exception might be a star-schema data warehouse, where a time dimension table might use the date as its primary key and carry attributes describing how that date rolls up, assuming some roll-ups are not trivial to calculate (like a fiscal reporting period or some such).
You can index a date column just as easily as an integer column, so there is nothing to be gained by abstracting out a date so that you can give it an integer alias value.
You can also put a date column in a WHERE clause so searching for records by date is not at all onerous to do.
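To illustrate both points, a date column can be indexed and filtered on directly, with no integer surrogate in sight (SQLite via Python; the table is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table orders (id integer primary key, order_date date not null)")
con.execute("create index orders_date_ix on orders (order_date)")  # indexed directly
con.executemany("insert into orders (order_date) values (?)",
                [("2024-01-15",), ("2024-01-16",), ("2024-02-01",)])

# No date table, no join: filter on the date itself.
rows = con.execute(
    "select id from orders where order_date >= '2024-01-16' order by id"
).fetchall()
print([i for (i,) in rows])   # [2, 3]
```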
Best Answer
What you are doing is called denormalization.
The rule of thumb is "normalize till it hurts, denormalize till it works." Denormalization should follow the access pattern of your queries: if a common query goes from joining 10 tables to just a few, at the cost of a small bit of duplication, go for it!
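As a toy illustration of the idea (the schema is entirely made up), duplicating the customer name onto the order row removes a join from the hot read path:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
create table customer (id integer primary key, name text not null);
-- Denormalized: customer_name duplicates customer.name so that the
-- common listing query reads a single table.
create table orders
( id            integer primary key
, customer_id   integer not null references customer (id)
, customer_name text    not null
, total         integer not null
);
insert into customer values (1, 'Alice');
insert into orders values (10, 1, 'Alice', 250);
""")

# The hot query touches one table instead of joining:
row = con.execute("select customer_name, total from orders where id = 10").fetchone()
print(row)   # ('Alice', 250)
```

The price, as always with denormalization, is that updates to `customer.name` must now also touch every matching order row.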