I currently have the following two tables in my database:

    CREATE TABLE folders (
      id       NUMBER NOT NULL,
      name     VARCHAR2(100) NOT NULL,
      bytes    NUMBER DEFAULT 0 NOT NULL,
      parentId NUMBER,
      CONSTRAINT pk_folders PRIMARY KEY (id),
      CONSTRAINT fk_folders_parent FOREIGN KEY (parentId) REFERENCES folders(id)
    );

    CREATE TABLE files (
      id       NUMBER NOT NULL,
      name     VARCHAR2(100) NOT NULL,
      bytes    NUMBER DEFAULT 0 NOT NULL,
      folderId NUMBER NOT NULL,
      CONSTRAINT fk_folders FOREIGN KEY (folderId) REFERENCES folders(id)
    );
There are also row-level triggers on the files table that update the bytes column of the containing folder whenever a row in files is inserted, updated, or deleted, so that each folder tracks the total size of its files. These triggers work fine and correctly keep each folder's bytes equal to the sum of the bytes of all files it contains. Here is the insert trigger as an example:

    CREATE OR REPLACE TRIGGER ari_files
      AFTER INSERT ON files
      FOR EACH ROW
    BEGIN
      UPDATE folders f
         SET f.bytes = f.bytes + :NEW.bytes
       WHERE f.id = :NEW.folderId;
    END ari_files;
I would also like the folders to update their parents, so that each folder holds the sum of its own files plus the files in all of its subfolders. The problem I am running into is that when I add a trigger on the folders table to propagate the change to the row identified by parentId, I get a mutating table error (ORA-04091), because a trigger on the folders table is itself updating the folders table (even though it is always a different row and there are no loops in the hierarchy):

    CREATE OR REPLACE TRIGGER aru_bytes_folders
      AFTER UPDATE OF bytes ON folders
      FOR EACH ROW
      WHEN (OLD.parentId IS NOT NULL)
    BEGIN
      UPDATE folders f
         SET f.bytes = f.bytes + (:NEW.bytes - :OLD.bytes)
       WHERE f.id = :OLD.parentId;
    END aru_bytes_folders;
Is there something that can be done to get this to work?
The closest I've gotten so far is a temporary table: row triggers populate it, and a statement-level trigger then reads from it and applies the update to each parent (which in turn fires more updates further up the hierarchy). The problem I ran into there was that the temporary table still contained rows from updates that had already been processed, so the database got stuck in a loop.
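From what I've read, an Oracle 11g compound trigger might be able to replace the temporary table, since the PL/SQL state it carries is re-initialized each time the trigger fires for a statement, so stale rows could not accumulate. This is an untested sketch of what I mean (it assumes 11g or later, and that folder ids fit in PLS_INTEGER):

```sql
CREATE OR REPLACE TRIGGER aru_bytes_folders
  FOR UPDATE OF bytes ON folders
  COMPOUND TRIGGER

  -- Per-statement scratch state: parent folder id -> accumulated byte delta.
  TYPE delta_t IS TABLE OF folders.bytes%TYPE INDEX BY PLS_INTEGER;
  deltas delta_t;

  AFTER EACH ROW IS
  BEGIN
    -- Only record the change; do not touch the mutating table here.
    IF :OLD.parentId IS NOT NULL THEN
      IF deltas.EXISTS(:OLD.parentId) THEN
        deltas(:OLD.parentId) := deltas(:OLD.parentId) + (:NEW.bytes - :OLD.bytes);
      ELSE
        deltas(:OLD.parentId) := :NEW.bytes - :OLD.bytes;
      END IF;
    END IF;
  END AFTER EACH ROW;

  AFTER STATEMENT IS
    i PLS_INTEGER;
  BEGIN
    -- Apply the accumulated deltas; each UPDATE fires this trigger again
    -- for the next level up, with a fresh (empty) deltas collection.
    i := deltas.FIRST;
    WHILE i IS NOT NULL LOOP
      UPDATE folders SET bytes = bytes + deltas(i) WHERE id = i;
      i := deltas.NEXT(i);
    END LOOP;
    deltas.DELETE;
  END AFTER STATEMENT;

END aru_bytes_folders;
```

I'm not certain whether the recursive firing from the AFTER STATEMENT section is safe here, or whether it runs into the same problem.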