I have sensitive information in multiple databases on the MySQL server. I need to record which user has inserted or updated a record, and in case of an update we should be able to see the previous and the new values.
Is it possible in MySQL?
mysql mysql-5.5
According to your question, you use

select * from MyTable where business_key = 1234 and parent = id;

to get the last record.

There is a caveat you need to be aware of with regard to the index. If a particular business_key has 1000 records, that query will do an index range scan through all 1000 key entries. Since you know that parent = id indicates the last record, it would be plausible for you to change the query to take advantage of that. Why? The index would automatically have the ids sorted for any given business_key. With that in mind, please change two things:
select B.* from
(
    select MAX(id) id FROM MyTable
    where business_key = 1234
) A INNER JOIN MyTable B using (id);

Keep in mind that MIN(id) would be the original (the oldest) record.
ALTER TABLE MyTable ADD INDEX NewIndx (business_key,id);

If you have queries that use the WHERE clause WHERE business_key=1234 AND parent=999, then do not drop your old index. You may want to drop the old index on a staging or dev server first and test all your queries; if any of the queries get worse on the dev server, keep your old index.
Give it a Try !!!
To accomplish what you are wanting to do, it is possible to use the FEDERATED storage engine on both servers, in conjunction with triggers, to allow each server to update the other server's database.
This is not exactly a simple out-of-the-box solution, because it requires additional precautions: you must decide whether consistency or isolation tolerance is more important, and either allow queries to fail when the other server isn't available (more consistency) or use a CONTINUE HANDLER to suppress the errors (more isolation tolerance).
But here is an extremely simplified example.
Each server would have the identical configuration.
The local user table:
CREATE TABLE user (
username varchar(64) NOT NULL,
password varbinary(48) NOT NULL, /* encrypted of course */
PRIMARY KEY(username)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
A local table that is federated to the user table on the other server.
CREATE TABLE remote_user (
username varchar(64) NOT NULL,
password varbinary(48) NOT NULL, /* encrypted of course */
PRIMARY KEY(username)
) ENGINE=FEDERATED DEFAULT CHARSET=utf8 CONNECTION='mysql://username:pass@the_other_host:port/schema/user';
Selecting from remote_user on one server will retrieve the records from the other server, and insert/update/delete on that table will change data on the other server.
So, we create triggers to accomplish the purpose of updating the distant server. They are written as BEFORE triggers, the idea being that we don't want to do something to ourselves that we can't do to the other server. For example, if a username already exists on the other server but not here, we want the insert on the other server to throw an error that prevents us from creating the user here, as opposed to creating a user here with what would be a conflicting username. This is, of course, one of the tradeoff decisions you'll need to make.
DELIMITER $$
CREATE TRIGGER user_bi BEFORE INSERT ON user FOR EACH ROW
BEGIN
INSERT INTO remote_user (username,password) VALUES (NEW.username,NEW.password);
END $$
CREATE TRIGGER user_bu BEFORE UPDATE ON user FOR EACH ROW
BEGIN
UPDATE remote_user
SET username = NEW.username,
password = NEW.password
WHERE username = OLD.username;
END $$
CREATE TRIGGER user_bd BEFORE DELETE ON user FOR EACH ROW
BEGIN
DELETE FROM remote_user
WHERE username = OLD.username;
END $$
DELIMITER ;
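If you choose isolation tolerance instead, each trigger body can declare a CONTINUE HANDLER so that a failed federated statement does not abort the local change. A minimal sketch, shown here as a variant of the insert trigger above (the blanket SQLEXCEPTION handler is illustrative; you may prefer to handle specific error codes):

```sql
DELIMITER $$
CREATE TRIGGER user_bi BEFORE INSERT ON user FOR EACH ROW
BEGIN
  -- If the remote insert fails (e.g. the other server is unreachable),
  -- silently continue instead of aborting the local insert.
  DECLARE CONTINUE HANDLER FOR SQLEXCEPTION BEGIN END;
  INSERT INTO remote_user (username,password) VALUES (NEW.username,NEW.password);
END $$
DELIMITER ;
```

The cost of this variant is that the two servers can silently drift apart, so you would also need a reconciliation process.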
This is not a perfect solution and is not a high-availability solution, because it relies on solid connectivity between the two systems and even if you are using InnoDB and transactions, the actions you take against the target table are not part of your local transaction and cannot be rolled back.
I use the FEDERATED engine quite a bit; it comes in handy for a number of creative purposes in my environment, including one situation where I used a federated query launched by a trigger to impose foreign key constraints against a foreign data source. However, I restrict its use to back-end processes, where unexpected issues such as timeouts, coding errors, or server-to-server network/outage/isolation events cannot result in the end user on one of our web sites experiencing any kind of problem. Your ability to tolerate such a situation would be a major determining factor in whether this is an appropriate solution.
An alternative would be to configure your two servers in master/master replication. For this, you would need to use different database names on each server, so that for most events that replicate, the two servers could not possibly conflict with each other. In the worst-case scenario, if you lose connectivity or encounter a replication error, the two sites would still be running independently and you could resynchronize and recover. Configuration would look something like this:
database_a: database for site A
database_b: database for site B
database_c: database for only the shared table(s)
Then, in database_a and database_b:
CREATE ALGORITHM=MERGE SQL SECURITY INVOKER VIEW user AS SELECT * FROM database_c.user;
MySQL will treat database_a.user and database_b.user as aliases for the "real" user table, database_c.user, so you would not have to change your application other than to use its designated database (i.e., you wouldn't have to configure it to understand that the user table was actually in a different schema, because the view will function pretty much transparently in this configuration). If the schemas have foreign keys against the user table, you would declare those against the true base table, database_c.user.
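As an illustration, a hypothetical child table in site A's schema (the orders table and its columns are made up for this example) would declare its foreign key against the base table, not the view:

```sql
CREATE TABLE database_a.orders (
  id       INT UNSIGNED NOT NULL AUTO_INCREMENT,
  username VARCHAR(64)  NOT NULL,
  PRIMARY KEY (id),
  -- Reference the real table; a foreign key cannot point at a view.
  CONSTRAINT fk_orders_user FOREIGN KEY (username)
    REFERENCES database_c.user (username)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
```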
Configure the two servers to replicate everything, but set auto_increment_increment and auto_increment_offset appropriately so you do not have conflicting auto-increment values on the shared table(s), if your tables use auto-increment. (Note: the documentation says that these variables are for NDB tables only, but that's not accurate.)
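A minimal sketch of the offsets for two servers, shown here as runtime SET GLOBAL statements (you would normally also persist them in each server's my.cnf so they survive a restart):

```sql
-- On server A: generates ids 1, 3, 5, ...
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset    = 1;

-- On server B: generates ids 2, 4, 6, ...
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset    = 2;
```

With the increment equal to the number of servers and a distinct offset per server, the two sides can never generate the same auto-increment value.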
An extra advantage of this setup is that your two servers would then have a complete duplicate of the other site's data that you could potentially use to your advantage for recovery from hardware failure in one of the servers.
Best Answer
Usually for this purpose, I add a column to the table identifying the end user who updated it, and each client populates it when a row is inserted or updated.

But if you want to track every change, I also add two DATETIME(6) columns to the table, called StartDate and EndDate (use DATETIME for MySQL versions before 5.6, which do not support microseconds).

EndDate is added to the primary key (preferably at the end).

When I insert a row, I put SYSDATE(6) in StartDate and '9999-12-31' in EndDate (year 9999; yes, it is hardcoded, but if your application is still active at that time, it won't be your problem).

When I want to update an existing row, I first close it with a query like update MyTable set EndDate=sysdate(6), UpdatedBy='currentuser' where key=somekeyvalue and EndDate='9999-12-31', then I insert a row with the new values as explained earlier.

When I look for the current values, I always add and EndDate='9999-12-31' to the query.

When I look for the values applicable on a certain date, e.g. 2017-02-01, I query on a range of dates like this: select * from MyTable where key=somekeyvalue and StartDate<='2017-02-01' and EndDate>'2017-02-01'
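Putting the pieces above together, a sketch of the whole pattern (the table, column, and value names other than StartDate, EndDate, and UpdatedBy are made up for this example):

```sql
-- Hypothetical versioned table: EndDate is part of the primary key,
-- so several versions of the same business_key can coexist.
CREATE TABLE MyTable (
  business_key INT          NOT NULL,
  some_value   VARCHAR(100) NOT NULL,
  UpdatedBy    VARCHAR(64)  NOT NULL,
  StartDate    DATETIME(6)  NOT NULL,
  EndDate      DATETIME(6)  NOT NULL,
  PRIMARY KEY (business_key, EndDate)
) ENGINE=InnoDB;

-- Insert: the open row gets the sentinel EndDate
INSERT INTO MyTable VALUES (1234, 'initial', CURRENT_USER(), SYSDATE(6), '9999-12-31');

-- Update: close the open row, then insert the new version
UPDATE MyTable SET EndDate = SYSDATE(6), UpdatedBy = CURRENT_USER()
 WHERE business_key = 1234 AND EndDate = '9999-12-31';
INSERT INTO MyTable VALUES (1234, 'changed', CURRENT_USER(), SYSDATE(6), '9999-12-31');

-- Current value
SELECT * FROM MyTable WHERE business_key = 1234 AND EndDate = '9999-12-31';

-- Value as it was on 2017-02-01
SELECT * FROM MyTable
 WHERE business_key = 1234
   AND StartDate <= '2017-02-01' AND EndDate > '2017-02-01';
```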