I would also recommend mysqldump, with an additional twist: place the dump file under version control at regular intervals, ideally whenever a change is complete. That way you can also track changes to the database, which makes reconstruction easier.
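For example, a nightly cron entry along these lines would automate the dump-and-commit. This is only a sketch: the schedule, the paths, and the database name mydb are all placeholders, and /backups is assumed to already be a git repository.

```
# crontab fragment: dump at 02:00 each night, then commit the dump to git
0 2 * * * mysqldump --single-transaction mydb > /backups/mydb.sql && cd /backups && git add mydb.sql && git commit -q -m "nightly dump"
```

A `git log -p` on the dump file then shows exactly what changed in the database between any two nights.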
I concur with your suspicion that it's bad design practice, and I'd recommend against it. Any time you do this sort of thing, it almost inevitably comes back to bite you, and the performance difference compared to a properly designed schema should be small.
You say that as it stands now, your process "would involve searching through the first table for all question_ids answered by the user, then searching the topics_and_questions table" but that's not really a good analysis of what should happen behind the scenes.
Properly designed, this process would involve checking an index on the first table for all questions answered by the user, joining to the second table to find the topics of the questions (and possibly a third table to find the names of the topics), but all in one query and with no full table scans.
If your first table (no table name was provided) has (user_id, question_id) as its primary key (as it probably should), then finding the questions answered by each user will be a fast operation, no matter how many records the table holds. Rows in InnoDB are stored in primary key order, so there's no "searching through" the entire table: the storage engine can go directly to that user's records without a full table scan. If (user_id, question_id) isn't the primary key, then adding an index on (user_id, question_id) will optimize this part of the query.
In topics_and_questions, either (question_id) or (question_id, topic_id) should probably be the primary key, depending on whether a question can belong to multiple topics. If it can, of course, then counting questions answered becomes more complicated.
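To make the single-query approach concrete, here is a small sketch using SQLite (via Python's sqlite3) as a stand-in for MySQL/InnoDB. The first table is called answers purely for illustration, since no name was given in the question, and all the IDs are made up; topics_and_questions uses the composite primary key discussed above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# "answers" is a hypothetical name for the first table (none was given);
# the (user_id, question_id) primary key doubles as the index we need.
cur.executescript("""
CREATE TABLE answers (
    user_id     INTEGER NOT NULL,
    question_id INTEGER NOT NULL,
    PRIMARY KEY (user_id, question_id)
);
CREATE TABLE topics_and_questions (
    question_id INTEGER NOT NULL,
    topic_id    INTEGER NOT NULL,
    PRIMARY KEY (question_id, topic_id)
);
""")

cur.executemany("INSERT INTO answers VALUES (?, ?)",
                [(1, 10), (1, 11), (1, 12), (2, 10)])
cur.executemany("INSERT INTO topics_and_questions VALUES (?, ?)",
                [(10, 100), (11, 100), (12, 200)])

# One query, no full table scan: the primary key locates user 1's
# answers directly, then the join finds and counts topics.
rows = cur.execute("""
    SELECT tq.topic_id, COUNT(*) AS answered
    FROM answers a
    JOIN topics_and_questions tq ON tq.question_id = a.question_id
    WHERE a.user_id = ?
    GROUP BY tq.topic_id
    ORDER BY tq.topic_id
""", (1,)).fetchall()
print(rows)   # [(100, 2), (200, 1)]
```

The same query shape works in MySQL; EXPLAIN there should show an index lookup on the answers table rather than a scan.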
Since you appear to allow a user to answer the same question more than once, there is yet another issue to consider: every time you update this new proposed table, you have to duplicate part of the work you're trying to avoid. You have to check whether the question just answered has been answered before, and only increment the counter if it hasn't, I would assume.
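A sketch of that check-then-increment step, again using SQLite via Python as a stand-in; the answered_counts table and its columns are hypothetical names for the proposed counter table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE answers (
    user_id     INTEGER NOT NULL,
    question_id INTEGER NOT NULL,
    PRIMARY KEY (user_id, question_id)
);
-- hypothetical counter table from the proposed design
CREATE TABLE answered_counts (
    user_id  INTEGER PRIMARY KEY,
    answered INTEGER NOT NULL
);
""")

def record_answer(user_id, question_id):
    # Check whether this user already answered this question...
    first_time = cur.execute(
        "SELECT 1 FROM answers WHERE user_id = ? AND question_id = ?",
        (user_id, question_id)).fetchone() is None
    cur.execute("INSERT OR IGNORE INTO answers VALUES (?, ?)",
                (user_id, question_id))
    # ...and only bump the counter if not.
    if first_time:
        cur.execute("""
            INSERT INTO answered_counts (user_id, answered) VALUES (?, 1)
            ON CONFLICT(user_id) DO UPDATE SET answered = answered + 1
        """, (user_id,))

record_answer(1, 10)
record_answer(1, 10)   # same question again: counter stays put
record_answer(1, 11)
count = cur.execute(
    "SELECT answered FROM answered_counts WHERE user_id = 1").fetchone()[0]
print(count)   # 2
```

This is exactly the kind of extra bookkeeping the counter table forces on every write, which is part of why the denormalization tends not to pay off.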
Best Answer
Just my 2 cents:
You could set up a MySQL master-slave replication, the master being inside the internal network and the slave on the external site.
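A minimal sketch of the configuration for classic binlog replication, assuming MySQL on both ends; the server IDs and the wikidb database name are placeholders:

```ini
# master, inside the internal network (my.cnf)
[mysqld]
server-id    = 1
log-bin      = mysql-bin
binlog-do-db = wikidb        ; hypothetical wiki database name

# slave, on the external site (my.cnf)
[mysqld]
server-id = 2
read-only = 1
```

The slave is then pointed at the master with CHANGE MASTER TO ... and started with START SLAVE; see the MySQL replication documentation for the replication user grants and the initial data snapshot.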
You could also do a nightly mysqldump of the wiki, sftp it to the external site via a cron job, and restore it there, but that strikes me as an ugly and clunky solution.
Perhaps the simplest thing would be to give up mirroring and have the wiki on the external site only, so it would be accessible in case of internal network downtime.