N.B. This has been edited since its original posting.
You may want to look at duplicating the database to a new name, provided you have all the required backups and archived logs to restore to the required point in time.
I am assuming that ASM is involved here; if not, you might need to look at the DUPLICATE command to see how to rename the files, as this is done automatically by ASM.
If you're using standard OS files rather than ASM, then you might need to include something like this in your DUPLICATE statement once you've connected to the target and auxiliary databases (see below):
run
{
SET NEWNAME FOR DATAFILE 1 TO '/oradata/instance/file1.dbf';
SET NEWNAME FOR DATAFILE 2 TO '/oradata/instance/file2.dbf';
<repeat for all the files in your Database>
DUPLICATE TARGET DATABASE TO newdb;
}
Now the theory should be:
Create a new pfile (from the existing database, renaming the various references to the database), making sure you change any log destinations and that the directories exist.
Start the new instance in a NOMOUNT state.
Then start RMAN:
rman
CONNECT TARGET sys@instance
CONNECT AUXILIARY /
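To make these steps concrete, here is a minimal sketch of the sequence; the SID newdb, the paths and the connect string are assumptions, so adjust them for your environment. Note that DB_FILE_NAME_CONVERT in the pfile is an alternative to the SET NEWNAME approach shown earlier:
# initnewdb.ora for the auxiliary instance (hypothetical paths)
db_name=newdb
control_files='/oradata/newdb/control01.ctl'
db_file_name_convert=('/oradata/instance/','/oradata/newdb/')
log_file_name_convert=('/oradata/instance/','/oradata/newdb/')
# start the auxiliary instance, then connect both databases in RMAN
export ORACLE_SID=newdb
sqlplus / as sysdba
SQL> startup nomount pfile='/oradata/newdb/initnewdb.ora';
SQL> exit
rman
RMAN> CONNECT TARGET sys@instance
RMAN> CONNECT AUXILIARY /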
Once the two connections are up, the DUPLICATE command to restore to a point in time should be as follows:
DUPLICATE TARGET DATABASE TO newdb UNTIL TIME "TO_DATE('20/07/2011 19:45:00','DD/MM/YYYY HH24:MI:SS')";
-- This should be prior to when you created the test table, so the database will be restored up to that point in time. In theory, at least.
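Putting those pieces together, a complete point-in-time duplicate for an OS-file database might look like the sketch below; the paths are placeholders, and the duplicated files need a new location if the copy lives on the same host:
run
{
SET NEWNAME FOR DATAFILE 1 TO '/oradata/newdb/file1.dbf';
SET NEWNAME FOR DATAFILE 2 TO '/oradata/newdb/file2.dbf';
SET UNTIL TIME "TO_DATE('20/07/2011 19:45:00','DD/MM/YYYY HH24:MI:SS')";
DUPLICATE TARGET DATABASE TO newdb;
}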
Another option I find useful is to resize the logfiles and groups:
DUPLICATE TARGET DATABASE TO newdb
LOGFILE
GROUP 1 ('/oradata/instance/redo01a.log',
'/oradata/instance/redo01b.log') SIZE 200K REUSE,
GROUP 2 ('/oradata/instance/redo02a.log',
'/oradata/instance/redo02b.log') SIZE 200K REUSE;
Or with ASM, something like:
DUPLICATE TARGET DATABASE TO newdb
LOGFILE
GROUP 1 ('+DATA',
'+FRA') SIZE 200K REUSE,
GROUP 2 ('+DATA',
'+FRA') SIZE 200K REUSE;
There are several other options you can specify; I believe you can duplicate to an SCN, to a log sequence, or until a date/time. You might need a TO_DATE conversion for the last option. Putting these together should help you get what you're after.
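For reference, the other UNTIL variants look something like this; the SCN and sequence values here are made-up examples:
DUPLICATE TARGET DATABASE TO newdb UNTIL SCN 1234567;
DUPLICATE TARGET DATABASE TO newdb UNTIL SEQUENCE 42 THREAD 1;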
If your database is a serious production database, it should be running in archivelog mode, no question about that. This is assuming that users are constantly doing transactions and that you don't want to lose them. If your database is only bulk-loaded once a day, an export dump might be a valid backup solution, provided you know how to use it to re-create a new database once you've lost the production database.
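If you do go the export route, a minimal Data Pump sketch; the DATA_PUMP_DIR directory object exists by default, but the file names are assumptions:
expdp system full=y directory=DATA_PUMP_DIR dumpfile=prod_full.dmp logfile=prod_full.log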
For now, log on as a database administrator on your server and switch to archivelog mode:
- sqlplus / as sysdba
- shutdown immediate;
- startup mount;
- alter database archivelog;
- alter database open;
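To confirm the switch worked, a quick check (the output will obviously vary):
SQL> archive log list
SQL> select log_mode from v$database;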
The database is now open for production and you can make online backups. This does not mean your database is out of the danger zone; there are still a few things that need to be set up to keep it running without problems.
The first thing that will happen is that the filesystem where your archived log files are written will fill up, causing a full halt of the database.
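One way to keep that under control is to size a fast recovery area and let RMAN clean up behind itself; the location and size below are arbitrary examples (size must be set before the destination):
SQL> alter system set db_recovery_file_dest_size=20G;
SQL> alter system set db_recovery_file_dest='/u01/fra';
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK;
With that policy in place, archived logs that have been backed up once to disk become eligible for automatic deletion when space is needed.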
Best advice: hire a DBA to do the job for you; it is easier to make a disaster than to prevent one, or even worse, to recover from one. Without the proper configuration, recovery can be close to impossible. There are lots of consultancy firms that offer ad-hoc services via some pre-paid system. Maybe giving the application to a service provider might be smarter. This all depends on what you do and what kind of protection against disasters you want.
Second best: start reading the docs, beginning with Oracle® Database 2 Day DBA.
Best Answer
Use RMAN to back up to the network share. An export is NOT a true backup, as you cannot perform a point-in-time recovery with it. XE comes with backup scripts that you can just schedule with AT or similar (the documentation covers the details). If you are running the database in archivelog mode, then the archived redo logs are your "continuous export", so you would ideally want to have those on separate disks from the DBFs, and hopefully mirrored. RMAN will back those up too. As you say you have a single disk, this is a judgement call you will need to make: I would say if the data is valuable enough to back up, it is valuable enough to run on server-grade hardware.
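For the single-disk XE case, a minimal sketch of an RMAN backup to the share; the UNC path is an assumption, and the Oracle service account needs write access to it:
rman target /
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '\\server\share\ora_%U.bkp';
RMAN> BACKUP DATABASE PLUS ARCHIVELOG DELETE INPUT;
Scheduling that through AT (or the Task Scheduler) gives you a regular backup that also sweeps the archived logs off the local disk once they are safely on the share.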