As per Information Center:
"If this option is specified when restoring from an online backup
image, error SQL2537N will be returned."
So you are left with rolling forward. That said, you could try the following:
db2 rollforward db mydb to end of backup and complete
According to the information for rollforward command
END OF BACKUP
Specifies that all partitions in the partitioned database should be rolled forward to the minimum recovery time.
This should restore whatever it is you are attempting to restore up to the end of the backup, without rolling forward through any transaction logs written after the backup completed. Note that when you use archived logs, you normally still have to issue at least this statement even for a restore from an offline backup.
I personally haven't tried nor had the need to do an online and only rollforward to the end of the backup. Give it a shot and let us all know what happens.
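Putting that together, the full sequence might look something like this sketch (the database name mydb, the backup directory /dbbackup, and the timestamp are all placeholder values; substitute your own):

```shell
# Restore from the online backup image; the timestamp comes from
# "db2 list history backup all for mydb" (20230115120000 is a placeholder)
db2 restore db mydb from /dbbackup taken at 20230115120000

# Roll forward only to the minimum recovery time and complete,
# ignoring any logs written after the backup finished
db2 "rollforward db mydb to end of backup and complete"
```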
EDIT: To answer your questions below: You must have archival logging
enabled in order to take online backups. You can still take offline
backups with archival logging. You do not have to switch back to
circular to do so. However, once you have archival logging enabled,
regardless of whether you are restoring from an offline or an online
backup, you must perform a rollforward when you are done. No way
around it. See my own previous question on this.
In order to set up archival logging, you will need to alter the database configuration parameter LOGARCHMETH1. By default, this is set to OFF, which means circular logging. Your choices for it are as follows:
DISK
TSM
USEREXIT
VENDOR
LOGRETAIN
LOGRETAIN tells DB2 to leave the archived logs in the logging directory. It will not clean up any of these logs for you (leaving you as the DBA to do so). USEREXIT leaves it up to a user exit program (usually hand coded in C) to take care of storing and archiving your logs; not too many people use this anymore, from what I understand. DISK and TSM seem to be used a fair amount. DISK tells DB2 where you want to archive your logs to (although with a few other parameters set, DB2 can automatically clean up logs that are no longer needed; I'll show you that below). TSM tells DB2 to let IBM's Tivoli Storage Manager archive off the logs. There are a few other settings you have to get into for that to work; where I work, we haven't tried that yet. And lastly, VENDOR is used to have a third-party vendor product archive off your logs.
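You can check which method is currently in effect by looking at the database configuration (mydb is a placeholder database name):

```shell
# Show the current log archive method; OFF means circular logging
db2 get db cfg for mydb | grep -i logarchmeth
```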
To set up archival logging off to a different disk location and have DB2 automatically clean things up for you, you can do the following (note: these are Unix paths):
db2 connect to mydb user <admin user> using <admin user password>
#NOTE THE double and single quotes needed on the command line in Unix
db2 "update db cfg using logarchmeth1 'DISK:/dbarch/dblogs'"
db2 update db cfg using num_db_backups 2
db2 update db cfg using rec_his_retentn 2
db2 update db cfg using auto_del_rec_obj on
db2 terminate
This will tell DB2 to keep 2 full backups for at least two full days. Once two backups have been kept for two days and another one is taken, the oldest backup will be deleted by DB2. DB2 will also then clean up any logs that fall between the now-deleted backup and the next oldest retained backup.
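You can verify what DB2 is retaining by checking the recovery history file, for example:

```shell
# Show all backup entries recorded in the recovery history file
db2 list history backup all for mydb
```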
Also note: once you switch to archival logging, DB2 will require you to immediately take an offline backup as your next action (i.e., the database will be placed into BACKUP PENDING state) and you will be unable to do anything else until you do so.
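For example, to clear the BACKUP PENDING state you might take the offline backup along these lines (mydb and /dbbackup are placeholders; adjust to your environment):

```shell
# Force off any remaining connections, deactivate, then take the
# required offline backup
db2 force application all
db2 deactivate db mydb
db2 backup db mydb to /dbbackup
```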
Hope this helps!
The message you include from your db2diag.log shows that there is a potential space problem (i.e. the file system holding your data is filling up). The monitoring data here shows that you are ranging between 89% and 94% over a 90 minute period with increases and decreases – this is probably associated with system temporary tables in the database. It may be worthwhile to increase the size of the filesystem(s) holding the data since you're getting very close to running out of space.
This space utilization, however, has nothing to do with how much I/O activity (IOPS) the database is generating.
Generally, an I/O problem shows up as a usage pattern that is much different from normal. You mention that the system normally generates 300 IOPS, but lately it's been generating 5000 IOPS.
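To confirm which devices that I/O is landing on, it can help to watch I/O at the OS level first (a generic Unix example, not DB2-specific; column names vary slightly between platforms):

```shell
# Sample extended device statistics every 5 seconds; the read/write
# operations-per-second columns show which device is driving the IOPS
iostat -x 5
```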
Assuming that the I/O activity can be traced to the file system(s) holding DB2 data, then you need to monitor the database during these periods of high I/O to find out what queries are active and causing lots of physical I/O, and then do some investigation to figure out why. You can use the db2top utility, as well as many other monitoring tools, to figure this out.
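For instance, db2top can be started interactively against the database, and a query against the MON_GET_PKG_CACHE_STMT table function (available in DB2 9.7 and later) can rank statements by physical reads (mydb is a placeholder; column names are from the DB2 monitor functions):

```shell
# Interactive monitoring
db2top -d mydb

# Top 10 statements by physical reads (DB2 9.7+ monitor functions)
db2 "SELECT POOL_DATA_P_READS + POOL_INDEX_P_READS AS PHYS_READS,
            SUBSTR(STMT_TEXT, 1, 80) AS STMT
     FROM TABLE(MON_GET_PKG_CACHE_STMT(NULL, NULL, NULL, -2)) AS T
     ORDER BY PHYS_READS DESC
     FETCH FIRST 10 ROWS ONLY"
```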
At that point it's a tuning exercise. Are your bufferpools too small? Was an index dropped? Or is the cause a new query that needs to be reviewed / tuned?
Since you are indeed interested in transaction log sizing, not performance, you will get better information by observing the actual log usage rather than the number of transactions.
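Log usage can be observed, for example, with db2pd or with the MON_GET_TRANSACTION_LOG table function (available in DB2 10.1 and later; mydb is a placeholder database name):

```shell
# Snapshot of current log space usage
db2pd -db mydb -logs

# Current log usage via the monitor function (DB2 10.1+)
db2 "SELECT TOTAL_LOG_USED, TOTAL_LOG_AVAILABLE
     FROM TABLE(MON_GET_TRANSACTION_LOG(-2)) AS T"
```

Sampling these values across a busy period gives a much better basis for sizing the log space than transaction counts alone.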