The original post describes a classic deadlock scenario with a twist, namely a single DELETE ... WHERE statement that acquires 2 different kinds of locks.
Recommendation: change the table from the All Page Lock (APL) scheme to the Data Row Lock (DRL) scheme (see the sketch after this list), because:
- the clustered index will be replaced by a placement index, separate from the data pages;
- index access / updates will use latches rather than logical locks;
- a row-level deadlock conflict is less probable than a page-level one.
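A minimal sketch of the change (the table name is hypothetical; note that converting the lock scheme rebuilds the table, so allow for the time and space that takes):

    -- switch a hypothetical table from allpages (APL) to datarows (DRL) locking
    alter table my_table lock datarows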
Both SPIDs are executing the same sproc with the 1st DELETE ... WHERE statement demanding 2 different kinds of locks:
- initially an UPDATE lock on a page when scanning for a row to delete (an UPDATE lock is compatible with shared locks only, not with another UPDATE lock or an exclusive lock);
- once a target row is found on a page, the UPDATE lock must be upgraded to an exclusive lock before deleting that row.
The DEADLOCK events posted in your original question confirm the scenario (a note on capturing such events follows the list):
- SPID 134 holds an exclusive lock on a page while waiting to acquire an UPDATE lock on another page, i.e. it has already deleted at least one row and is continuing to scan for another row to delete;
- SPID 166 blocks the first SPID from acquiring an UPDATE lock on a page, while itself waiting to apply an exclusive lock to another page on which SPID 134 already holds an exclusive lock (after deleting 1 row or more).
Yes, it is a good idea if the DB is spread across different devices (which ideally would sit on different volumes): the fraction of the database hit by a corruption would be smaller. However, to restore you still have to restore the whole DB, which with separate devices and volumes can be faster (I/O is spread, so there is less contention).
No. You have to recover the DB(s) that the corrupted device holds. So if you have a DB with 1 corrupt device, and you have those logical connections in place (good practice), you just need to repoint the connection to the new physical device at the OS level, then restore the DB.
How is your replication set up? RepServer or disk mirroring? With RepServer, I do not think so: your source ASE would stop functioning, and you would have to stop replication to restore the db. For disk mirroring I don't know.
Response to updated question:
My point 2 was:
- You will need to recover the whole db unless you know exactly which objects were present on that device (and its segments) that got corrupted; if you do, you can rebuild them manually (tables, procs, views...). If you don't know, or the labour of doing it manually is too great, do a complete db recovery.
What I mentioned about the logical connections was this, for example:
You have a DB called TEST1 on the following devices (a sketch of the matching create database follows the list):
data01
data04
log02
data03
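A sketch of what creating that layout could look like (the sizes are hypothetical):

    -- TEST1 spread across three data devices, log on its own device
    create database TEST1
        on data01 = '2G',
           data03 = '2G',
           data04 = '2G'
        log on log02 = '1G'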
When you create the device with disk init, you have to give the path. But the path should not point to the device itself; it should go through a soft link. Your data01, for example:
disk init ... physname='..sybase/data01.dat' ...
On the OS, that sybase/data01.dat will be pointing to /dev/data01.dat. This way, if you need to replace a corrupted device at the OS level, you won't have to rebuild the database: you just create a new raw file, point data01.dat to it, and restore the data into the DB (LOAD). That is a faster process than drop old DB, create new, load.
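A minimal sketch of the whole pattern (the paths and size are hypothetical):

    -- On the OS, create the soft link first, e.g.:
    --     ln -s /dev/data01.dat /sybase/data01.dat
    -- Then initialize the device through the link, not the raw device:
    disk init
        name     = 'data01',
        physname = '/sybase/data01.dat',
        size     = '2G'
    -- If /dev/data01.dat later corrupts: create the new raw device, repoint
    -- the link (ln -sf), and load the database from the last good dump.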
Well, I'm pretty sure your errorlog would have some information about that corrupted device, and also that your dumps had issues.
If your dumps completed successfully, you should be able to use them for your restore (onto good devices), so I don't really get why you had to go back so many days. The thing is, a corrupted device manifests itself whenever some action performs I/O against it, so I believe your dumps would have failed.
There is no direct way of doing it in Sybase ASE (as opposed to SQL Server, which exposes DMV data). You can get close by checking whether your UPDATE INDEX STATISTICS session is accumulating CPU and Physical IO and is not being blocked. I use a query of the kind sketched below:
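(A sketch, assuming the MDA tables master..monProcess and master..monProcessActivity are enabled and the login has mon_role; the spid filter is a placeholder for the session running UPDATE INDEX STATISTICS.)

    select p.SPID,
           p.Command,
           p.BlockingSPID,       -- non-zero means the session is being blocked
           a.CPUTime,            -- accumulated CPU time
           a.PhysicalReads,      -- physical IO performed so far
           a.LogicalReads
    from   master..monProcess p,
           master..monProcessActivity a
    where  a.SPID = p.SPID
      and  a.KPID = p.KPID
      and  p.SPID = 134          -- hypothetical spid to watch

Run it a few times: rising CPUTime / PhysicalReads with BlockingSPID = 0 means the run is making progress rather than sitting blocked.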