It looks as though you have a problem with your storage subsystem (somewhere from the drivers down to the actual disks, but it could be anywhere in that stack).
The good news:
- Most of the corruption is in non-clustered indexes. This means that if the underlying tables are clean, the indexes can be rebuilt to fix the corruption. Affected object IDs: 1345738380, 1761739862 (this one has clustered index corruption as well), and 2056692908 (see the sketch after this list for mapping these IDs to table and index names).
- It doesn't seem like any PFS/GAM/SGAM/etc system pages were damaged.
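A minimal sketch for resolving those object IDs to names on the affected database (or a restored copy of it); the IDs below are just the ones CHECKDB reported above:

```sql
-- Resolve the object IDs reported by CHECKDB to schema, table and index names
SELECT o.object_id,
       OBJECT_SCHEMA_NAME(o.object_id) AS schema_name,
       o.name      AS table_name,
       i.name      AS index_name,
       i.type_desc AS index_type
FROM sys.objects AS o
JOIN sys.indexes AS i
    ON i.object_id = o.object_id
WHERE o.object_id IN (1345738380, 1761739862, 2056692908);
```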
The bad news:
- Some pages in a clustered index are corrupt. This means there could be (and most likely will be) data loss without a recent backup. Object ID 1761739862
- You don't have a recent backup.
- You already ran CHECKDB with REPAIR_ALLOW_DATA_LOSS, so there is no way to know what you have already lost or what repairs have already been made, which puts you an extra step back.
- It looks like there is an issue with the disk subsystem.
Where to go from here:
I'd start by taking a backup WITH CONTINUE_AFTER_ERROR to make sure you have something of a record. Then restore that backup to an instance on the same patch level as the one it was taken from, so that all testing can be done against a copy of the database and not the database itself.
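A sketch of that first step, with hypothetical database and file names (find the actual logical file names with RESTORE FILELISTONLY before restoring):

```sql
-- Take a backup of the damaged database even though some pages are corrupt
BACKUP DATABASE YourDb
TO DISK = N'D:\Backups\YourDb_corrupt.bak'
WITH CONTINUE_AFTER_ERROR, COPY_ONLY, INIT;

-- Restore it under a new name on a test instance at the same patch level
RESTORE DATABASE YourDb_Copy
FROM DISK = N'D:\Backups\YourDb_corrupt.bak'
WITH MOVE N'YourDb'     TO N'E:\Data\YourDb_Copy.mdf',
     MOVE N'YourDb_log' TO N'E:\Logs\YourDb_Copy.ldf';
```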
Use the copy and the older backup of the database you have on a second instance to see what data is lost and what may be manually salvageable. This is time-consuming, but may be needed. If you're really stuck, call in a consultant to help you with this: corruption is not a great thing to cut your teeth on, and a consultant gives you some semblance of liability cover.
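One way to do that comparison, assuming the corrupt copy and the older backup are restored side by side on the test instance as YourDb_Copy and YourDb_Old (all names and columns hypothetical); EXCEPT requires the column lists to match:

```sql
-- Rows that exist in the older backup but are missing from the corrupt copy
SELECT KeyColumn, SomeColumn
FROM YourDb_Old.dbo.SomeTable
EXCEPT
SELECT KeyColumn, SomeColumn
FROM YourDb_Copy.dbo.SomeTable;
```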
Check the objects associated with the corruption. If they are tables that don't hold any critically important data (say, dictionary tables that can be rebuilt or that don't change much), you might be able to get away with manually fixing the table by using the older backup to script out the data.
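For a rarely-changing dictionary table, the manual fix can be as simple as repopulating it from the older restored copy; a sketch with hypothetical table and column names:

```sql
-- Repopulate a rebuilt lookup table from the older restored copy
-- (assumes the table is not referenced by foreign keys; otherwise use DELETE)
TRUNCATE TABLE YourDb.dbo.LookupTable;

INSERT INTO YourDb.dbo.LookupTable (Code, Description)
SELECT Code, Description
FROM YourDb_Old.dbo.LookupTable;
```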
Check your storage subsystem: make sure you have the latest drivers, firmware, etc. Check for any failed drives in the array/SAN/NAS. Double-check Ethernet cables, fibre cables, switches, and so on. Find the root cause of the corruption or this may happen again. Run a health check on your storage subsystem and motherboard to make sure nothing is faulty in the hardware or controllers.
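On the SQL Server side, the suspect_pages table in msdb records pages the engine has flagged as bad, which can help correlate the corruption with disk errors in the system event logs:

```sql
-- Pages SQL Server has flagged as suspect (checksum errors, torn pages, etc.)
SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages
ORDER BY last_update_date DESC;
```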
Lastly, update your resume.
> I would like to know if there is a way to fix this
These consistency errors may be fixable with the REPAIR_REBUILD option of DBCC CHECKDB:

> Performs repairs that have no possibility of data loss. This can include quick repairs, such as repairing missing rows in non-clustered indexes, and more time-consuming repairs, such as rebuilding an index.
As Shanky's answer mentions, any DBCC repair should also be performed inside a transaction, so you can inspect the changes before committing to them.
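A sketch of what that might look like (the database name is a placeholder); repair options require single-user mode, and the ROLLBACK below is deliberate so nothing is kept until you have reviewed the result:

```sql
ALTER DATABASE YourDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

BEGIN TRANSACTION;
    DBCC CHECKDB (N'YourDb', REPAIR_REBUILD);
    -- Review the CHECKDB output and the affected tables here,
    -- then change ROLLBACK to COMMIT once you are happy with the changes.
ROLLBACK TRANSACTION;

ALTER DATABASE YourDb SET MULTI_USER;
```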
As always, please ensure you have a completely recoverable set of backups (including the log tail if applicable) before running the rebuild. If you have a complete set of valid backups (including the log tail as applicable) and you can afford the downtime, restoring might be the preferred option. Be sure not to overwrite the current database if you do this, just in case the restore fails, or it is not as complete as you expected. Of course, it's quite likely the restored database would contain the corruption again, depending on how and when it occurred :)
> or a way to get more detailed information about these errors
Details of the four consistency errors are in the DBCC CHECKDB output, before the summary section at the end. You should review these to ensure you understand the problem, and what may have caused it, before attempting any repair.
You can reduce the amount of DBCC CHECKDB output using the WITH NO_INFOMSGS option.
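For example (database name is a placeholder):

```sql
-- Suppress informational messages but still return every error
DBCC CHECKDB (N'YourDb') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```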
Add the DBCC error message details to your question if you need help analyzing the errors. It is important to identify and correct any underlying hardware problem that might have caused the corruption.
Depending on the details of the corruption, there may be other ways to fix the problems (such as manually rebuilding a nonclustered index).
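For a corrupt non-clustered index on an otherwise clean table, one common approach is to disable the index and then rebuild it, so the rebuild reads only the base table rather than the damaged index pages; the index and table names below are hypothetical:

```sql
-- Disable the damaged non-clustered index, then rebuild it from the base table
ALTER INDEX IX_SomeIndex ON dbo.SomeTable DISABLE;
ALTER INDEX IX_SomeIndex ON dbo.SomeTable REBUILD;
```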
If the repair or rebuild is successful, you will need to check the database again with DBCC CHECKDB, using the fullest set of checks supported by your version of SQL Server.
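On reasonably recent versions that usually means something like the following (database name is a placeholder):

```sql
-- Full logical and column-value checks after the repair
DBCC CHECKDB (N'YourDb')
WITH NO_INFOMSGS, ALL_ERRORMSGS, EXTENDED_LOGICAL_CHECKS, DATA_PURITY;
```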
Best Answer
The corruption is unrepairable (a linkage issue in a system table), so you would need to restore from a known good backup to a point in time before you did the disk expansion (provided the database is in the full recovery model and you were taking log backups).
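A point-in-time restore sketch, assuming a full backup plus log backups taken before the disk expansion; every name, path, and timestamp is a placeholder:

```sql
RESTORE DATABASE YourDb
FROM DISK = N'D:\Backups\YourDb_full.bak'
WITH NORECOVERY, REPLACE;

RESTORE LOG YourDb
FROM DISK = N'D:\Backups\YourDb_log_01.trn'
WITH NORECOVERY, STOPAT = N'2019-06-01T08:00:00';

RESTORE DATABASE YourDb WITH RECOVERY;
```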
You can try REPAIR_ALLOW_DATA_LOSS as a last resort.
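If it comes to that, a sketch (database name is a placeholder); be aware this option repairs by deallocating whatever it cannot fix, so data will likely be lost:

```sql
ALTER DATABASE YourDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

DBCC CHECKDB (N'YourDb', REPAIR_ALLOW_DATA_LOSS);

ALTER DATABASE YourDb SET MULTI_USER;
```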