The operating system returned error 23(Data error (cyclic redundancy check).) to SQL Server during a read
This indicates the operating system detected an issue reading data from the hard disk itself. When data is written to disk, a CRC code is calculated and stored for each block; when that data is subsequently read back, the CRC operation is performed again on the read data and compared with the stored CRC. If these two CRC codes don't match, the OS reports error 23. I'd be very surprised if there isn't a hardware issue with either the drive itself, or perhaps some other component such as the motherboard or drive controller.
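If you want to see what SQL Server itself has recorded about the damage before swapping hardware, a minimal sketch (assuming you can query msdb; YourDatabase is a placeholder for the affected database):

-- Pages SQL Server has already flagged as damaged (checksum, torn-page, etc.).
SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages;

-- Full consistency check on the affected database.
DBCC CHECKDB ('YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;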
Using WITH (ONLINE = ON) in the Enterprise Edition of SQL Server will have no effect whatsoever on this issue. Online and offline index operations are no different at the operating system layer: either way, data is read and written as necessary to rebuild or re-create the index.
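For reference, this is what such a rebuild looks like; the index and table names here are hypothetical, and ONLINE = ON still requires Enterprise Edition:

-- Rebuilds the index while keeping it available for reads and writes.
-- It does nothing to protect against OS-level read errors like error 23.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
REBUILD WITH (ONLINE = ON);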
As you perform inserts, updates, and deletes, your indexes will become fragmented both internally and externally.
Internal fragmentation is when your index pages carry a high percentage of free space, meaning that SQL Server needs to read more pages when scanning the index.
External fragmentation is when the pages of the index are no longer in logical order, so SQL Server has to do more work, especially in IO terms, to read the index.
If your indexes become too fragmented then, at best, your queries will be less efficient, but at worst SQL Server will stop using the indexes altogether, meaning virtually all queries would have to perform a table scan or clustered index scan. This will hurt your performance a lot!
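You can measure both kinds of fragmentation with sys.dm_db_index_physical_stats; here is a sketch against a hypothetical dbo.Orders table (SAMPLED mode is needed to populate the page-density column):

SELECT i.name                              AS index_name,
       ips.avg_fragmentation_in_percent,    -- external (logical) fragmentation
       ips.avg_page_space_used_in_percent,  -- page fullness; low values = internal fragmentation
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Orders'), NULL, NULL, 'SAMPLED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id;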
When you reorganise an index, SQL Server uses the existing index pages and just shuffles data around on those pages. This alleviates internal fragmentation and can also remove a small amount of external fragmentation. It is a lighter-weight operation than a rebuild and is always online.
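A reorganise is a single, always-online statement (names hypothetical again):

-- Compacts pages in place and re-orders leaf pages; can be interrupted safely.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REORGANIZE;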
When you rebuild an index, SQL Server actually re-sorts the data of the index and uses a new set of index pages. This obviously alleviates both internal and external fragmentation, but it is a heavier-weight operation and by default takes the index offline, although it can be performed as an online operation, depending on your SQL Server version and settings.
Please do not expect to have 0% fragmentation after a rebuild, however. Unless you use the MAXDOP option, SQL Server will parallelise the rebuild operation, and the more processors involved, the more fragmentation there is likely to be, because each processor or core rebuilds its own section of the index individually, without regard for the others. This is a trade-off between the best fragmentation level and the time taken to rebuild the index. For near-zero fragmentation, use MAXDOP 1 and sort the results in TempDB.
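Expressed as a statement (hypothetical names once more):

-- MAXDOP = 1 avoids the per-core fragments described above;
-- SORT_IN_TEMPDB = ON moves the intermediate sort into TempDB.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
REBUILD WITH (MAXDOP = 1, SORT_IN_TEMPDB = ON);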
Best Answer
Also "reducing fragmentation" is not per se a performance goal. On many (most?) modern storage platforms there is little difference between sequential and random IO, which is a major historical reason for defragmenting.
I've worked on systems where the difference in throughput between sequential and random IO was 10x or more. As SQL Server attempts to scan a fragmented index, the physical file locations of the linked-list of pages jumps around every few extents, reducing the IO size, randomizing the IO, preventing read-ahead IO, and eliminating the benefit of speculative reads into the SAN controller cache.
On modern systems, the storage either has lots of spindles, or is flash-based. Both of these reduce the performance differential between small, random IOs and large, sequential IOs.
Also if tables are heavily cached, the benefit of defragmenting their storage on disk diminishes. And small rowstore tables will typically be cached.
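If you want to check how much of a table is actually sitting in the buffer pool, a sketch against the buffer DMVs (run it in the target database; table names come from your own schema):

-- Count buffer-pool pages per object in the current database.
-- Types 1 and 3 cover in-row and row-overflow data, where container_id maps to hobt_id.
SELECT OBJECT_NAME(p.object_id) AS table_name,
       COUNT(*)                 AS cached_pages,
       COUNT(*) * 8 / 1024      AS cached_mb
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au
  ON au.allocation_unit_id = bd.allocation_unit_id
 AND au.type IN (1, 3)
JOIN sys.partitions AS p
  ON p.hobt_id = au.container_id
WHERE bd.database_id = DB_ID()
GROUP BY p.object_id
ORDER BY cached_pages DESC;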