Bytes per track depends totally on how the manufacturer laid out the disk internally, which you will not know. All modern disks use LBA (logical block addressing), in which the OS addresses the drive on a sector-by-sector basis, not knowing or caring how or where the sectors are physically located on the platters (nor how many platters there actually are).
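To make the LBA model concrete, here is a minimal sketch of the interface the OS actually sees -- you ask for sector N and get its bytes back, with no notion of heads, tracks, or platters. It assumes Linux, a hypothetical /dev/sda device node, 512-byte logical sectors (some drives report 4096), and root privileges:

```python
import os

SECTOR = 512              # logical sector size; many drives report 512
DEV = "/dev/sda"          # hypothetical device node; reading it needs root

def read_sector(lba):
    """Read one logical sector by LBA -- the only address the OS uses."""
    with open(DEV, "rb") as disk:
        disk.seek(lba * SECTOR)   # byte offset = LBA x sector size
        return disk.read(SECTOR)

mbr = read_sector(0)              # sector 0 holds the MBR on legacy disks
print(mbr[510:512].hex())         # '55aa' boot signature, if one is present
```

Where that LBA physically lands on the platters is entirely up to the drive's firmware.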
Not only that, but the number of sectors per track depends on how far the track is from the spindle; unlike a CD, a hard disk's tracks are concentric circles rather than one continuous spiral. The further a track is from the spindle, the more sectors it holds (and thus the higher the transfer rate) -- a layout known as zoned bit recording.
The typical diagram of disk geometry is partially incorrect on this point: instead of the sectors getting bigger as you go outwards from the spindle, the sectors stay the same size and there are simply more of them per track the further out you go (which is what makes your bytes per track, sectors per track, etc. go up).
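A back-of-the-envelope sketch of why outer tracks hold more: if the recorded bit density along a track is roughly constant, sectors per track scales with circumference. The numbers here are entirely made up for illustration; real drives quantize tracks into a handful of zones rather than varying continuously:

```python
import math

BITS_PER_INCH = 1_000_000     # hypothetical linear recording density
SECTOR_BITS = 4096 * 8        # 4 KB physical sector, ignoring ECC and gaps

def sectors_per_track(radius_inches):
    # A longer (outer) track holds proportionally more fixed-size sectors.
    circumference = 2 * math.pi * radius_inches
    return int(circumference * BITS_PER_INCH // SECTOR_BITS)

print(sectors_per_track(0.75))   # inner track: ~143 sectors
print(sectors_per_track(1.75))   # outer track: ~335 sectors, ~2.3x more
```

At a fixed rotation speed, the outer track passes roughly 2.3x as many sectors under the head per revolution, which is exactly the transfer-rate difference described above.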
Since the heads sit over a given track for only one revolution at a time, and you don't know where on the disk you are, you cannot know whether the next track will have more or fewer sectors, so your transfer rate will fluctuate.
That said, it will only fluctuate if you are reading directly from the platters and not out of cache; modern drives have advanced caching algorithms that prefetch content they predict you'll ask for next. As a result, if you were measuring transfer rate, you'd have no idea whether the data was coming off the platters or out of the cache, making such measurements unrepeatable and totally useless.
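If you wanted to attempt the measurement anyway, a sketch like this (assuming Linux, root, and a hypothetical /dev/sda; the sizes and offsets are arbitrary) uses O_DIRECT to at least bypass the OS page cache -- though the drive's own onboard cache and prefetcher are still in play, which is exactly the problem described above:

```python
import mmap
import os
import time

DEV = "/dev/sda"             # hypothetical device node; needs root
CHUNK = 1024 * 1024          # read in 1 MiB chunks
SAMPLE = 256 * 1024 * 1024   # sample 256 MiB at each position

def throughput_at(offset):
    """Sequential-read MB/s starting at `offset` (must be block-aligned)."""
    fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)  # bypass the OS page cache
    try:
        buf = mmap.mmap(-1, CHUNK)   # page-aligned buffer, as O_DIRECT requires
        os.lseek(fd, offset, os.SEEK_SET)
        start = time.perf_counter()
        done = 0
        while done < SAMPLE:
            n = os.readv(fd, [buf])
            if n <= 0:
                break
            done += n
        return done / (time.perf_counter() - start) / 1e6

    finally:
        os.close(fd)

# Low LBAs usually map to the outer (faster) edge of the platters.
print(f"start of disk: {throughput_at(0):.0f} MB/s")
print(f"~900 GiB in:   {throughput_at(900 * 2**30):.0f} MB/s")  # near the end of a 1 TB drive (hypothetical)
```

Run it twice and you may well get different numbers anyway, for all the caching reasons above.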
In other words, you don't. Period.
Best Answer
You cannot know this, because you don't know where on the disk any given file physically resides; if the sectors containing the file (contiguous or not) are closer to the spindle, the transfer rate will be much lower than if the file occupies sectors closer to the outer edge of the platter.
In addition, cluster size is irrelevant in this context; a cluster is the minimum unit of space allocated to a file, and it is a property of the filesystem, not the disk.
UPDATED to answer questions in comments:
Rotational delay is almost always present; the disk has to turn far enough that the first sector of the cluster is under the read head before the drive can start reading it.

If a file's size is 1.5 times the cluster size, the file will occupy 2 clusters, and the unused 0.5 cluster is wasted (referred to as "slack space"). Decreasing the cluster size reduces slack space, at the expense of more I/O overhead, because the filesystem must keep track of a greater number of clusters. A cluster is the smallest unit of space the filesystem can allocate ("tail packing" notwithstanding).
More info on tail packing: en.wikipedia.org/wiki/Block_suballocation
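The slack-space arithmetic above is just a ceiling division; here's a quick sketch of the 1.5x example, with a hypothetical 4 KB cluster size:

```python
import math

def clusters_needed(file_size, cluster_size):
    return math.ceil(file_size / cluster_size)

def slack(file_size, cluster_size):
    return clusters_needed(file_size, cluster_size) * cluster_size - file_size

# A 6 KB file on a 4 KB-cluster filesystem: 1.5 clusters of data
print(clusters_needed(6144, 4096))  # 2 clusters allocated
print(slack(6144, 4096))            # 2048 bytes (0.5 cluster) of slack
```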
Most people store a mix of file sizes on a filesystem, which is why a middle-of-the-road cluster size (such as 8 KB or 16 KB) is usually the default; a given filesystem's default cluster size also depends on the size of the volume being created.
If you know you'll be storing loads of tiny files, set the filesystem up with a small cluster size to minimize slack space (as long as the associated performance hit isn't an issue for your application). If you know you'll be storing loads of huge files (such as videos), set it up with a large cluster size to reduce the I/O overhead of bookkeeping all the cluster-allocation info (such as the MFT in NTFS).
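One way to choose is to model the slack a candidate cluster size would cost over a representative set of files. A rough sketch (the path is hypothetical, and this counts only slack -- it ignores tail packing and the per-cluster bookkeeping cost that favors large clusters for huge files):

```python
import os

def total_slack(root, cluster_size):
    """Slack space if every file under `root` used this cluster size."""
    wasted = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            size = os.path.getsize(os.path.join(dirpath, name))
            remainder = size % cluster_size
            if remainder:
                wasted += cluster_size - remainder
    return wasted

for cs in (4096, 8192, 16384, 65536):
    mb = total_slack("/srv/data", cs) / 1e6   # hypothetical directory
    print(f"{cs // 1024:>2} KB clusters: {mb:.1f} MB of slack")
```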