I wouldn't add files one at a time; I would go up by at least 2 or 4 each time. Few operations work better at 5 cores than at 4, and likewise I don't think you'll see much improvement going from 4 files to 5 when faced with contention. I would probably go up in groups of 4 (except when you have, say, a 6-core processor and want to test limiting SQL Server to exactly 6 cores). But if you're at 16 files and still have contention, I would have to wonder whether this is really getting anywhere near solving the problem - especially because your best bang for the buck is going to come from placing each of those files on its own set of spindles, and it's highly unlikely you have that many completely independent disks.
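As a minimal sketch of adding a group of 4 (the file names, paths, and sizes here are hypothetical - match SIZE and FILEGROWTH to your existing files so the proportional-fill algorithm stays balanced):

```sql
-- Hypothetical example: add 4 equally-sized tempdb data files in one step.
-- Keep SIZE/FILEGROWTH identical to the existing files for even proportional fill.
ALTER DATABASE tempdb ADD FILE (NAME = tempdev5, FILENAME = 'T:\tempdb\tempdev5.ndf', SIZE = 4096MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev6, FILENAME = 'T:\tempdb\tempdev6.ndf', SIZE = 4096MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev7, FILENAME = 'T:\tempdb\tempdev7.ndf', SIZE = 4096MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev8, FILENAME = 'T:\tempdb\tempdev8.ndf', SIZE = 4096MB, FILEGROWTH = 512MB);
```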
As for the MAXDOP effect, I don't think you should think about MAXDOP that way. Unless you use processor affinity to prevent SQL Server from even seeing the other 6 cores, a MAXDOP of 6 does not mean that all queries will use the same 6 cores. So in theory, one query could use CPUs 1-6 and another could use CPUs 7-12, and if contention could be relieved by letting each CPU access its own file (and it actually worked out perfectly), then certain concurrent operations could still benefit from 12 files.
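For completeness, capping parallelism server-wide looks like this (the value 6 is just an example) - but, as above, this only limits how many schedulers a single parallel query can use; it does not pin all queries to the same 6 cores:

```sql
-- Example only: cap the degree of parallelism server-wide at 6.
-- This does NOT bind queries to a fixed set of 6 cores.
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max degree of parallelism', 6;
RECONFIGURE;
```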
Another very easy way to solve tempdb contention problems is to throw a couple of SSD drives in the box: format one to 80% capacity, fill it with a single pre-sized tempdb data file, and keep the other drive as a standby (SSD lifespans vary, but leaving a chunk of the drive unallocated can extend that life quite a bit). This is even supported in a cluster as of SQL Server 2012. It takes a lot of the thinking and effort out of trying to squeeze performance out of tempdb, since slow storage is always going to be a bottleneck no matter how many files you configure.
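A sketch of that pre-sizing, assuming a hypothetical 256 GB SSD mounted as S: (80% is roughly 200 GB):

```sql
-- Hypothetical: move and pre-size the primary tempdb data file to ~80% of a 256 GB SSD.
-- FILEGROWTH = 0 disables autogrow; the change takes effect at the next restart.
ALTER DATABASE tempdb MODIFY FILE
    (NAME = tempdev, FILENAME = 'S:\tempdb\tempdev.mdf', SIZE = 200GB, FILEGROWTH = 0);
```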
Some background reading that might be useful, especially if you're stuck with old-fashioned spinny disks for tempdb:
Is it possible that this frequency of spills could be a primary culprit in our high tempdb write latency?
Yes, it is possible, though typically it is the average size of the spills and how deep they go (e.g. recursive hash spills, multi-pass sorts) that matters more than the frequency per se.
SQL Server provides a wide range of metrics and DMV information to help you troubleshoot the various contributing factors to tempdb pressure, many of which are discussed in the Microsoft Technical Article, "Working with tempdb in SQL Server 2005" (applies to all versions 2005 onward).
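As one starting point (this particular query is a sketch of mine, not lifted from that article), sys.dm_db_file_space_usage breaks tempdb consumption down by category, which tells you whether user objects, internal objects (sort/hash spills, worktables), or the version store is driving the pressure:

```sql
-- Run against tempdb: space held by each consumer category, in MB (pages are 8 KB).
SELECT SUM(user_object_reserved_page_count)     * 8 / 1024.0 AS user_objects_mb,
       SUM(internal_object_reserved_page_count) * 8 / 1024.0 AS internal_objects_mb,
       SUM(version_store_reserved_page_count)   * 8 / 1024.0 AS version_store_mb,
       SUM(unallocated_extent_page_count)       * 8 / 1024.0 AS free_mb
FROM tempdb.sys.dm_db_file_space_usage;
```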
You should be able to use the guidance and diagnostic queries contained in that document to start identifying the primary causes of any tempdb pressure. Do not disregard, for example, version store activity simply because ALLOW_SNAPSHOT_ISOLATION is not enabled; many features besides snapshot isolation (e.g. triggers, MARS, RCSI) use the version store.
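To trace that pressure back to the sessions responsible, a sketch using a DMV available from 2005 onward:

```sql
-- Top tempdb consumers by session; allocation counts are pages, converted to MB.
SELECT session_id,
       user_objects_alloc_page_count     * 8 / 1024.0 AS user_objects_mb,
       internal_objects_alloc_page_count * 8 / 1024.0 AS internal_objects_mb
FROM tempdb.sys.dm_db_session_space_usage
ORDER BY internal_objects_alloc_page_count DESC;
```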
If sort and hash spills do turn out to be significant at a high level, you will probably need to set up some specific monitoring for this. Depending on your SQL Server version, this is not always as straightforward as one might hope: connecting sort and hash spills to the particular query that caused them requires Event Notifications or Extended Events. The SolidQ article, "Identifying and Solving Sort Warnings", contains details and some good general advice about resolving common causes.
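On SQL Server 2012 or later, a minimal Extended Events sketch (the session name and file target are my inventions) that ties spill warnings back to the offending statement might look like:

```sql
-- Capture sort and hash spill warnings along with the statement text that caused them.
CREATE EVENT SESSION [spill_warnings] ON SERVER
ADD EVENT sqlserver.sort_warning
    (ACTION (sqlserver.sql_text, sqlserver.session_id, sqlserver.database_id)),
ADD EVENT sqlserver.hash_warning
    (ACTION (sqlserver.sql_text, sqlserver.session_id, sqlserver.database_id))
ADD TARGET package0.event_file (SET filename = N'spill_warnings.xel');

ALTER EVENT SESSION [spill_warnings] ON SERVER STATE = START;
```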
You should also work with your storage team to determine how much of the high latency is attributable to your workload, how much comes from other shared uses, and what options there are for reconfiguration. Your analysis of SQL Server's metrics will help inform this discussion, as will any metrics the SAN people are able to provide.
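A sketch of the per-file latency numbers you can bring to that conversation, from sys.dm_io_virtual_file_stats (the counters are cumulative since the last restart):

```sql
-- Average read/write latency in ms per tempdb file since the last restart.
SELECT mf.name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(DB_ID('tempdb'), NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id;
```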
Best Answer
A ratio of TempDB data files to machine cores of 1/4 to 1/2 has long been the recommendation...
That last sentence has always been the relevant part: if you're not seeing contention, why add additional files? To play it safe, most will add 2-4 files as a starting point for the majority of builds, but beyond that, measure and react.
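One quick way to do that measuring is to check whether sessions are actually waiting on tempdb allocation pages right now - a sketch:

```sql
-- Live check: sessions waiting on latches for tempdb pages.
-- resource_description has the form 'dbid:fileid:pageid'; dbid 2 = tempdb.
-- Waits on page 1 (PFS), 2 (GAM), or 3 (SGAM) of a file indicate allocation contention.
SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH%'
  AND resource_description LIKE '2:%';
```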