I believe you are referring to I/O operations for processing purposes, and I'll attempt to give a simplified layman's answer.
Assume the processor is a meat grinder in a factory, and that the RAM and hard disk are the conveyor belt system feeding unprocessed meat to the grinder to be ground.
Assume the conveyor belt has two parts: a slow-but-wide part and a fast-but-narrow part. The former corresponds to the hard disk's large capacity but slow speed, and the latter to RAM's small capacity but high speed.
So...
HARD DISK CONVEYOR (WIDE BUT SLOW) -> RAM CONVEYOR (NARROW BUT FAST) -> GRINDER (PROCESSOR)
When you increase your RAM, it is like widening the RAM conveyor, so the grinder can potentially receive much more in one go for processing.
If your RAM is low, then even though the RAM conveyor is fast, it is extremely narrow, so only a small volume of meat pours into the grinder. At the same time, meat can pile up on the hard disk conveyor: meat that, in a well-optimized system, should already be on the RAM conveyor is instead still sitting on the hard disk conveyor (a.k.a. the paging/swap file).
To sum it all up in a hopefully easy-to-understand sentence:
The relationship between RAM and the processor, and the reason programs run faster, is simply that with more RAM, more of the data to be processed can reach the processor quickly.
If the size of the system memory is equivalent to how wide the RAM conveyor is, then the front-side bus (FSB) is equivalent to how fast the RAM conveyor moves.
Whew! Hope this answers your question!
In older systems, the front-side bus (FSB) was synchronously tied to the northbridge and memory controller. This meant that, without the use of clock dividers (introducing complicated and expensive PLL circuitry to keep control of the different clock rates), your memory bus would operate at the FSB speed. In your case, DDR-400 was the answer, since DDR-400 memory modules have a clock rate of 200 MHz.
As history progressed, systems that still used an FSB gained a clock divider between it and the memory controller. This allowed different memory speeds to be used independently of the FSB speed (so if we set the FSB to 400 MHz and had a clock ratio of 1:2, the memory would run at 400 × 1 / 2 = 200 MHz).
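If it helps to see that arithmetic spelled out, here is a tiny Python sketch of the relationship (the function name and values are purely illustrative, not any real tool's API):

```python
# Minimal sketch: memory clock as a function of the FSB clock and a mem:FSB divider.
def memory_clock_mhz(fsb_mhz: float, ratio_mem: int, ratio_fsb: int) -> float:
    """Memory clock when a memory:FSB clock divider of ratio_mem:ratio_fsb is applied."""
    return fsb_mhz * ratio_mem / ratio_fsb

# The example from above: a 400 MHz FSB with a 1:2 divider gives a 200 MHz memory clock.
print(memory_clock_mhz(400, 1, 2))  # 200.0
```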
I assume that since this isn't a computer architecture course, and since there was only one answer, it was implied that the system did not have a clock divider. If it did (and indeed, nearly all computers since the late 1990s have), we could simply choose the ratio to make any of the memory modules listed above work with the computer.
To make DDR-333 work, for example, we need a memory clock of 166 MHz, or a clock divider of 5:6. For DDR3-667, we need a memory I/O clock (not the memory speed; DDR3 is different) of 333 MHz, or 5:3. Finally, PC100 would work with a divider of 1:2, for a memory clock of 100 MHz.
TL;DR: Without a memory clock divider, the FSB has to match the memory clock speed. With a clock divider, as long as you can create an integer ratio X:Y that matches the memory:FSB speeds, you can use that memory module (and such a ratio exists for all of the memory modules listed in your question).
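For what it's worth, here is a short Python sketch of that ratio-finding step, assuming the 200 MHz FSB implied by the DDR-400 answer; the module clocks are just the nominal values expressed as exact fractions:

```python
from fractions import Fraction

# Assumed FSB clock, implied by the DDR-400 (200 MHz memory clock) answer in the question.
FSB_MHZ = 200

# Nominal memory (or I/O) clocks for the modules discussed above, in MHz.
module_clocks = {
    "PC100":    Fraction(100),       # SDR, 100 MHz
    "DDR-333":  Fraction(500, 3),    # ~166.7 MHz memory clock
    "DDR3-667": Fraction(1000, 3),   # ~333.3 MHz I/O clock
}

for module, clock_mhz in module_clocks.items():
    ratio = clock_mhz / FSB_MHZ      # memory:FSB ratio, reduced to lowest terms
    print(f"{module}: divider {ratio.numerator}:{ratio.denominator}")

# Output: PC100 -> 1:2, DDR-333 -> 5:6, DDR3-667 -> 5:3
```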
DDR is just an acronym for Double Data Rate.
Compared to single data rate (SDR) SDRAM, the DDR SDRAM interface makes higher transfer rates possible by more strict control of the timing of the electrical data and clock signals. Implementations often have to use schemes such as phase-locked loops and self-calibration to reach the required timing accuracy.
With data being transferred 64 bits at a time, DDR SDRAM gives a transfer rate of (memory bus clock rate) × 2 (for double data rate) × 64 (number of bits transferred) / 8 (number of bits per byte). Thus, with a bus frequency of 100 MHz, DDR SDRAM gives a maximum transfer rate of 1600 MB/s.
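As a quick sanity check of that formula, here is a one-function Python sketch (the function name and defaults are illustrative):

```python
# Peak DDR SDRAM bandwidth: bus clock (MHz) x transfers per clock x bus width (bits) / 8 bits per byte.
def ddr_peak_bandwidth_mb_s(bus_clock_mhz: float,
                            bus_width_bits: int = 64,
                            transfers_per_clock: int = 2) -> float:
    return bus_clock_mhz * transfers_per_clock * bus_width_bits / 8

# The example above: a 100 MHz DDR bus peaks at 1600 MB/s (i.e. DDR-200 / PC-1600).
print(ddr_peak_bandwidth_mb_s(100))  # 1600.0
```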