The rule of thumb for TCP throughput over Wi-Fi is that you can get 50-60% of your signaling rate. So in your case, you should see 75-90 megabits per second of TCP throughput.
Wait, why is your TCP window size just 8 KiB? That strikes me as an absurdly low default.
Let's see what yours should be by calculating a "Bandwidth x Delay product" for your connection.
If Windows 7 is reporting that you're getting a 150 Mbps signaling rate, that's 150,000,000 bits per second, so let's use that as the bandwidth number.
As for delay, well, my average ping round trip time over Wi-Fi to my AP is a little under 3 milliseconds. But you're going from one wireless client to another, which gets relayed by the AP (to avoid the hidden node problem), so I'm guessing if you pinged one of your wireless clients from the other one, you'd get a round trip time of up to 6ms.
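If you'd rather measure that than take my guess, ping one of your wireless clients from the other and note the average round trip time Windows reports. The address here is just a placeholder for your second client's IP:

    ping -n 20 192.168.1.20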
So 150,000,000 bits/sec of bandwidth * 0.006 seconds of delay = 900,000 bits you need to be able to put "in flight" before getting an Ack back, in order to keep the pipe full.
900,000 bits / 8,192 bits per KiB (that's 1,024 bytes x 8 bits) = about 110 KiB of TCP window needed. Let's be generous and round up to a nice even 128 KiB.
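If you want to sanity-check that arithmetic at a Windows command prompt, set /a does integer math (the result here is in bytes):

    REM 150,000,000 bits/sec * 0.006 sec = 900,000 bits in flight, / 8 = bytes
    set /a 150000000 * 6 / 1000 / 8

That prints 112500, i.e. about 110 KiB.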
Try adding -w 128K to your iperf argument lists on both the client and the server to force the TCP window to something reasonable, and see if that helps.
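For example (this assumes iperf 2.x flag syntax, and 192.168.1.20 stands in for whichever machine is running the server side):

    REM On the receiving machine:
    iperf -s -w 128K

    REM On the sending machine; -t 30 runs a 30-second test and
    REM -i 5 prints interim results every 5 seconds:
    iperf -c 192.168.1.20 -w 128K -t 30 -i 5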
Since wireless is a fickle medium and there can sometimes be latency spikes due to transient noise forcing link-layer packet retransmissions, you could even try going larger than that, maybe up to 512 KiB, but at some point there will be diminishing returns.
If adding -w 128K gets you up to around 75 megabits/sec of TCP throughput in iperf, then that 8 KiB default was your bottleneck, and you'll probably want to figure out how to get Windows 7 to pick a better default TCP window size for all TCP connections.
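I haven't verified this is what's wrong on your particular box, but the usual suspect on Windows 7 is the receive window auto-tuning level; if something has knocked it down to "disabled", windows stay stuck small. From an elevated command prompt:

    REM Show the current receive window auto-tuning level:
    netsh interface tcp show global

    REM Re-enable it if it shows up as disabled:
    netsh interface tcp set global autotuninglevel=normal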
Best Answer
If you are on a perfectly clean channel, with a signal strength (RSSI) between -20 dBm and -60 dBm, and you have a well-optimized TCP/IP stack and application, and high-quality 802.11g chipsets, you should be able to see as much as 25 megabits/sec (30 if both ends support frame bursting).
Note that 1 meter away may be too close. At such close range, it's possible to have a signal strength above -20 dBm, which can be "too hot" a signal and overload the receiver. High-quality chipsets might be able to handle signals as hot as 0 dBm and still receive at maximum data rates, but I've seen plenty of lesser-quality chipsets that lost their top data rates at -20 dBm. 2-3 meters away is a better choice for top data rates.
Here in almost-2012, finding high-quality 802.11g gear is pretty hard, because 802.11g is from almost a decade ago. Anyone still making G-only chipsets now, or in the last 3-4 years, was likely doing it only to be as cheap, small, and low-power as possible (for the smartphone/tablet/netbook markets, among others), which is kind of the opposite of high quality.
The companies making high-quality 802.11 chipsets in late 2011 and early 2012 are making 3x3:3, HT40 (450 megabits/sec) 802.11n gear. And even then, they spend most of their time making sure their N rates are optimal, and less time worrying about optimizing their backwards compatibility with a/b/g.
Having a well-optimized TCP stack and an app that always keeps the TCP pipe full is good too. I recommend iperf as a simple performance tool that knows how to use TCP effectively. If you get much better performance with iperf than you did with the app you were running, then the app you were running is probably non-optimal. See what TCP window iperf reports your machines are using, and make sure it meets or exceeds the bandwidth x delay product for your network (you likely need something like 20 KiB or larger).
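For what it's worth, that 20 KiB figure falls out of the same bandwidth x delay arithmetic, assuming roughly 25 megabits/sec of throughput and a ~6 ms wireless round trip (both assumptions, not measurements):

    REM 25,000,000 bits/sec * 0.006 sec = 150,000 bits, / 8 = 18,750 bytes
    set /a 25000000 * 6 / 1000 / 8

18,750 bytes is a bit under 20 KiB, hence the recommendation.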