I hope you remember back to the days when 200MB/day was an incredible amount to have. :-)
There are some simple things and then some more complicated things.
Limit access by time of day
First of all, the AirPort Extreme base station lets you block connections by MAC address during certain times of day (in its advanced settings). This can keep machines from pulling down files, updates, etc., without your permission.
Cache data with a proxy server
Second, you can install a "caching" proxy server, such as Polipo http://www.pps.jussieu.fr/~jch/software/polipo/, which lets multiple machines fetch the same content from your iMac's cache instead of over the satellite link. You'll need to perform the same sort of proxy chaining you currently do with GlimmerBlocker. Look around for a caching proxy that offers plugins for ad blocking and so on.
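A minimal Polipo configuration along these lines might look like the sketch below. The LAN range, cache path, and the GlimmerBlocker port are assumptions, so check them against your setup and the Polipo manual:

```
# Sketch of a Polipo config file (paths and ports are assumptions)
proxyAddress = "0.0.0.0"        # listen on all interfaces, not just localhost
proxyPort = 8123                # Polipo's default port
allowedClients = 192.168.1.0/24 # your LAN
diskCacheRoot = "/var/cache/polipo/"
parentProxy = "localhost:8228"  # chain to GlimmerBlocker (verify its port)
```

Point each machine's HTTP proxy settings at the iMac's address on port 8123 so they all share the cache.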
Limit your bandwidth so you cannot exceed your daily transfer limit
Third, you could perform what is called "rate limiting," "traffic shaping," or "QoS" (Quality of Service) (see http://en.wikipedia.org/wiki/Traffic_shaping). You can do this by installing software (http://intrarts.com/throttled.html is one I have Googled but have not used) or by purchasing a home router that offers this option and putting it between your AE base station and the satellite router. Here's a list from CNET: http://reviews.cnet.com/routers/?filter=500563_5554972_
With this option, you could do some rough math, given your peak usage, to figure out how much bandwidth to allow your network. I don't think the software solution will work alongside the iMac proxy setup, since not everyone connects through the iMac (though the caching proxy will still help), so you may have to spend $50 or so on a router and configure the allowed bandwidth there.
This basically means treating your 200MB/day satellite connection as an 18kbps modem, assuming 24-hour-a-day usage. If you really only use it eight hours a day, you could allow the equivalent of a 56kbps modem. Fudge up or down based on your comfort level. It will stink, but you won't have overages, and you can always "turn it off" when you need to make a big download.
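The rough math above can be sketched as follows (assuming the decimal convention of 1 MB = 1,000,000 bytes, and with integer division rounding down):

```shell
#!/bin/sh
# Daily transfer cap spread evenly over an active window, in kbit/s.
# 200 MB/day = 200 * 8 * 1000 kbit; divide by the window length in seconds.
DAILY_MB=200
KBPS_24H=$(( DAILY_MB * 8 * 1000 / (24 * 3600) ))  # whole day
KBPS_8H=$(( DAILY_MB * 8 * 1000 / (8 * 3600) ))    # eight-hour day
echo "24h window: ${KBPS_24H} kbit/s"   # ~18 kbit/s, dial-up territory
echo "8h window:  ${KBPS_8H} kbit/s"    # roughly the 56kbps-modem figure
```

Whatever figure you settle on goes into the router's rate-limiting (QoS) settings as the allowed downstream bandwidth.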
It will still allow your entire family to download video or other media, and it will be self-correcting: downloading high-definition video will be so painful that it won't be worth doing.
Sounds like nettop is what you're looking for. Start it with the help parameter to see the options.
nettop -nc -m route
seems to be the best overall traffic gauge, but you'll need additional scripting to parse the output.
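A sketch of that scripting might look like the following. It uses nettop's CSV logging mode (`-L`), and the column number passed to the helper is an assumption; check `nettop -h` and the CSV header on your system before relying on it:

```shell
#!/bin/sh
# Sketch: take one nettop sample in CSV logging mode and total a traffic column.
sum_bytes() {
  # Sum a comma-separated numeric column; the field number is passed as $1.
  # NR > 1 skips the CSV header row.
  awk -F, -v col="$1" 'NR > 1 { total += $col } END { print total + 0 }'
}

# nettop is macOS-only, so guard the call.
# -n: no name resolution, -P: per-process rollup, -L 1: one CSV sample
if command -v nettop >/dev/null 2>&1; then
  nettop -n -P -L 1 | sum_bytes 5   # column 5 is an assumed bytes column
fi
```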
I maintain a script that chooses a random server that's known to respond to pings, and performs 100 pings (one second apart). I run this script by hand as a first indicator to determine if I'm having a connectivity or signal quality problem.
The ping command I use is
ping -c 100 [server-hostname]
I chose hostnames for my script that were known to respond to ping at the time I wrote it, and I tried to keep the list geographically diverse (for example, by using university web servers). But this sort of technique requires maintenance, because servers don't consistently allow ping (server configurations change over time), and things like hosted servers undermine the geographic diversity.
I would think that Automator might be a better fit for this sort of task than Instruments, although if you're adept with scripting (shell, Python, Perl, etc.), you could write a script to do it and use much less memory.
As for your situation, the source(s) of failure should dictate what kind of connectivity testing you do. The problem could be due to a piece of hardware within your home/office that needs to be periodically reset, or even replaced. The ping test I describe above doesn't necessarily isolate the source of the problem.
Edit: to address analysis/graphing, you could run the ping test at a regular interval (every # minutes), export the packet-loss percentages in a format such as comma-separated values, and use a spreadsheet program to graph the results.
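That interval logging could be sketched as below; the CSV path, ping count, and host are all assumptions, and you'd schedule the call from cron or launchd at whatever interval suits you:

```shell
#!/bin/sh
# Sketch: append one timestamped packet-loss sample to a CSV file.
log_sample() {
  # $1 = host to ping, $2 = CSV file to append to
  loss=$(ping -c 5 "$1" 2>/dev/null | grep -o '[0-9.]*% packet loss' | cut -d'%' -f1)
  # Record 100% loss if ping produced no summary line (host down, no network).
  echo "$(date '+%Y-%m-%d %H:%M:%S'),${loss:-100}" >> "$2"
}
```

For example, `log_sample www.example.org "$HOME/ping-loss.csv"` adds one `timestamp,loss` row; open the file in a spreadsheet and chart the second column.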