Josh,
This is a very common task for every DBA, and the right answer is NOT the same for everyone or for every server. As with a lot of other things, it depends on what you need.
Most definitely you don't want to run "Shrink Database" as already suggested. It's EVIL to performance, and the reference below will show you why: it causes file-level as well as index fragmentation, which can lead to performance issues. You are better off pre-allocating a big size for the data and log files so that autogrowth will NOT kick in.
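For example, a sketch of pre-sizing the files (database and logical file names here are placeholders; the sizes should come from your own growth projections):

```sql
-- Pre-size the data and log files so autogrowth rarely fires
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_Data, SIZE = 50GB);
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_Log,  SIZE = 10GB);
```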
I didn't understand your #2, "selected tables full backup". Can you elaborate on that?
Coming to index reorganizes, update statistics and index rebuilds, you need to be careful about how you do this; otherwise you will end up using more resources and also end up with performance issues.
When you rebuild indexes, the statistics on those indexes are updated with a full scan. But if you run update statistics after that, they will be updated again with a default sample (which depends on several factors; usually about 5% of the table when the table is larger than 8 MB), and that may lead to performance issues. Depending on the edition you have, you may be able to do online index rebuilds. The right way of doing this activity is to check the amount of fragmentation and, depending on that, either do an index rebuild or an index reorganize + update statistics. You may also want to identify which tables need their stats updated more frequently and update those stats more often.
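As a sketch of the fragmentation check, you can query the *sys.dm_db_index_physical_stats* DMV; the 5% cutoff below is just the commonly cited starting point (reorganize roughly between 5% and 30%, rebuild above that), so tune it for your own workload:

```sql
-- Check fragmentation for the current database (LIMITED mode is the cheapest scan)
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 5
ORDER BY ips.avg_fragmentation_in_percent DESC;
```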
Maintenance Plans are OK, but it's hard to get the best out of them for these customizations unless you can open them in SSIS and tweak the MPs. That's why I prefer NOT to use them; instead I use Ola Hallengren's free scripts, which are more robust than MPs. Also, I would recommend catching up on the referenced article by Paul Randal on this topic.
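As a sketch of how Ola's *IndexOptimize* procedure handles the reorganize-vs-rebuild decision (the threshold values shown are just its commonly used defaults; check his documentation for the version you install):

```sql
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLow    = NULL,
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh   = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 5,
    @FragmentationLevel2 = 30,
    @UpdateStatistics    = 'ALL',
    @OnlyModifiedStatistics = 'Y';
```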
Ref: http://technet.microsoft.com/en-us/magazine/2008.08.database.aspx
This is NOT a comprehensive answer to your question but a good starting point. HTH and let us know if you have any additional questions/comments.
I don't know what resources you're getting this from; given some of what you're saying, it's not just that pgAdmin page. The information you're relying on is either outdated or incomplete, and pretty much all of this is unnecessary.
Make sure that autovacuum is keeping up with the database workload and you're pretty much done. These days you should not generally need to run a manual vacuum or analyze, though it's handy after bulk loads or deletes. Manually reindexing is certainly not required as a routine operation.
See autovacuum in the docs.
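For reference, the knobs that control this live in postgresql.conf; the values shown are the usual defaults, for illustration rather than recommendations:

```
autovacuum = on                        # on by default in modern releases
autovacuum_naptime = 1min              # how often new workers are launched
autovacuum_vacuum_scale_factor = 0.2   # fraction of a table changed before a vacuum
autovacuum_analyze_scale_factor = 0.1  # fraction changed before an analyze
```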
Best Answer
In current versions of PostgreSQL, you can look at the *pg_stat_activity* view to find autovacuum tasks. They will have *current_query* fields that start with "autovacuum:".
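For example, an autovacuum worker's entry typically looks something like this (the table name here is a placeholder):

```
autovacuum: VACUUM ANALYZE public.some_table
```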
A query to count how many of those you have might look like this:
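A minimal sketch, assuming the pre-9.2 column name *current_query* (from PostgreSQL 9.2 onward the column is simply called *query*):

```sql
SELECT count(*)
FROM pg_stat_activity
WHERE current_query LIKE 'autovacuum:%';
```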
And you can have your Python code check whether the value returned is > 0; if so, autovacuum is running. You might also want to check for this so you can avoid running manual VACUUM calls concurrently with autovacuum.
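A minimal sketch of that check in Python; the helper names are mine, and fetching the rows is left to whatever database driver you use (e.g. a cursor over *pg_stat_activity*):

```python
def count_autovacuum_workers(query_texts):
    """Given the query text of each pg_stat_activity row, count autovacuum workers.

    Autovacuum entries start with the prefix "autovacuum:".
    """
    return sum(
        1
        for q in query_texts
        if q is not None and q.startswith("autovacuum:")
    )


def autovacuum_running(query_texts):
    """True if at least one autovacuum worker shows up in pg_stat_activity."""
    return count_autovacuum_workers(query_texts) > 0


# Example with made-up rows:
rows = ["SELECT 1", "autovacuum: VACUUM ANALYZE public.some_table", None]
print(autovacuum_running(rows))  # -> True
```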