I think it's smarter to build something that monitors the DMVs and tracks their data over time, say by snapshotting the index usage and missing index DMVs every night (it may make sense to include procedure stats as well), than to run some tool for a short period whenever you think of it. Over a longer period you get a much better picture: not just what the stats are right now, but how they've changed over time.
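As a minimal sketch of that idea (the snapshot table name here is hypothetical), a nightly Agent job step could simply append the current contents of sys.dm_db_index_usage_stats to a history table:

```sql
-- Hypothetical history table; create once.
-- Note: the DMV's counters are cumulative since the last instance restart.
CREATE TABLE dbo.IndexUsageSnapshot (
    snapshot_time  datetime2 NOT NULL DEFAULT SYSDATETIME(),
    database_id    int       NOT NULL,
    object_id      int       NOT NULL,
    index_id       int       NOT NULL,
    user_seeks     bigint    NOT NULL,
    user_scans     bigint    NOT NULL,
    user_lookups   bigint    NOT NULL,
    user_updates   bigint    NOT NULL,
    last_user_seek datetime  NULL,
    last_user_scan datetime  NULL
);

-- Nightly job step: append the current DMV contents for this database.
INSERT INTO dbo.IndexUsageSnapshot
    (database_id, object_id, index_id,
     user_seeks, user_scans, user_lookups, user_updates,
     last_user_seek, last_user_scan)
SELECT database_id, object_id, index_id,
       user_seeks, user_scans, user_lookups, user_updates,
       last_user_seek, last_user_scan
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID();
```

The same pattern works for sys.dm_db_missing_index_details and sys.dm_exec_procedure_stats; diffing consecutive snapshots gives you the per-day activity.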
Be careful not to drop indexes that look unused but are actually hooked to some report that only runs once or twice per business cycle; such an index can be more important than its usage frequency suggests.
For a 1TB DB ... Is weekly rebuild overkill?
I find it hard to believe that your OLTP system, over the course of a single week, reshuffles 1TB of data so badly as to require a rebuild. Very few use cases would ever call for something so dramatic. Unless you have an explanation for why you need such high frequency, I'd say no: one week is far too often.
What are the rules of thumb for re-indexing frequency, % of database affected, # of indexes that should be rebuilt?
Yes, the same rule of thumb that applies to almost every performance-related action: measure impact. You establish a performance baseline, you measure the deviation from it over time, and you measure the impact of each reindex. Without basic measurements it's always going to be a shot in the dark.
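For the fragmentation side of that baseline, the standard measurement is sys.dm_db_index_physical_stats. A sketch (the 30% and 1000-page thresholds below are common guidance, not hard rules):

```sql
-- Measure fragmentation per index before deciding what to rebuild.
-- 'LIMITED' mode keeps the scan cheap on a large database.
SELECT OBJECT_NAME(ips.object_id)        AS table_name,
       i.name                            AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30   -- common rebuild threshold
  AND ips.page_count > 1000                   -- ignore tiny indexes
ORDER BY ips.avg_fragmentation_in_percent DESC;
```

Snapshot this alongside the usage DMVs and you can see how quickly fragmentation actually accumulates, instead of rebuilding on a fixed schedule.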
As for the original question, how do you reduce the amount of log generated during a reindex? There are only two viable solutions: switch to a recovery model that allows minimally logged operations (bulk-logged or simple), or rebuild less.
You write that this causes mirroring backlogs/delays, which means option one is off the table, since minimal logging and mirroring don't mix (mirroring requires the full recovery model). That leaves only the option to rebuild less. Partition rebuilds can come in handy here, but only offline partition rebuilds are supported. You can minimize the offline time by using a fast partition switch (i.e. you rebuild a copy of the data in a staging table, then switch the rebuilt copy into place), as sketched below.
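A hedged sketch of both pieces, with hypothetical object names: dbo.Sales is partitioned on SaleDate, dbo.Sales_Staging and dbo.Sales_Old are empty tables matching its schema and indexes on the right filegroup, and dbo.Sales_Staging carries a check constraint covering partition 3's range (SWITCH requires it):

```sql
-- Straight offline rebuild of the one partition that actually churned.
ALTER INDEX IX_Sales ON dbo.Sales REBUILD PARTITION = 3;

-- Fast-switch alternative: rebuild a copy, then swap it in.
-- 1. Populate and optimize the staging copy while dbo.Sales stays online.
INSERT INTO dbo.Sales_Staging WITH (TABLOCK)
SELECT *
FROM dbo.Sales
WHERE SaleDate >= '20120101' AND SaleDate < '20120201';  -- partition 3's range
ALTER INDEX ALL ON dbo.Sales_Staging REBUILD;

-- 2. The switches are metadata-only, so the offline window is
--    just these two statements.
ALTER TABLE dbo.Sales SWITCH PARTITION 3 TO dbo.Sales_Old;
ALTER TABLE dbo.Sales_Staging SWITCH TO dbo.Sales PARTITION 3;
```

Note that step 1 still generates fully logged writes under the full recovery model; what the switch buys you is availability, not less log.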
Ultimately, at 1TB, you should never be rebuilding all of it. Old data that never changes should lie at rest, compacted and archived. Current data is subject to churn and change, and you should only have to rebuild what has changed.
Best Answer
Yes, it will influence initial plan compile time as the optimizer will have many extra access paths to the data to consider.
Since you're on SQL Server 2017, loading once, and then running reports, why not just use a clustered columnstore index instead?
That seems to be the ideal solution to your need to index every possible column combination.
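For illustration (the table name is hypothetical, and the table is assumed to be a heap; otherwise drop the existing clustered index first), it is a single statement, and the columnstore then serves scans over any column combination without a b-tree index per combination:

```sql
-- Convert a load-once reporting table to a clustered columnstore index.
-- Every column becomes a compressed column segment, so any predicate or
-- aggregate benefits without additional per-column indexes.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
ON dbo.FactSales;
```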
Columnstore indexes - Overview