I'll be the first to admit the documentation is not very clear on this. It states that you should be offloading your backups to the secondaries, but while most of those statements are made in a general sense, I think they really mean, specifically, log backups (and copy_only backups, if you have some need for those).
You should still occasionally run full backups on the primary, IMHO. The copy_only restriction is about full backups, not log backups, and AFAIK it only applies to the secondary.
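To illustrate the distinction, here is a minimal sketch (database name and backup paths are hypothetical placeholders): a regular full backup on the primary, a COPY_ONLY full backup on a secondary (the only kind of full backup a secondary replica allows), and a plain log backup, which needs no such option on the secondary:

```sql
-- On the PRIMARY replica: a regular full backup,
-- which establishes the base of the backup chain.
BACKUP DATABASE MyAG_DB
    TO DISK = N'X:\Backups\MyAG_DB_full.bak'
    WITH INIT, COMPRESSION;

-- On a SECONDARY replica: full backups must be COPY_ONLY,
-- so they do not affect the differential base or the chain.
BACKUP DATABASE MyAG_DB
    TO DISK = N'X:\Backups\MyAG_DB_copyonly.bak'
    WITH COPY_ONLY, INIT, COMPRESSION;

-- Log backups run on the secondary WITHOUT copy_only,
-- and they do form part of the log chain.
BACKUP LOG MyAG_DB
    TO DISK = N'X:\Backups\MyAG_DB_log.trn'
    WITH INIT, COMPRESSION;
```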
You will not be able to use DBCC SHRINKFILE against your current transaction logs because they are full of activity that has technically not been backed up. Once you take a full (non-copy_only) backup on the primary, then let one log backup run, you should be able to shrink the log file manually. Your log backups are working right now because the database is in the FULL recovery model, but the logs are continuing to grow because (presumably) you've never run a proper full backup on the primary.
This should be a one-time operation, and you shouldn't shrink them too small; you need to set your primary to be backing up regularly, and you need to accommodate the largest burst of activity that will occur between full or log backups. I won't go on and on about how shrinking files only so they can grow again is a futile exercise and guarantees poor performance, but I could. :-)
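The one-time sequence described above might look like this (database name, file name, paths, and the target size are all placeholders; choose a target that accommodates your real workload, not the value shown):

```sql
-- 1. A regular (non-COPY_ONLY) full backup on the PRIMARY.
BACKUP DATABASE MyAG_DB
    TO DISK = N'X:\Backups\MyAG_DB_full.bak';

-- 2. Let one log backup run (or take one explicitly),
--    which marks the inactive log as reusable.
BACKUP LOG MyAG_DB
    TO DISK = N'X:\Backups\MyAG_DB_log.trn';

-- 3. Now the inactive portion of the log can be released.
--    4096 MB is a placeholder target, not a recommendation.
USE MyAG_DB;
DBCC SHRINKFILE (N'MyAG_DB_log', 4096);
```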
No. After you restored the database in your development environment, it became a different database. Re-attaching it back to production would replace the production database, and all transactions that occurred after your initial detach would be lost. Doing what you describe usually requires a very complex solution, involving setting up replication from production to staging and continuously replicating production transactions to the staging server. Needless to say, developing and testing such a complex solution is only worth it for a ... worthy goal. Shrinking your database is not such a worthy goal.
Handling a situation of runaway data that was never designed for deletion (which describes 99% of all projects, since the need to delete data efficiently is never evident during development, when the tables are empty) is actually quite hard. Partitioning is by far the best solution, but it is a huge undertaking with serious impact on the application. Short of that, deleting old data in small batches is the next best thing, provided an appropriate supporting index is provisioned.
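As a sketch of the batched approach (the table, column, batch size, and retention cutoff are all hypothetical; the filter column should be the leading key of the supporting index):

```sql
-- Delete old rows in small batches, keeping each transaction
-- short so locks are released and log reuse stays possible.
-- Assumes an index with leading column (CreatedDate).
DECLARE @rows int = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (5000)
    FROM dbo.EventLog
    WHERE CreatedDate < DATEADD(MONTH, -12, GETUTCDATE());

    SET @rows = @@ROWCOUNT;

    -- In FULL recovery, log backups must keep running alongside
    -- this loop, or the log will still grow despite the batching.
END;
```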
Best Answer
No physical truncation actually occurs; the size of the file does not get smaller.
Parts of the log file (virtual log files, or VLFs) are merely flagged as 'this content is allowed to be overwritten'. If a wait event for this exists, and I know of none, there would be no practical duration.
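You can observe those flags directly. On SQL Server 2016 SP2 and later, the sys.dm_db_log_info DMF returns one row per VLF, including its status (as a rough guide: 0 means reusable/truncated, 2 means active):

```sql
-- One row per virtual log file (VLF) in the current database.
-- vlf_status = 0: reusable (logically truncated); 2: active.
SELECT vlf_begin_offset,
       vlf_size_mb,
       vlf_status,
       vlf_active
FROM sys.dm_db_log_info(DB_ID());
```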
If you are referring to 'auto-shrink', I would highly recommend you read this post 'Auto-shrink – turn it OFF!' by Paul Randal.