You should be aiming to auto-grow as little as possible. Seven times a day is excruciating, even with instant file initialization.
Don't do a Shrink Database. Ever. Shrinkfile, maybe, but only after an extraordinary event. Shrinking it just to grow again is an exercise in futility and should actually be called auto-fragment.
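For that rare "extraordinary event" case, a file-level shrink looks roughly like this (database and logical file names here are placeholders, and the target size is something you'd pick yourself):

```sql
-- Hypothetical one-off cleanup after an abnormal event bloated the log.
-- Shrink ONLY the affected file, to a sensible target -- not to zero.
USE YourDatabase;
GO
-- Find the logical file names first:
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;

-- Target size is in MB; 2048 here is just an example:
DBCC SHRINKFILE (YourDatabase_log, 2048);
```

Note that shrinking data files also fragments indexes, which is another reason this should be a rare, deliberate action rather than a scheduled job.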
If recovery model is simple, there is no way on earth you should need to grow your log file by 250 GB. The used space in the file will clean itself out automatically over time, unless you started a transaction a month ago and have no intentions of ever committing it or rolling it back.
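You can check why the log isn't clearing; under simple recovery, `log_reuse_wait_desc` should normally show `NOTHING`, and `ACTIVE_TRANSACTION` points at exactly that kind of long-open transaction (database name below is a placeholder):

```sql
-- Why can't the log clear? NOTHING = healthy; ACTIVE_TRANSACTION,
-- REPLICATION, etc. name the thing holding the log hostage.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'YourDatabase';

-- And find the oldest open transaction in the current database:
DBCC OPENTRAN;
```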
So my advice would be:
Grow the data file manually during a quiet period to a size that will accommodate several months of growth. What are you saving the space for in the meantime?
Set the auto-growth increment for the data file to something relatively small (so that it doesn't interrupt users when it does happen), and alert on this event (you can catch it in the default trace, for example, or through extended events). This can tell you that you are hitting the high point you estimated and it is time to grow manually again. At this point you will want to keep this manual in case you want to add a new file / filegroup on a different drive to accommodate the space, since eventually you will fill the current drive.
Grow the log file manually to, say, twice the largest it's ever been. It shouldn't auto-grow further unless there is some abnormal transaction holding things up. You should monitor for this event as well, so that you know when it happens.
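Putting the advice above together, a sketch of the pre-sizing and the default-trace check might look like this (database name, file names, and sizes are all placeholder assumptions you'd replace with your own):

```sql
-- Pre-size files during a quiet window, leaving a small growth
-- increment as a safety net rather than as the primary mechanism.
ALTER DATABASE YourDatabase
    MODIFY FILE (NAME = YourDatabase_data, SIZE = 200GB, FILEGROWTH = 256MB);

ALTER DATABASE YourDatabase
    MODIFY FILE (NAME = YourDatabase_log, SIZE = 8GB, FILEGROWTH = 512MB);

-- Catch auto-growth after the fact via the default trace.
-- EventClass 92 = Data File Auto Grow, 93 = Log File Auto Grow.
DECLARE @path nvarchar(260) =
    (SELECT path FROM sys.traces WHERE is_default = 1);

SELECT te.name AS event_name,
       t.DatabaseName,
       t.FileName,
       t.StartTime,
       t.Duration / 1000 AS duration_ms   -- trace Duration is in microseconds
FROM sys.fn_trace_gettable(@path, DEFAULT) AS t
JOIN sys.trace_events AS te
    ON t.EventClass = te.trace_event_id
WHERE t.EventClass IN (92, 93)
ORDER BY t.StartTime DESC;
```

If any rows come back, it's time to schedule the next manual growth (or to plan that new file/filegroup on another drive).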
If any of the commenters posts an answer, I'll re-mark it as the accepted answer. I keep checking back, but I'll eventually forget, so I'm posting the answer myself in case someone else runs into this:
The problem was exactly what Kenneth Fisher, 8bit, and Kin thought: the transaction log was gigantic relative to my small database (a 60 GB log against 2 GB of data) because of failed replication.
Replication had been configured on the database but disabled for about 7 months. I assume SQL Server was queuing up all of the changed data for replication in case the replication configuration was ever re-enabled.
This was absolutely a case of misconfiguration. The customer had disabled replication in testing, moved away from using replication, but never went back and deleted the config, thereby, over time, creating the problem.
After going into SQL Replication in SSMS and deleting the configuration, ~3 mins later, the log file went from 60GB to 43MB.
Now, I'm not sure if it would've done that on its own. I ended up running `CHECKPOINT` on the database twice, as was previously suggested, with no immediate effect. Spot-checking the log file size a few minutes later showed the dramatic difference, so I'm not sure whether the checkpoints were actually needed, but ultimately the transaction log effectively disappeared as a result.
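For anyone hitting the same thing, a sketch of the diagnosis and cleanup in T-SQL (the database name is a placeholder; the GUI route through SSMS described above does the same job):

```sql
-- If replication metadata is what's pinning the log, this will
-- report REPLICATION as the reuse wait:
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'YourDatabase';

-- Remove the orphaned replication configuration from the database,
-- then give the log a chance to clear:
USE YourDatabase;
GO
EXEC sp_removedbreplication N'YourDatabase';
CHECKPOINT;

-- The log should now show mostly free internal space:
DBCC SQLPERF (LOGSPACE);
```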
Best Answer
The logic for the disk usage report is baked into SSMS, and while we can't know what the RDL looks like (or whether any filtering is done), I captured the query that SSMS 2016 sends using Profiler:
This isn't substantially different from Aaron's script, and I don't see how it could return different results. It looks to me like even if the trace files roll over, it will still iterate through all of them (and they wouldn't roll over five times in the space of a few minutes, or even hours, of testing).
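For reference, the usual trick for reading every surviving rollover file, rather than just the current one, is to strip the `_N` suffix from the current trace path and hand the base `log.trc` name to `fn_trace_gettable`, which then walks the rollover sequence. A sketch (the path manipulation assumes the standard `log_N.trc` naming):

```sql
-- Current default trace file, e.g. ...\Log\log_123.trc
DECLARE @current nvarchar(260) =
    (SELECT path FROM sys.traces WHERE is_default = 1);

-- Strip the _N suffix to get the base name, e.g. ...\Log\log.trc;
-- fn_trace_gettable then reads all rollover files in sequence.
DECLARE @base nvarchar(260) =
    LEFT(@current, LEN(@current) - CHARINDEX(N'_', REVERSE(@current))) + N'.trc';

SELECT COUNT(*) AS autogrow_events
FROM sys.fn_trace_gettable(@base, DEFAULT)
WHERE EventClass IN (92, 93);  -- data / log file auto-grow
```

The default trace keeps at most five 20 MB rollover files, so as noted above, a few minutes of testing isn't going to cycle through them.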