Community Wiki answer generated from comments on the question by Kris Gruttemeyer
Please increase your autogrow increment. Your files are growing by 1 MB at a time, which can be a nightmare for your storage system because the database is constantly expanding. Plan to give the database plenty of room to grow based on its usage and growth patterns, so you avoid a costly I/O operation every 1 MB along the way.
Setting autogrow to 500 MB will do the opposite of "killing the server": you will have more headroom after each autogrow event. Growing by 1 MB does far more harm than growing by 500 MB ever would. Think about it: you are growing your database 1 MB at a time. Every. Single. Time. That is a lot of work for your storage system and disks. Given your database size, I would start anywhere between 1 GB and 5 GB growth.
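A minimal sketch of changing the growth increment; the database name `SalesDB` and logical file name `SalesDB_Data` are placeholders you would replace with your own:

```sql
-- Set a fixed 1 GB growth increment on the data file.
-- 'SalesDB' and 'SalesDB_Data' are placeholder names; find your
-- logical file names with: SELECT name FROM sys.database_files;
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_Data, FILEGROWTH = 1GB);
```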
Here's a great article I recommend reading, specifically the first section:
I like to operate in the simplest mode possible. Unless there is a good reason to have multiple files (tempdb, faster drives for certain databases, etc.), keep everything in one. There are plenty of legitimate reasons to use multiple files, but adding them just for the sake of having them will only give you a headache. A single file also makes things much easier from a manageability standpoint: you can expand the one data file, be done with it, and go find something else to do. :)
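Proactively expanding that one data file is a single statement; again the names and the target size here are placeholder assumptions, not values from the original answer:

```sql
-- Grow the single data file to a pre-sized 100 GB so autogrow
-- events become rare. Names and size are placeholders.
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_Data, SIZE = 100GB);
```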
DB files are containers; you want to be proactive with sizing and give the database room to grow without going overboard (50 GB of data in a 2 TB data file would be excessive and a waste of space). If I have a 2 TB data file that holds only 1 GB of data, SQL Server only has to work on the 1 GB of data it needs, not the entire 2 TB file.
Best Answer
You could create a SQL Server Agent job that runs daily and checks the free space using a query like the one below:
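The query itself did not survive in this copy of the answer; a minimal sketch using `sys.database_files` and `FILEPROPERTY` (the column aliases are my own):

```sql
-- Free space per file in the current database.
-- size is reported in 8 KB pages, so divide by 128 to get MB;
-- FILEPROPERTY(..., 'SpaceUsed') returns the pages actually in use.
SELECT
    name                                                          AS logical_name,
    size / 128.0                                                  AS file_size_mb,
    CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128.0          AS space_used_mb,
    (size - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)) / 128.0 AS free_space_mb
FROM sys.database_files;
```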
It may also make sense to create a table and write these values to it every day, so you have a history of how the space changes over time.
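One way to set that up, assuming a hypothetical history table (all object names here are placeholders) populated by the same Agent job:

```sql
-- Hypothetical history table for daily space measurements.
CREATE TABLE dbo.FileSpaceHistory (
    captured_at   datetime2      NOT NULL DEFAULT SYSUTCDATETIME(),
    logical_name  sysname        NOT NULL,
    file_size_mb  decimal(18, 2) NOT NULL,
    free_space_mb decimal(18, 2) NOT NULL
);

-- Step the Agent job runs daily to record the current values.
INSERT INTO dbo.FileSpaceHistory (logical_name, file_size_mb, free_space_mb)
SELECT
    name,
    size / 128.0,
    (size - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)) / 128.0
FROM sys.database_files;
```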