Honestly, you're basically saying "my application is using all of this data, how do I make it stop doing that?". Tell your end users or application to stop. Not going to go over well? Didn't think so.
SQL Server uses a least-recently-used style algorithm to decide which pages stay in the buffer pool: pages that are referenced more often age out more slowly. If you want to keep certain pages in cache longer, use them more. If you want a table to stay in cache, set up an Agent job that runs a SELECT against that table every 2 minutes; that keeps its pages with a recent last-used value and a high reference count.
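If you want to see which tables are actually occupying the buffer pool before deciding what to keep warm, a simplified sketch like the following works against the standard DMVs. It covers in-row data only and counts 8 KB pages per object in the current database:

```sql
-- Sketch: approximate buffer pool usage per object in the current database.
-- In-row and row-overflow allocation units only; adjust filters as needed.
SELECT o.name AS object_name,
       COUNT(*) * 8 / 1024 AS cached_mb
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au
  ON au.allocation_unit_id = bd.allocation_unit_id
JOIN sys.partitions AS p
  ON au.container_id = p.hobt_id AND au.type IN (1, 3)
JOIN sys.objects AS o
  ON o.object_id = p.object_id
WHERE bd.database_id = DB_ID()
GROUP BY o.name
ORDER BY cached_mb DESC;
```

Running this before and after your workload makes it obvious which tables are evicting the ones you care about.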
If the problem is disk thrashing, I would suggest asking for budget and installing more RAM. This is normal given how SQL Server works. You could also ask for faster disks if the storage is DAS/local, or have your storage team investigate the slow performance of your disks if it is iSCSI/SAN/NAS. Either way, the crux of your issue is slow disks due to thrashing, slow disks in general, or not enough memory.
Lastly, I would check your plan cache to make sure it isn't bloated with a bunch of single-use ad-hoc plans that aren't parameterized correctly. That could get back a few hundred MB of memory.
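As a quick sketch, this query sums the cache memory held by single-use ad-hoc plans so you can see whether the problem is worth chasing:

```sql
-- Sketch: count and total size of single-use ad-hoc plans in the plan cache.
SELECT COUNT(*) AS single_use_plans,
       SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS total_mb
FROM sys.dm_exec_cached_plans
WHERE objtype = 'Adhoc'
  AND usecounts = 1;
```

If the totals are large, the instance-level 'optimize for ad hoc workloads' setting is worth investigating, since it stores a small stub instead of a full plan on first execution.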
The transaction log for that database contains all transactions since the last transaction log backup, or since the database was last switched out of the SIMPLE recovery model. Execute the following to get the definitive answer as to why SQL Server cannot truncate the log, and therefore why the log is growing:
```sql
SELECT d.name,
       d.log_reuse_wait_desc
FROM sys.databases AS d
ORDER BY d.name;
```
If you want point-in-time recovery, leave the database in the FULL recovery model and take frequent log backups. Each log backup contains all transactions since the previous log backup. The log backup process is also responsible for clearing the log and marking the space for reuse, i.e. subsequent transactions are written to the start of the truncated log in a circular fashion. This backup-and-reuse cycle is what prevents the log file from growing.
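A minimal sketch of such a log backup, where the database name MyDB and the backup path are placeholders for your own:

```sql
-- Sketch only: MyDB and the backup path are placeholders.
-- Schedule this frequently (e.g. every 15 minutes) via an Agent job.
BACKUP LOG MyDB
TO DISK = N'X:\Backups\MyDB_log.trn'
WITH CHECKSUM;
```

Each run both protects the transactions since the previous log backup and truncates the inactive portion of the log for reuse.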
If you are not interested in point-in-time recovery and want to simplify administration of the database, set it to the SIMPLE recovery model and do not take t-log backups. SQL Server will then truncate the transaction log automatically at each checkpoint, meaning that once transactions are committed and checkpointed, their log records are marked reusable and overwritten by later transactions.
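Switching the recovery model is a single statement; MyDB is a placeholder database name:

```sql
-- Sketch only: MyDB is a placeholder database name.
ALTER DATABASE MyDB SET RECOVERY SIMPLE;
```

Be aware this breaks the log backup chain, so it is only appropriate where point-in-time recovery is genuinely not needed.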
Either way, once you've made one of these two decisions you can then shrink the log file down to a more reasonable size. Ideally, make it large enough that it doesn't need to grow, but not so large that you'll need to shrink it again. Also note that you cannot shrink past the active portion of the log.
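The shrink itself can be sketched as follows; 'MyDB_log' is a placeholder logical file name (check sys.database_files for yours) and 1024 MB is an illustrative target size:

```sql
-- Sketch only: logical file name and target size (MB) are placeholders.
USE MyDB;
DBCC SHRINKFILE (N'MyDB_log', 1024);
```

If the file doesn't shrink as far as expected, the active portion of the log is likely at the end of the file; take another log backup (or wait for a checkpoint in SIMPLE) and retry.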
Download and deploy the https://ola.hallengren.com/ database administration solution to cover backups, index fragmentation, statistics maintenance and CHECKDB.
You might also find the 'Disk Usage' report useful for checking free space in the t-log: in Object Explorer, right-click the database > Reports > Standard Reports > 'Disk Usage'.
I also recommend that you Google why it's so important to keep the log chain intact from a DR point of view, and how switching from full to simple breaks the chain leaving you exposed to data loss.
Best Answer
To stop your transaction log from growing, you need to take regular transaction log backups (along with full database backups).
If it is a test database and you don't need to recover it to a point in time in case of a disaster, you can switch the database recovery model to SIMPLE (you are probably using FULL recovery because it is the default in SQL Server).
Once you are running regular log backups, you can use `DBCC SHRINKFILE` to reduce the file size. Please read the following question: Why Does the Transaction Log Keep Growing or Run Out of Space? It provides a really good understanding of the problem.