So, after a bit of thought, I realized that I can just treat my User Information as another piece of data.
So I can add a "ChangedBy" and a "ChangedWhen" column to each table. Whenever my service updates a record, it also updates ChangedBy and ChangedWhen, and the Change Data Capture history then includes the data I need.
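As a sketch of that idea, assuming a hypothetical dbo.Orders table (the table, column, and parameter names here are illustrative, not from the original post):

```sql
-- Add the audit columns to an existing table.
ALTER TABLE dbo.Orders ADD
    ChangedBy   nvarchar(128) NULL,
    ChangedWhen datetime2     NULL;

-- The service stamps both columns on every write, so CDC
-- captures "who" and "when" along with the data change.
UPDATE dbo.Orders
SET    Status      = @NewStatus,
       ChangedBy   = @CurrentUser,      -- identity passed in by the service layer
       ChangedWhen = SYSUTCDATETIME()
WHERE  OrderId     = @OrderId;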
If either of the commenters posts an answer, I'll re-mark it as accepted for them. I've been checking back but will eventually forget to, so I'm going to answer in case someone else runs into this:
The problem was exactly what Kenneth Fisher, 8bit, and Kin thought: the transaction log was gigantic relative to my small database (a 60 GB log against 2 GB of data) because of failed replication.
Replication had been configured on the database but disabled for about seven months. I assume SQL Server was retaining all of the changed data in the log so it could be replicated once the replication configuration was re-enabled.
This was absolutely a case of misconfiguration. The customer had disabled replication during testing and moved away from using it, but never went back and deleted the configuration, which created the problem over time.
After going into Replication in SSMS and deleting the configuration, the log file went from 60 GB to 43 MB within about three minutes.
Now, I'm not sure it would have done that on its own. As previously suggested, I ran CHECKPOINT on the database twice; immediate checks after those operations showed no effect, but spot-checking the log file size a few minutes later showed the dramatic difference. So I'm not sure whether the checkpoints were needed, but ultimately the transaction log bloat effectively disappeared.
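For anyone diagnosing the same situation, these T-SQL checks would have surfaced the cause (the database name is a placeholder); sys.sp_removedbreplication is the scripted equivalent of deleting the configuration in SSMS:

```sql
-- Why can't the log be truncated? A value of REPLICATION here
-- points at a stale or broken replication configuration.
SELECT name, log_reuse_wait_desc
FROM   sys.databases
WHERE  name = N'MyDatabase';            -- placeholder database name

-- Current log file size and percentage in use, per database.
DBCC SQLPERF(LOGSPACE);

-- Scripted equivalent of removing the replication configuration.
EXEC sys.sp_removedbreplication N'MyDatabase';

-- Flush dirty pages so log truncation can proceed.
CHECKPOINT;
```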
Best Answer
Then you're going to need to fundamentally change the way your application behaves in terms of security and auditing. You'll also need to set up some sort of auditing, either in the application or in the database.
No, the log was not meant to be a developer tool.
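For the database-level option mentioned above, SQL Server Audit is one way to capture data changes without touching the log; a minimal sketch, where the audit name, file path, database, and table are all placeholders:

```sql
USE master;
-- Define where audit records are written.
CREATE SERVER AUDIT AppAudit
    TO FILE (FILEPATH = N'C:\Audits\');   -- placeholder path
ALTER SERVER AUDIT AppAudit WITH (STATE = ON);

USE MyDatabase;                           -- placeholder database
-- Record every insert/update/delete against the audited table.
CREATE DATABASE AUDIT SPECIFICATION AppAuditSpec
    FOR SERVER AUDIT AppAudit
    ADD (INSERT, UPDATE, DELETE ON dbo.Orders BY public)
    WITH (STATE = ON);
```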