macOS – any way to hamstring the system?

macos performance

I'm looking to temporarily reduce the number of cores and the amount of RAM available to my system, as well as the processing power.

I've written a server that uses epoll (Linux) and I want to get a rough comparison of its performance against a server I've written using kqueue (BSD/Darwin).
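For reference (this is not the asker's actual code), the two APIs map onto each other fairly directly. Below is a minimal sketch of a kqueue read-readiness loop, with the roughly equivalent epoll calls noted in comments; `listen_fd` is assumed to be an already-bound, listening, non-blocking socket.

```c
/* Minimal kqueue accept loop; epoll equivalents noted in comments. */
#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>
#include <sys/socket.h>
#include <stdio.h>

int serve(int listen_fd) {
    int kq = kqueue();                          /* like epoll_create1(0) */
    if (kq == -1) { perror("kqueue"); return -1; }

    struct kevent change;
    EV_SET(&change, listen_fd, EVFILT_READ, EV_ADD, 0, 0, NULL);
    if (kevent(kq, &change, 1, NULL, 0, NULL) == -1) {   /* like epoll_ctl(EPOLL_CTL_ADD) */
        perror("kevent register");
        return -1;
    }

    struct kevent events[64];
    for (;;) {
        int n = kevent(kq, NULL, 0, events, 64, NULL);   /* like epoll_wait */
        if (n == -1) { perror("kevent wait"); return -1; }

        for (int i = 0; i < n; i++) {
            if ((int)events[i].ident == listen_fd) {
                /* New connection: accept and register the client socket. */
                int client = accept(listen_fd, NULL, NULL);
                if (client != -1) {
                    EV_SET(&change, client, EVFILT_READ, EV_ADD, 0, 0, NULL);
                    kevent(kq, &change, 1, NULL, 0, NULL);
                }
            } else {
                /* Readable client socket: handle the request here. */
            }
        }
    }
}
```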

My MacBook Pro is far more powerful than the Linux machine I'll be using, so I want to rein it in a bit.

I was thinking about creating a virtual machine and giving it the resources that I don't want available to macOS, but I'm not sure how resource sharing works with a VM, or whether this would be a reliable way of hamstringing my system. Also, it would only reduce the number of cores, not alter the performance of the CPU.

Is there an easier and possibly more reliable way to do this? Please note that I'm a bit thick when it comes to hardware.

Best Answer

Pulling the RAM seems a reliable way to constrain that resource.

Running in virtualization should help if you can tell your preferred stack to allocate fewer threads or cores, but I would question your main assumption that these are the critical bottlenecks for making your measurements meaningful.

Why not let the tests run at full speed and simply measure where the bottlenecks on the Mac arise? You can measure VM paging, I/O statistics and the time to run each job on your Mac, and then run a similar test on the production hardware.
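One way to capture comparable per-run numbers on both machines (a sketch, not part of the original answer) is getrusage(2), which reports maximum resident set size and page-fault counts on both macOS and Linux; note that ru_maxrss is in bytes on macOS but kilobytes on Linux. System-wide, vm_stat and iostat on the Mac versus vmstat and iostat on Linux cover the same ground. Here `run_benchmark` is a hypothetical stand-in for the job being timed.

```c
/* Rough per-run measurement: wall-clock time, max RSS, and page faults. */
#include <sys/resource.h>
#include <sys/time.h>
#include <stdio.h>

static void run_benchmark(void) {
    /* Placeholder workload: replace with the job you actually want to measure. */
    volatile unsigned long sum = 0;
    for (unsigned long i = 0; i < 100000000UL; i++) sum += i;
}

int main(void) {
    struct timeval start, end;
    gettimeofday(&start, NULL);

    run_benchmark();

    gettimeofday(&end, NULL);
    double wall = (end.tv_sec - start.tv_sec) +
                  (end.tv_usec - start.tv_usec) / 1e6;

    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);

    printf("wall time:    %.3f s\n", wall);
    printf("max RSS:      %ld\n", (long)ru.ru_maxrss);  /* bytes on macOS, KB on Linux */
    printf("major faults: %ld\n", (long)ru.ru_majflt);  /* faults that required I/O */
    printf("minor faults: %ld\n", (long)ru.ru_minflt);
    return 0;
}
```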

If the code runs twice as fast on the Mac and you don't have paging or I/O contention, that should scale reasonably well to the measurements on the Linux hardware. I totally get the wish to control this and eliminate variables, but systems need to be balanced everywhere: what is a bottleneck on one system or architecture is often not the same bottleneck on another.
