Yes, 2.4GHz vs 3.4GHz is a big difference. However, you're comparing the same processor, and both are capable of achieving 3.4GHz. For the i7-4700MQ, 2.4GHz is the base speed and 3.4GHz is the turbo speed.
If the processors had genuinely different clock speeds (and all the other features were mostly the same), there would be a pretty big difference in their capability.
Note that there are other features of a processor that can outweigh small differences (e.g., 0.1-0.2GHz) in clock speed. Features like:
- the number of cores
- whether hyper-threading is enabled
- cache sizes
will also affect the comparison between two processors.
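If you're curious what your own machine reports for some of these, here is a quick sketch using Python's third-party psutil library (assuming it is installed; cache sizes are not exposed this way):

```python
import psutil

# Physical vs. logical cores (logical > physical implies hyper-threading).
physical = psutil.cpu_count(logical=False)
logical = psutil.cpu_count(logical=True)
print(f"Physical cores: {physical}, logical cores: {logical}")

# Current, minimum, and maximum clock speeds in MHz.
freq = psutil.cpu_freq()
if freq is not None:  # not available on every platform
    print(f"Current: {freq.current:.0f}MHz, min: {freq.min:.0f}MHz, max: {freq.max:.0f}MHz")
```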
Whether or not you would notice these differences also depends on what you're using the computer for. If you aren't using your computer for gaming, video/music rendering, or some other computationally-intensive task, the difference will mean very little to you.
Apart from the direct answer:
If you want a computer that performs well for general tasks (e.g., internet browsing, document editing, etc.), get a computer with an SSD. Especially when you're buying a laptop, an SSD will increase battery life and drastically shorten boot time (and speed up other drive-intensive operations). When it comes to general-purpose computing, an SSD makes more of a difference than faster RAM or CPUs.
Not to judge your purchase, but both computers come with 1TB 5400RPM drives, and those are slow. Unless you actually need all that space, you will enjoy the capabilities of an SSD more.
And if you are looking to buy an HP laptop, do some research to make sure that none of the ports or internal parts are proprietary - HP is notorious for this.
Does a CPU consume less power when it is idle?
Any modern CPU: yes. An old 6502 at 980kHz, or similar from that era, probably would not. It always drew more or less the same current at the same voltage, and if it had nothing to do, it entered a busy-wait. Essentially it was always busy, even if only doing this:
1. 'Do I have some work?'
2. 'No? Then go back to step 1.'
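In rough Python terms, the contrast between that old always-busy pattern and a modern "sleep until something happens" pattern looks like this (work_available and do_work are made-up stand-ins, not real APIs):

```python
import random
import time

def work_available():
    return random.random() < 0.01  # hypothetical: work shows up occasionally

def do_work():
    pass  # placeholder for the actual job

# Old style: the polling loop itself keeps the CPU 100% busy, even when idle.
def busy_wait_loop():
    while True:
        if work_available():
            do_work()
        # otherwise: immediately ask again

# Modern style: yield while idle (time.sleep stands in for the CPU's
# HLT instruction; the OS wakes the loop up later).
def event_driven_loop():
    while True:
        if work_available():
            do_work()
        else:
            time.sleep(0.001)
```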
However, the speeds you mentioned (800MHz and 4.0GHz) point to a modern setup, as does the term SpeedStep, which I mostly remember from early Intel laptop CPUs.
Work on a CPU usually follows this pattern:
- The instruction counter on a core is read and incremented.
- An instruction is read from that address.
- That instruction is decoded (if needed) and acted upon.
- Usually, go back to the start.
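As a toy illustration, here is that loop for a made-up two-instruction machine (nothing here corresponds to a real instruction set):

```python
# A made-up machine with two instructions: INC (add 1 to the register)
# and JMP n (set the instruction counter to n).
program = ["INC", "INC", "JMP 0"]  # increments forever

pc = 0        # instruction counter
register = 0

for _ in range(10):                  # run 10 steps instead of forever
    instruction = program[pc]        # read the instruction at pc...
    pc += 1                          # ...and increase the counter
    if instruction == "INC":         # decode it and act upon it
        register += 1
    elif instruction.startswith("JMP"):
        pc = int(instruction.split()[1])
    # then go back to the start
print(register)                      # 7 after 10 steps
```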
This means a CPU is continually busy doing things, and doing things means changing the state of transistors, which consumes power. Higher speeds mean more changes, and thus more power used.
Now, if we could stop the whole CPU by using the HLT instruction when it has nothing to do, then it would draw no (or significantly less) power.
Note that, in terms of total energy, you do not automatically gain anything from having a faster CPU do the same operations in less time.
E.g.:
- A slow CPU taking 20 seconds for a job, drawing 35W the whole time.
- A fast CPU doing the same job in 10 seconds, but needing 70W during that time.
Power used (CPU wise only) would come out the same in both cases.
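To make the arithmetic explicit: 35W × 20s = 700 joules, and 70W × 10s = 700 joules, so the job costs the same energy either way.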
There is a catch though: the faster CPU often needs a higher voltage to be able to change its states faster. That means it may draw the same current, but the power used increases.
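The usual rule of thumb is that dynamic (switching) power scales roughly as P ≈ C × V² × f: linearly with frequency, but quadratically with voltage. That quadratic term is why lowering voltage together with frequency saves so much.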
Thus it makes sense to scale back the CPU frequency (and the voltage) when it has significant periods with no productive tasks.
To answer this part:
> If I were to disable SpeedStep and let my clock run at 4.0GHz all the time, is there a difference in power consumption when CPU cycles are spent in an application vs. in idle cycles?
Yes, there would be. If the clock speed is always 4.0GHz, the voltage must always be sufficient for operation at that speed.
No lowered voltages, no power saved.
As for SpeedStep:
The first I heard of this was around the mobile Pentium era (Pentium II, Pentium III, mobile Pentium CPUs, ...). Windows/Intel platforms from that era shipped with something called SpeedStep, allowing the OS to lower the speed of your CPU and to lower the voltage supplied to it.
These days more of this functionality is in hardware or done with help from ACPI, and the CPU is not just lowered in speed; it can be placed in one of several lower-power states (C-states). Some of these merely halt the execution of instructions, some power down parts of the chip. This part is a lot more complex, since powering down a whole core, flushing its cache beforehand, and shutting down its memory interface also takes time (and power). Ditto for bringing it back online. Modern schedulers do a complex dance with multiple cores, core speeds, heat budgets, and power states. They do not do this because making a more complex chip is fun; they do it because it lets them temporarily increase speed (turbo boost) and save power.
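On Linux you can watch part of this machinery directly. A small sketch, assuming the cpufreq sysfs interface is present on your system:

```python
from pathlib import Path

# Standard cpufreq sysfs files on Linux; guard or adjust on other platforms.
cpu0 = Path("/sys/devices/system/cpu/cpu0/cpufreq")

governor = (cpu0 / "scaling_governor").read_text().strip()
cur_khz = int((cpu0 / "scaling_cur_freq").read_text())
max_khz = int((cpu0 / "cpuinfo_max_freq").read_text())

print(f"Governor: {governor}")
print(f"Current: {cur_khz / 1_000_000:.2f}GHz of {max_khz / 1_000_000:.2f}GHz max")
```

Run it a few times under load and while idle, and you can see the governor moving the clock around.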
Disabling all this and always running at the same speed negates these advantages. It is only sensible to do this when you are going to push the chip to its limits (e.g., when overclocking), since it causes fewer power fluctuations.
Best Answer
The lack of difference in utilization at different clock frequencies might be due to the computation not being limited by clock speed. E.g., if memory access latency or bandwidth is the main factor limiting performance, then decreasing the clock frequency may not significantly reduce performance (so utilization would remain more or less constant).
Another factor might be the granularity of utilization tracking. If simple 1ms timing is used, then any fraction of a time granule could be counted as an entire granule. If the activity is frequent (80 times per second) but extremely short-lived (<1ms for each burst of activity in full-speed mode; even just 500,000 CPU cycles, about 0.7ms at 0.7GHz, can accomplish some work), then both clock frequencies would show the same measured utilization.
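A back-of-the-envelope sketch of that effect, using the numbers above (1ms ticks, 80 bursts per second, 500,000 cycles per burst; all assumed for illustration, not measured):

```python
import math

def utilization(freq_hz, bursts_per_sec=80, cycles_per_burst=500_000, tick_sec=0.001):
    burst_sec = cycles_per_burst / freq_hz  # true length of one burst
    true_busy = bursts_per_sec * burst_sec  # true busy time per second
    # Tick-granularity accounting: each burst occupies at least one whole
    # tick (assuming bursts land in separate ticks).
    measured = bursts_per_sec * math.ceil(burst_sec / tick_sec) * tick_sec
    return true_busy, measured

for ghz in (0.7, 4.0):
    true_busy, measured = utilization(ghz * 1e9)
    print(f"{ghz}GHz: true busy {true_busy:.1%}, measured {measured:.1%}")
```

Both frequencies report the same 8% measured utilization even though the true busy time differs by more than 5x.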
It is also possible that in low-power mode the system is doing less work. This could be a very reasonable design choice. Extra work in full speed mode might allow greater responsiveness or provide some other benefit at the cost of energy efficiency. In low-power mode, energy efficiency would be more aggressively sought.