Cores and threads? How does it all work, exactly?

Tags: core, cpu, memory, process, threads

I am confused by this … if a CPU has 2 logical cores, it can run two programs 100% concurrently, yes? Otherwise, two programs on one CPU must be 100% time-divided (they can't run independently, since the same single core must keep switching between contexts). If that's true, how are programs shared between cores and threads?

For example, say I have 100 processes running on 2 cores … will the OS try to divide them 50 per core for load balancing? Will they be randomly scattered?

Say I launch mspaint.exe on a quad-core Intel chip … which core will it execute on (core 1, 2, 3, 4?), and will it keep executing there until it closes? Basically, which logical CPU does what with which program, and will the cores process different execution points of the program in parallel as it runs from RAM?

Also, what if you had 200 threads across 100 processes on 4 cores … will each thread stay on the core it was load-balanced onto between context switches?

Last question: Is it truly possible to pick a specific core, or to program for multiple cores directly, without a transparent daemon or the OS deciding it for you? How, if all anyone says is "just use threads"? Are threads mapped to cores? If so, how is a thread tied to a specific core without OS intervention, given that threads on a single core don't truly run at the same time?

Best Answer

As another user commented, it's mostly OS-dependent.

if a CPU has 2 logical cores, it can run two programs 100% concurrently, yes?

Concurrently yes; truly in parallel, not necessarily: if the two logical cores are hardware threads sharing one physical core (hyper-threading), they also share that core's execution units. See: https://softwareengineering.stackexchange.com/questions/190719/the-difference-between-concurrent-and-parallel-execution
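To make the distinction concrete, here is a rough Python sketch (standard library only; the worker function and iteration counts are just illustrative). In CPython the GIL lets only one thread execute Python bytecode at a time, so the two threads behave like two programs time-sliced on one core (concurrent), while the two processes can occupy two different cores at the same instant (parallel) and typically finish in roughly half the wall-clock time on a multi-core machine.

```python
import multiprocessing
import threading
import time

def busy(n):
    # CPU-bound countdown, just to keep a core occupied
    while n:
        n -= 1

def timed(workers):
    start = time.perf_counter()
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    N = 20_000_000

    # Concurrent: both threads make progress, but (in CPython) they take turns.
    t_threads = timed([threading.Thread(target=busy, args=(N,)) for _ in range(2)])

    # Parallel: two processes can run on two cores at the same instant.
    t_procs = timed([multiprocessing.Process(target=busy, args=(N,)) for _ in range(2)])

    print(f"2 threads   (concurrent): {t_threads:.2f}s")
    print(f"2 processes (parallel):   {t_procs:.2f}s")
```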

For example, say I have 100 processes running on 2 cores ... will the OS try to divide them 50 per core for load balancing? Will they be randomly scattered?

Each OS has its own scheduling algorithm.

Say I launch mspaint.exe on a quad-core Intel chip ... which core will it execute on (core 1, 2, 3, 4?), and will it keep executing there until it closes?

We don't know which core it will execute on, and it will most probably not run from start to finish on the same core. Again, it depends on the OS scheduler.
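You can watch this happen. Here is a small sketch that assumes Linux with glibc (sched_getcpu() is not portable) and samples which logical core the current process happens to be running on; unless the process is pinned, the scheduler is free to move it between one sample and the next.

```python
import ctypes
import time

libc = ctypes.CDLL("libc.so.6")  # sched_getcpu() lives in glibc

for _ in range(10):
    sum(i * i for i in range(300_000))  # a little CPU work for the scheduler to place
    print("currently on logical core", libc.sched_getcpu())
    time.sleep(0.1)
```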

Is it truly possible to pick a specific core, or to program for multiple cores directly, without a transparent daemon or the OS deciding it for you?

Apparently yes, by setting the process's CPU affinity: https://stackoverflow.com/questions/663958/how-to-control-which-core-a-process-runs-on
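For instance, a minimal sketch of pinning the calling process to specific cores, assuming Linux (os.sched_setaffinity is not available on Windows or macOS) and that logical cores 0 and 2 exist on your machine:

```python
import os

PID = 0  # 0 means "the calling process"

print("allowed cores before:", sorted(os.sched_getaffinity(PID)))
os.sched_setaffinity(PID, {0, 2})   # from now on, only cores 0 and 2 may run this process
print("allowed cores after: ", sorted(os.sched_getaffinity(PID)))
```

The same thing can be done from outside the program with tools like taskset on Linux, which is the kind of approach the linked answers describe.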

How, if all anyone says is "just use threads"? Are threads mapped to cores? If so, how is a thread tied to a specific core without OS intervention, given that threads on a single core don't truly run at the same time?

I didn't fully understand the question here, but the basic idea with threads is that you create them and the OS runs them using its scheduling algorithm; there's no need for you to control which logical or physical core each one runs on (there may be cases where you want to do that, though I'm not sure why).
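As a small illustration of that "just use threads" idea (Linux/glibc is assumed only for the sched_getcpu() call used to observe placement, and the worker function is made up for the example): nothing in the code picks a core, yet the scheduler spreads the threads across cores and may move them while they run.

```python
import ctypes
import threading

libc = ctypes.CDLL("libc.so.6")  # sched_getcpu(): glibc/Linux-specific, used only to report placement

def worker(name):
    for _ in range(3):
        sum(i * i for i in range(500_000))  # some CPU-bound work
        print(f"{name} observed on logical core {libc.sched_getcpu()}")

threads = [threading.Thread(target=worker, args=(f"thread-{i}",)) for i in range(4)]
for t in threads:
    t.start()   # no core is chosen here; the OS scheduler places the thread
for t in threads:
    t.join()
```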
