ap·ro·pos
/ˌæprəˈpoʊ/ [ap-ruh-poh]
–adverb
1.
fitting; at the right time; to the purpose; opportunely.
2.
Obsolete. by the way.
–adjective
3.
opportune; pertinent: apropos remarks.
—Idiom
4.
apropos of, with reference to; in respect or regard to: apropos of the preceding statement.
Definition #4 is where the Unix command's name stems from: the manual pages it returns are in reference to (apropos of) the input keyword.
In the context of a Unix or Linux process, the phrase "the stack" can mean two things.
First, "the stack" can mean the last-in, first-out records of the calling sequence of the flow of control. When a process executes, main() gets called first. main() might call printf(). Code generated by the compiler writes the address of the format string, and any other arguments to printf(), to some memory locations. Then the code writes the address to which flow of control should return after printf() finishes. Then the code executes a jump or branch to the start of printf(). Each thread has one of these function activation record stacks. Note that a lot of CPUs have hardware instructions for setting up and maintaining the stack, but other CPUs (the IBM System/360, for example) actually used linked lists of activation records that could potentially be scattered about the address space.
Second, "the stack" can mean the memory locations to which the CPU writes function arguments, and the address that a called function should return to. In this sense, "the stack" refers to a contiguous piece of the process's address space.
Memory in a Unix or Linux or *BSD process is a very long line, starting at about 0x400000 and ending at about 0x7fffffffffff (on x86_64 CPUs). The stack address space starts at the largest numerical address. Every time a function gets called, the stack of function activation records "grows down": the process code puts function arguments and a return address on the stack of activation records, and decrements the stack pointer, a special CPU register used to keep track of where, in the stack address space, the current function's variables reside.
Each thread gets a piece of "the stack" (stack address space) for its own use as a function activation record stack. Somewhere between 0x7fffffffffff and a lower address, each thread has an area of memory reserved for use in function calls. Usually this is only enforced by convention, not hardware, so if your thread calls function after nested function, the bottom of that thread's stack can overwrite the top of another thread's stack.
So each thread has a piece of "the stack" memory area, and that's where the "shared stack" terminology comes from. It's a consequence of a process address space being a single linear chunk of memory, and of the two uses of the term "the stack". I'm pretty sure that some really ancient JVMs in reality had only a single thread: any threading inside the Java code was done by that one real thread. Newer JVMs, which use real OS threads to implement Java threads, will have the same "shared stack" I describe above. Linux and Plan 9 have a process-starting system call (clone() on Linux, rfork() on Plan 9) that can set up processes that share parts of the address space while using different stack address spaces, but that style of threading never really caught on.
Best Answer
Static variables are variables that exist throughout the lifetime of the program. That is, their storage is laid out when the program is built and reserved for the whole run (as opposed to most variables, whose storage is allocated at run time on the stack or heap).