How could processes share memory in early versions of Unix? How does this compare with modern implementations of shared memory?
What are the differences in shared memory between early and modern Unix systems?

Tags: history, memory, shared-memory
Related Solutions
It doesn't work that way: the virtual machine and the host don't share the same memory. That's why it's called a virtual machine. Shared memory, as you've been using it, is an OS-level concept; you can't use it to share memory with something that's outside the control of the (guest) OS.
In principle, the virtual machine technology could offer some way to share memory between the guest and the host. But that would defeat the purpose of virtualization: it would allow guest programs to escape the virtual machine.
If you want to share data between a virtual machine and its host, use a file on the host, in a directory that's mounted in the virtual machine (e.g. through vboxsf on VirtualBox); or, more generally, use a file somewhere that's accessible on both sides.
Virtual memory is almost a decade older than Unix: the Burroughs B5000 had it in 1961. It didn't have an MMU in the modern sense (i.e. one based on pages) but provided the same basic functions. The IBM System/360 Model 67 in 1965 (still older than Unix) had an MMU. Intel x86 processors didn't get an MMU until the 80386 in 1985.
Implementing a Unix system doesn't actually require an MMU. It does require some form of virtual memory, otherwise implementing the fork system call is prohibitively difficult. The fork system call, which creates a process by copying an existing process, has been a fundamental part of Unix since the very first version, so Unix has always required virtual memory. See D. M. Ritchie and K. Thompson, The UNIX Time-Sharing System, CACM, 1974, §V “Processes and images”.
I don't know the details of the hardware that the first Unix versions ran on, but they did have virtual memory in the form of a segmented architecture. The CPU translated between pointers dereferenced by a program (virtual addresses) and actual locations in memory (physical addresses). The mapping was performed by adding an offset to the virtual address. On each context switch between processes, the register containing the offset was adjusted.
Although virtually all Unix implementations provide process isolation, this was not the case for some historical implementations on hardware that lacked memory protection (both in the 1970s, and in the 1980s with MINIX on the 8088 and 80286). Memory protection is somewhat orthogonal to address virtualization: an MMU provides both, a simple segmented architecture provides virtualization without protection, and an MPU¹ provides protection without virtualization. There is a Linux implementation for systems without an MMU, uClinux, but due to the lack of fork many programs can't run (the only supported form of fork is vfork, which requires an execve call in the child immediately afterwards).
¹ An MPU (memory protection unit) records access rights for each page of memory.
Best Answer
Very early UNIX systems did not have MMUs, so effectively all memory in the system was shared between all processes in memory. UNIX V7 was the first one that had memory management, AFAIK. The PDP-11 did not even have an MMU when it was released; see this PDF book, page 35.
As time moved forward and MMUs became commonplace, UNIX began to require one, and memory could then be separated between processes. The 1980s brought more IPC mechanisms, including shared memory managed by the OS (new in SVR1, circa 1983). SVR1 also introduced messages and semaphores, and the System V APIs for all three are still available on modern systems.