Proc Kcore – Structure of /proc/kcore on 64-bit Machine and Relation to Physical Memory

64bit gdb linux memory

Let me preface this question by saying that I've found a lot of answers to similar questions, but only for 32-bit machines; I can't find anything for 64-bit machines. Please, no answers with respect to 32-bit machines.

According to many sources on Stack Exchange, /proc/kcore can be literally dumped (e.g., with dd) to a file in order to get a copy of physical memory… But this clearly does not work for a 64-bit machine, for which /proc/kcore is 128TB in size.
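Just to show the scale (a sketch; the exact size and permissions vary by machine):

ls -lh /proc/kcore
# on a 64-bit box this reports something like 128T, i.e. the size
# of the kernel's virtual address space, not of installed RAM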

As an aside, I note that only the first 1 MB of memory can be accessed through /dev/mem, for security reasons. Getting around that involves recompiling the kernel, which I don't want to do, nor can I for my purposes (I have to work with the running kernel).
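To illustrate what I mean (a sketch; the exact failure mode depends on the kernel's CONFIG_STRICT_DEVMEM setting, which most distributions enable):

# the first 1 MiB reads fine
dd if=/dev/mem bs=1k count=1024 of=/tmp/first_mb.bin

# anything past that fails with "Operation not permitted"
# on kernels built with CONFIG_STRICT_DEVMEM
dd if=/dev/mem bs=1k skip=1024 count=1 of=/dev/null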

Ok… so /proc/kcore is an ELF core-file dump of physical memory, and it can be viewed using gdb, for example with:

gdb /usr/[blah]/vmlinux /proc/kcore

This I can do… but it's not what I want. I would like to export physical memory to a file for offline analysis, and I'm running into issues.

For one thing, I can't just dump /proc/kcore to a file, since it's 128TB. I want to dump all of physical memory, but I don't know where it is in /proc/kcore. I only see non-zero data up to byte 3600; after that it's all zeros as far as I have looked (about 40GB). I think this may have to do with how memory is mapped into /proc/kcore, but I don't understand the structure and need some guidance.
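Presumably the file's ELF program headers describe which virtual-address ranges are actually backed by data and at which file offsets they live; a sketch of how to inspect them (the output will differ per machine and per boot):

readelf -l /proc/kcore
# each LOAD segment pairs a file Offset with a VirtAddr, which looks
# like exactly the mapping I'd need to locate RAM inside the file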

More stuff I think I know: I know that only 48 bits are used for addressing, not all 64. This implies there should be 2^48 = 256TB of addressable memory… but /proc/kcore is only 128TB, which I think is because the address space is split into a canonical lower chunk from 0x0000000000000000 to 0x00007fffffffffff (128TB) and a canonical upper chunk from 0xffff800000000000 to 0xffffffffffffffff (128TB). So, somehow this makes /proc/kcore 128TB… but is that because one of these chunks is mapped to /proc/kcore and one isn't? Or for some other reason?
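The arithmetic at least checks out; in bash (just illustrative):

# total 48-bit virtual address space, expressed in TiB
echo $(( (1 << 48) >> 40 ))   # prints 256

# one canonical half (lower user half or upper kernel half), in TiB
echo $(( (1 << 47) >> 40 ))   # prints 128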

So, as an example, I can use gdb to analyze /proc/kcore and find, e.g., the location (?) of the sys_call_table:

(gdb) p (unsigned long*) sys_call_table
$1 = (unsigned long *) 0xffffffff811a4f20 <sys_read>
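And gdb can also read memory at that address directly, e.g. (a sketch; the values shown are whatever the running kernel's syscall table happens to contain):

(gdb) x/4xg &sys_call_table

Presumably gdb resolves such addresses through the file's ELF headers rather than treating /proc/kcore as a flat image.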

Does this mean that the chunk of memory from 0xffff800000000000 to 0xffffffffffffffff is what is in /proc/kcore? And if so, how is this mapped to /proc/kcore? For example, using

dd if=/proc/kcore bs=1 skip=2128982200 count=100 | xxd

shows only zeros (2128982200 is a little before 0xffffffffffffffff-0xffffffff811a4f20)…
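My naive subtraction assumes the kernel range is mapped linearly at the end of the file, which is probably wrong. I suspect the correct translation goes through the ELF program headers: find the LOAD segment whose virtual range covers the address, then compute file_offset = p_offset + (vaddr - p_vaddr). A sketch, with made-up segment values:

# list the LOAD segments (Offset / VirtAddr pairs)
readelf -l /proc/kcore | grep -A1 LOAD

# suppose (hypothetically) a segment reports
#   Offset 0x7fff81001000  VirtAddr 0xffffffff81000000
# then the file offset of 0xffffffff811a4f20 would be:
dd if=/proc/kcore bs=1 \
   skip=$(( 0x7fff81001000 + 0xffffffff811a4f20 - 0xffffffff81000000 )) \
   count=100 | xxd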

Furthermore, I know how to use gcore to dump the memory of a given process for analysis, and I know I can look in /proc/PID/maps to see what a process's memory layout looks like… but I still have no idea how to dump the whole of physical memory, and it's kind of driving me nuts. Please help me avoid going crazy… 😉
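For completeness, the per-process tools I mean (PID 1234 is a placeholder):

# dump one process's memory image (writes /tmp/core.1234)
gcore -o /tmp/core 1234

# show that process's virtual-memory layout
cat /proc/1234/maps

But that is per-process virtual memory, not the physical memory of the whole machine.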

Best Answer

After a lot more searching I think I have convinced myself that there is no simple way to get what I want.

So, what did I end up doing? I installed LiME from GitHub (https://github.com/504ensicsLabs/LiME):

git clone https://github.com/504ensicsLabs/LiME
cd LiME/src
make -C /lib/modules/`uname -r`/build M=$PWD modules

The above commands build the lime.ko kernel module. A full memory dump can then be obtained by running:

insmod ./lime.ko "path=/root/temp/outputDump.bin format=raw dio=0"

which just inserts the kernel module; the quoted string holds the parameters specifying the output file location and format... AND IT WORKED! YAY.
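If it helps anyone else, a quick sanity check afterwards (a sketch; the image size should be close to your installed RAM):

# unload the module once the dump is complete
rmmod lime

# the raw image should be roughly the size of physical RAM
ls -lh /root/temp/outputDump.bin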
