The term "filesystem" refers both to a tree structure of directories and files and to an overarching structure that can be placed on a physical medium such as a hard drive or similar device.
At the end of the day, both are layers of abstraction that people create so that storage is standardized.
The directories-and-files analogy mimics how people think about the physical world: storing items (files) inside of something (folders).
The same is true of a filesystem such as ext4 or FAT32. Here it might not be so obvious, but the structure this type of filesystem provides serves the same purpose, just at a lower level.
For example, a raw disk is just a sequence of bits. By creating a structure on top of it using inodes, we're able to access sections of the disk in an organized, methodical fashion.
Notice the image of the inode structure (from the Wikipedia article titled "Inode pointer structure"), which shows the structure a filesystem uses to represent files and directories.
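To make that pointer structure concrete, here is a minimal C sketch of an inode-like record with direct and indirect block pointers. The field names, the 12-pointer count, and the 4 KiB block size are illustrative assumptions in the spirit of the classic UNIX inode, not the actual on-disk layout of ext4:

/* Illustrative inode-like record: NOT a real on-disk format. */
#include <stdint.h>
#include <stdio.h>

#define NDIRECT     12    /* direct block pointers (assumed count) */
#define BLOCK_SIZE  4096  /* bytes per block (assumed size) */

struct inode {
    uint16_t mode;              /* file type and permissions */
    uint32_t size;              /* file size in bytes */
    uint32_t direct[NDIRECT];   /* block numbers of the first blocks */
    uint32_t single_indirect;   /* block holding further block numbers */
    uint32_t double_indirect;   /* block of blocks of block numbers */
};

int main(void) {
    /* With 4 KiB blocks and 4-byte block numbers, one indirect
     * block addresses 1024 more blocks. */
    unsigned long long ptrs = BLOCK_SIZE / sizeof(uint32_t);

    printf("inode record size:     %zu bytes\n", sizeof(struct inode));
    printf("direct pointers reach: %llu bytes\n",
           (unsigned long long)NDIRECT * BLOCK_SIZE);
    printf("single indirect adds:  %llu bytes\n", ptrs * BLOCK_SIZE);
    printf("double indirect adds:  %llu bytes\n", ptrs * ptrs * BLOCK_SIZE);
    return 0;
}

With these assumed sizes, the twelve direct pointers cover only 48 KiB, a single indirect block adds 4 MiB, and a double indirect block adds 4 GiB; that is why small files are cheap to reach and large files need extra levels of indirection.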
One thing you'll notice as you continue to study computer architectures is that the same concepts are used over and over again. Hostnames, for example, are nested too:
        .---> . <---.
        |           |
    .------.     .------.
    | .com |     | .net |
    '------'     '------'
        ^            ^
        |            |
  .--------. .---------------.
  | google | | stackexchange |
  '--------' '---------------'
      ^              ^
      |              |
   .-----.       .------.
   | www |       | unix |
   '-----'       '------'
Reading each chain from leaf to root gives the fully qualified names www.google.com and unix.stackexchange.com. Or, in programming, class inheritance (Ruby):
class Mammal
  def breathe
    puts "inhale and exhale"
  end
end

class Cat < Mammal   # Cat inherits everything Mammal defines
  def speak
    puts "Meow"
  end
end

jake = Cat.new
jake.breathe   # prints "inhale and exhale" (inherited from Mammal)
jake.speak     # prints "Meow"
HardwareCorrupted
shows the amount of memory in "poisoned pages", i.e. memory which has failed (as flagged typically by ECC). ECC stands for "Error-Correcting Code". ECC memory is capable of correcting small errors and detecting larger ones; on typical PCs with non-ECC memory, memory errors go undetected. If an uncorrectable error is detected using ECC (in memory or cache, depending on the system's hardware support), then the Linux kernel marks the corresponding page as poisoned.
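If you want to check this field programmatically rather than eyeballing /proc/meminfo, a small sketch like the following works. Note that the HardwareCorrupted line only appears on kernels built with memory-failure support, so finding nothing is normal on some systems:

/* Print the HardwareCorrupted line from /proc/meminfo, if present. */
#include <stdio.h>
#include <string.h>

int main(void) {
    FILE *f = fopen("/proc/meminfo", "r");
    char line[256];

    if (!f) {
        perror("fopen /proc/meminfo");
        return 1;
    }
    while (fgets(line, sizeof line, f)) {
        /* Lines look like "HardwareCorrupted:     0 kB" */
        if (strncmp(line, "HardwareCorrupted:", 18) == 0)
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}

On a healthy machine this should print 0 kB; a nonzero value means the kernel has taken failed pages out of service.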
DirectMap
is shown on x86, Book3s PowerPC, and S/390, and gives an indication of TLB load, not memory use: it counts the number of pages mapped using the various page sizes each platform supports (corresponding to different page table levels): 4KiB, 64KiB, 1MiB, 2MiB, 4MiB, 1GiB, or 2GiB pages.

The TLB, or "Translation Lookaside Buffer", is a cache used to store mappings between virtual addresses (as seen by software running on your computer) and physical pages in memory (as seen by the hardware). The calculations and memory fetches involved in going from virtual to physical addresses are expensive, so caches are used to avoid needing them too often. But the TLB is small, so accessing a wide variety of different addresses (too many to stay in the cache) incurs a performance penalty. This penalty can be reduced by using larger pages: on the x86 architecture the traditional page size is 4KiB, but larger pages of 2MiB, 4MiB or 1GiB can be used when possible.
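To see larger pages in action, here is a minimal sketch of requesting a 2 MiB huge page on Linux/x86-64 via mmap(MAP_HUGETLB); one TLB entry then covers 2 MiB instead of 4 KiB. It assumes huge pages have been reserved beforehand (e.g. via /proc/sys/vm/nr_hugepages), otherwise the mmap call fails:

/* Request one 2 MiB huge page; assumes huge pages are reserved. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define HUGE_2MB (2UL * 1024 * 1024)

int main(void) {
    void *p = mmap(NULL, HUGE_2MB, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap(MAP_HUGETLB)");  /* likely no huge pages reserved */
        return 1;
    }
    memset(p, 0, HUGE_2MB);  /* touch it: one TLB entry covers 2 MiB */
    printf("got a 2 MiB huge page at %p\n", p);
    munmap(p, HUGE_2MB);
    return 0;
}

After running this, the DirectMap and HugePages counters in /proc/meminfo reflect how the kernel is carving up physical memory into the different page sizes.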
For more detail you can look up the Wikipedia links I've included, and follow the references from there.
Best Answer
The Linux kernel has a built-in process accounting facility. It allows system administrators to collect detailed information in a log file each time a program is executed on a Linux system. The administrator can then analyze the data in these log files and draw conclusions, for example which commands each user executed and how much CPU time they consumed. To shed more light on this, consider the following:
Turning on process accounting requires significant disk space. For example, on a Pentium III system with Red Hat 7.2, each time a program is executed, 64 bytes of data are written to the process accounting log file.
Process accounting commands are as follows:

accton - turns process accounting on or off
lastcomm - prints information about previously executed commands
sa - summarizes accounting information
ac - prints statistics about users' connect time
More information about installation and utilization of process accounting can be found in this Linux Journal article.
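Under the hood, accton is a thin wrapper around the acct(2) system call, which switches BSD process accounting on (or off when passed NULL). A minimal sketch follows; it requires root (CAP_SYS_PACCT), a kernel built with CONFIG_BSD_PROCESS_ACCT, and the log file path here is just an example and must already exist:

/* Enable BSD process accounting, roughly what accton does. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Example path; acct(2) does not create the file, so it must
     * already exist (e.g. created with touch beforehand). */
    const char *logfile = "/var/log/account/pacct";

    if (acct(logfile) == -1) {
        /* e.g. EPERM without root, ENOENT if the file is missing,
         * ENOSYS if accounting is not compiled into the kernel */
        perror("acct");
        return 1;
    }
    printf("process accounting enabled; records appended to %s\n", logfile);
    /* acct(NULL) would turn accounting off again. */
    return 0;
}

Each time a process exits, the kernel appends one fixed-size record to the log file, which is what tools like lastcomm and sa then parse.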