There are some details about these options over on the LTTng Project site. RCU stands for read-copy-update: a synchronization mechanism in the kernel that lets multiple versions of a data structure coexist, so readers can access the current copy without locking while an updater publishes a new version; the old copy is reclaimed only after a grace period, once no reader can still hold a reference to it.
excerpt
liburcu is a LGPLv2.1 userspace RCU (read-copy-update) library. This
data synchronization library provides read-side access which scales
linearly with the number of cores. It does so by allowing multiple
copies of a given data structure to live at the same time, and by
monitoring the data structure accesses to detect grace periods after
which memory reclamation is possible.
So what are these options?
This option sets hooks on kernel / userspace boundaries and puts RCU
in extended quiescent state when the CPU runs in userspace. It means
that when a CPU runs in userspace, it is excluded from the global RCU
state machine and thus doesn't try to keep the timer tick on for RCU.
Unless you want to hack and help the development of the full dynticks
mode, you shouldn't enable this option. It also adds unnecessary
overhead.
If unsure say N
This option controls the fanout of hierarchical implementations of
RCU, allowing RCU to work efficiently on machines with large numbers
of CPUs. This value must be at least the fourth root of NR_CPUS, which
allows NR_CPUS to be insanely large. The default value of RCU_FANOUT
should be used for production systems, but if you are stress-testing
the RCU implementation itself, small RCU_FANOUT values allow you to
test large-system code paths on small(er) systems.
Select a specific number if testing RCU itself. Take the default if
unsure.
This option forces use of the exact RCU_FANOUT value specified,
regardless of imbalances in the hierarchy. This is useful for testing
RCU itself, and might one day be useful on systems with strong NUMA
behavior.
Without RCU_FANOUT_EXACT, the code will balance the hierarchy.
Say N if unsure.
This option permits CPUs to enter dynticks-idle state even if they
have RCU callbacks queued, and prevents RCU from waking these CPUs up
more than roughly once every four jiffies (by default, you can adjust
this using the rcutree.rcu_idle_gp_delay parameter), thus improving
energy efficiency. On the other hand, this option increases the
duration of RCU grace periods, for example, slowing down
synchronize_rcu().
Say Y if energy efficiency is critically important, and you don't care
about increased grace-period durations.
Say N if you are unsure.
Use this option to reduce OS jitter for aggressive HPC or real-time
workloads. It can also be used to offload RCU callback invocation to
energy-efficient CPUs in battery-powered asymmetric multiprocessors.
This option offloads callback invocation from the set of CPUs
specified at boot time by the rcu_nocbs parameter. For each such CPU,
a kthread ("rcuox/N") will be created to invoke callbacks, where the
"N" is the CPU being offloaded, and where the "x" is "b" for RCU-bh,
"p" for RCU-preempt, and "s" for RCU-sched. Nothing prevents this
kthread from running on the specified CPUs, but (1) the kthreads may
be preempted between each callback, and (2) affinity or cgroups can be
used to force the kthreads to run on whatever set of CPUs is desired.
Say Y here if you want to help to debug reduced OS jitter. Say N here
if you are unsure.
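Pulling the options above together, they show up in a kernel configuration roughly like this. The values are illustrative examples, not recommendations, and the `rcu_nocbs` and `rcutree.rcu_idle_gp_delay` settings go on the kernel command line, not in `.config`:

```
# Illustrative .config fragment for the RCU options discussed above
CONFIG_RCU_FANOUT=64
# CONFIG_RCU_FANOUT_EXACT is not set
CONFIG_RCU_FAST_NO_HZ=y
CONFIG_RCU_NOCB_CPU=y

# Matching boot parameters (kernel command line), for example:
#   rcu_nocbs=1-7 rcutree.rcu_idle_gp_delay=4
```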
So do you need it?
I would say that if you don't know what a particular option does when compiling the kernel, it's probably a safe bet that you can live without it. So I'd say no to those questions.
Also, when doing this type of work, I usually get the config file for the kernel my distro ships and compare it against my own to see if I'm missing any features. This is probably your best resource for learning what all the features are about.
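That comparison can be done with standard tools. Here is a minimal sketch; the two config files are fabricated stand-ins for your distro's /boot/config-$(uname -r) and the .config in your kernel source tree (the process substitution syntax requires bash):

```shell
# Fabricate two tiny example configs; in practice, use your distro's
# /boot/config-$(uname -r) and the .config in your kernel source tree.
printf 'CONFIG_A=y\nCONFIG_B=m\nCONFIG_C=y\n' > distro-config
printf 'CONFIG_A=y\nCONFIG_C=y\n' > my-config

# List the options the distro enables that your config lacks
comm -13 <(sort my-config) <(sort distro-config)
```

The kernel tree also ships scripts/diffconfig, which prints a friendlier added/removed/changed summary between two config files.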
For example in Fedora there are sample configs included that you can refer to. Take a look at this page for more details: Building a custom kernel.
The make localmodconfig command is still the right tool for the job. In fact, make localmodconfig runs scripts/kconfig/streamline_config.pl.
File input
When reading the streamline_config.pl (Perl) source code, there is an undocumented feature, my $lsmod_file = $ENV{'LSMOD'};, that allows file input for loaded-module detection instead of the output of the lsmod command.
Live CD
Because localmodconfig uses the output of lsmod to detect the loaded modules, we run an Ubuntu Live CD on each of the different hardware setups, open a terminal (Ctrl+Alt+T), run lsmod and save its output.
Concatenate output
By concatenating the lsmod output files while stripping the repeated header lines, you can quickly create an input file that covers all your required kernel modules. We like to review the module list by hand, though, and use a more manual recipe:
1. $ cd linux-3.11.0/
   (or go to the directory where you will run your make command)
2. $ lsmod > lsmod.txt
   (creates a text file with your loaded modules)
3. $ nano lsmod.txt
   (opens the nano text editor; of course you can use your favorite editor instead)
4. Append your desired modules that are not already there to the bottom of this file (see the bottom of this answer for an example), and save it when you are ready.
   Note: use spaces, not tabs, to match the column positions.
5. $ make LSMOD="lsmod.txt" localmodconfig
   (tells localmodconfig to use your lsmod.txt file as input for loaded-module detection)
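For the quicker concatenation approach mentioned earlier, a sketch looks like this. The file names are made up; adjust the glob to however you saved each machine's output:

```shell
# Fabricate two saved lsmod outputs (normally collected on each machine);
# real files start with the "Module Size Used by" header line.
printf 'Module Size Used by\ntg3 152066 0\n' > lsmod-machine1.txt
printf 'Module Size Used by\nr8169 61434 0\ntg3 152066 0\n' > lsmod-machine2.txt

# Keep a single header line, then merge the module lines without duplicates
head -n 1 lsmod-machine1.txt > lsmod.txt
grep -h -v '^Module' lsmod-*.txt | sort -u >> lsmod.txt
cat lsmod.txt
```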
Thanks to Steven Rostedt, the author of streamline_config.pl, for suggesting a shorter notation in step 5.
Example for what to append and not append to lsmod.txt (step 4):
Because the Intel D33217CK main board has Intel thermal sensors that we would like to read, we append these lines:
x86_pkg_temp_thermal 13810 0
intel_powerclamp 14239 0
But we don't want to run virtual machines on this hardware, that is why we skip these lines:
kvm_intel 128218 0
kvm 364766 1 kvm_intel
It has an Apple (Broadcom) Gigabit Ethernet adapter connected to its Thunderbolt port, so we append:
tg3 152066 0
ptp 18156 1 tg3
pps_core 18546 1 ptp
We think we don't need volume mirroring, and therefore do not add:
dm_mirror 21715 0
dm_region_hash 15984 1 dm_mirror
dm_log 18072 2 dm_region_hash,dm_mirror
And we also don't need graphics output (text will do on a headless server), so we do not include:
i915 589697 3
i2c_algo_bit 13197 1 i915
drm_kms_helper 46867 1 i915
drm 242354 4 i915,drm_kms_helper
For another machine we additionally need this Realtek Ethernet driver:
r8169 61434 0
mii 13654 1 r8169
Best Answer
bc is used during the kernel build to generate time constants in header files. You can see it invoked in Kbuild, where it processes kernel/time/timeconst.bc to generate timeconst.h.

This could be implemented as a C program which is built and run during the build, but it's easier to use bc (which is small and common; in fact it's part of the set of tools which are mandatory on a POSIX system, although the kernel does expect GNU bc).

bc is used here instead of Perl. The commit message suggests that bc was used previously, but I can't find a trace of that; Perl had been used since 2008 (much to some people's chagrin, and a patch set to remove the Perl dependency was never merged).