Sending kill -9 to a process doesn't require the process's cooperation (such as handling a signal); it just kills it off.
You're presuming that because some signals can be caught and ignored, they all involve cooperation. But as per man 2 signal, "the signals SIGKILL and SIGSTOP cannot be caught or ignored". SIGTERM can be caught, which is why plain kill is not always effective – generally that means something in the process's handler has gone awry.1
If a process doesn't (or can't) define a handler for a given signal, the kernel performs a default action. In the case of SIGTERM and SIGKILL, this is to terminate the process (unless its PID is 1; the kernel will not terminate init)2, meaning its file handles are closed, its memory is returned to the system pool, its parent receives SIGCHLD, its orphaned children are inherited by init, etc., just as if it had called exit (see man 2 exit). The process no longer exists – unless it ends up as a zombie, in which case it is still listed in the kernel's process table along with some information; that happens when its parent does not wait and deal with that information properly. However, zombie processes no longer have any memory allocated to them and hence cannot continue to execute.
Is there something like a global table in memory where Linux keeps references to all resources taken up by a process and when I "kill" a process Linux simply goes through that table and frees the resources one by one?
I think that's accurate enough. Physical memory is tracked by page (one page usually equalling a 4 KB chunk) and those pages are taken from and returned to a global pool. It's a little more complicated in that some freed pages are cached in case the data they contain is needed again (that is, data which was read from a still-existing file).
Manpages talk about "signals" but surely that's just an abstraction.
Sure, all signals are an abstraction. They're conceptual, just like "processes". I'm playing semantics a bit, but if you mean SIGKILL is qualitatively different from SIGTERM, then yes and no. Yes in the sense that it can't be caught, but no in the sense that they are both signals. By analogy, an apple is not an orange, but apples and oranges are, according to a preconceived definition, both fruit. SIGKILL seems more abstract since you can't catch it, but it is still a signal. Here's an example of SIGTERM handling, I'm sure you've seen these before:
#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <string.h>

void sighandler (int signum, siginfo_t *info, void *context) {
    fprintf (
        stderr,
        "Received %d from pid %u, uid %u.\n",
        info->si_signo,
        info->si_pid,
        info->si_uid
    );
}

int main (void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_sigaction = sighandler;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGTERM, &sa, NULL);
    while (1) sleep(10);
    return 0;
}
This process will just sleep forever. You can run it in a terminal and send it SIGTERM with kill. It spits out stuff like:
Received 15 from pid 25331, uid 1066.
1066 is my UID. The PID will be that of the shell from which kill is executed, or the PID of kill itself if you fork it (kill 25309 & echo $!).
Again, there's no point in setting a handler for SIGKILL because it won't work.3 If I kill -9 25309, the process will terminate. But that's still a signal; the kernel has the information about who sent the signal, what kind of signal it is, etc.
1. If you haven't looked at the list of possible signals, see kill -l.
2. Another exception, as Tim Post mentions below, applies to processes in uninterruptible sleep. These can't be woken up until the underlying issue is resolved, and so have ALL signals (including SIGKILL) deferred for the duration. A process can't create that situation on purpose, however.
3. This doesn't mean using kill -9 is a better thing to do in practice. My example handler is a bad one in the sense that it doesn't lead to exit(). The real purpose of a SIGTERM handler is to give the process a chance to do things like clean up temporary files, then exit voluntarily. If you use kill -9, it doesn't get this chance, so only do that if the "exit voluntarily" part seems to have failed.
I already answered a similar question a few months ago. So see that first for technical details. Here, I shall just show you how your situation is covered by that answer.
As I explained, I and other writers of various dæmon supervision utilities take advantage of how Linux now works, and what you are seeing is that very thing in action, almost exactly as I laid it out.
The only missing piece of information is that init --user is your session instance of upstart. It is started when you first log in to a session and stopped when you log out. It's there for you to have per-session jobs (similar, but not identical, to MacOS 10's user agents under launchd) of your own.
A couple of years ago, the Ubuntu people went about converting graphical desktop systems to employ upstart per-session jobs. Your GNOME Terminal is being started as a per-session job, and any orphaned children are inherited by the nearest sub-reaper, which is of course your per-session instance of upstart.
The systemd people have been, in recent months, working on the exact same thing, setting up GNOME Terminal to run individual tabs as separate systemd services, from one's per-user instance of systemd. (You can tell that your question is about upstart, not systemd, because on a systemd system the sub-reaper process would be systemd --user.)
How can I execute a new process from GNOME Terminal so that the child process's parent PID becomes 1 and not the PID of the ubuntu session init process?
This is intentionally hard. Service managers want to keep track of orphaned child processes. They want not to lose them to process #1. So the quick précis is: Stop trying to do that.
If you are asking solely because you think that your process ought to have a parent process ID of 1, then wean yourself off this idea.
If you erroneously think that this is an aspect of being a dæmon, then note that dæmons having parent process IDs of 1 has not been guaranteed (and on some Unices, not true across the whole system) since the advent of things like IBM's System Resource Controller and Bernstein's daemontools in the 1990s. In any case, one doesn't get to be a dæmon by double-forking within a login session; that idea has long been known to be half-baked.
If you erroneously think that this is a truism for orphaned child processes, then read my previous answer again. The absolutism that orphaned children are re-parented to process #1 is wrong, and has been wrong for over three years, at the time of writing this.
If you have a child process that for some bizarre reason truly needs this, then find out what that bizarre reason is and get it fixed. It's probably a bug, or someone making invalid design assumptions. Whatever the reason, the world of dæmon management changed in the 1990s, and Linux also changed some several years ago. It is time to catch up.
If some-boring-process is running in your current bash session:

1. ctrl-z to give you the bash prompt
2. bg to put it in the background
3. find its job number with the jobs command
4. disown -h %1 (substitute the actual job number there).

That doesn't do anything to redirect the output -- you have to think of that when you launch your boring process. [Edit] There seems to be a way to redirect it: https://gist.github.com/782263
But seriously, look into screen. I have shells on a remote server that have been running for months.