Different environment
Cron passes a minimal set of environment variables to your jobs. To see the difference, add a dummy job like this:

* * * * * env > /tmp/env.output

Wait for /tmp/env.output to be created, then remove the job again. Now compare the contents of /tmp/env.output with the output of env run in your regular terminal.
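If you don't want to wait for cron, you can approximate its sparse environment directly from your terminal with env -i. The variable values below are illustrative; what your cron actually sets differs slightly between systems:

```shell
# Start from an empty environment and add roughly what cron provides
# (HOME, LOGNAME, SHELL and a short PATH -- the values here are examples).
env -i HOME=/root LOGNAME=root SHELL=/bin/sh PATH=/usr/bin:/bin sh -c 'env' | sort
```

Compare that short list with plain env in your terminal to see everything your interactive shell adds on top.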
A common "gotcha" here is the PATH environment variable being different. Maybe your cron script uses the command somecommand, found in /opt/someApp/bin, which you've added to PATH in /etc/environment? cron ignores PATH from that file, so running somecommand from your script will fail under cron, but work when run in a terminal. It's worth noting that variables from /etc/environment will be passed on to cron jobs, just not the variables cron specifically sets itself, such as PATH.
To get around that, just set your own PATH
variable at the top of the script. E.g.
#!/bin/bash
PATH=/opt/someApp/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# rest of script follows
Some prefer to just use absolute paths to all the commands instead. I recommend against that. Consider what happens if you want to run your script on a different system, and on that system the command is in /opt/someAppv2.2/bin instead. You'd have to go through the whole script replacing /opt/someApp/bin with /opt/someAppv2.2/bin, instead of just doing a small edit on the first line of the script.
You can also set the PATH variable in the crontab file, which will apply to all cron jobs. E.g.
PATH=/opt/someApp/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
15 1 * * * backupscript --incremental /home /root
Sometimes running a process from root's crontab may cause issues with initial file ownership and rwx mode; those may not be correctly preserved.
In any case:
1) to create a new user, keep it simple:
$ sudo deluser my-user # if "my-user" is a regular user
$ sudo adduser my-user
$ sudo gpasswd -a my-user sudo
2) to include a new entry with a NOPASSWD tag in sudoers or in a file (e.g. /etc/sudoers.d/60_my-user_rules), make the colon stick to the tag, i.e. NOPASSWD: with no space in between. Your rule becomes:
my-user my-host = NOPASSWD: /full/path/to/cmd [parameter1 [| parameter2 [| ...]]]
Adding (ALL) before the NOPASSWD: is optional, as the rule defaults to (ALL:ALL) anyway. You may however want to not only run your cmd/script with root privilege but also run it as either a given user (spec-user) or as a member of a given group (spec-group) or both. In that case, the rule becomes:
my-user my-host = ([spec-user][:spec-group]) NOPASSWD: /full/path/to/cmd [parameter1 [| parameter2 [| ...]]]
This will actually restrict your passwordless sudo disposition to one user, one host and one command. You can harden this rule by specifying the optional parameter(s) to that command. In that case the rule will apply only for that/those exact parameter(s).
For scripts, you could further harden this rule by ensuring that the rule applies only if the script was not modified in any way. This is a way to avoid script hijacking. This is done through cmd-aliasing and specifying SHA sums in /etc/sudoers.d/60_my-user_rules.
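As a sketch, such a digest-pinned rule might look like this in /etc/sudoers.d/60_my-user_rules; the digest value below is a placeholder, to be regenerated against your actual script:

```
# Placeholder digest -- regenerate with: sha224sum /full/path/to/cmd
Cmnd_Alias PINNED_CMD = sha224:0123456789abcdef0123456789abcdef0123456789abcdef01234567 /full/path/to/cmd
my-user my-host = NOPASSWD: PINNED_CMD
```

With the digest in place, sudo refuses to run the command if its contents no longer match the recorded SHA sum.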
HTH. Please report if you experience issues with that answer.
Best Answer
The modern option is to use a systemd timer unit. This requires creating a systemd unit which defines the job you want to periodically run, and a systemd.timer unit defining the schedule for the job.

Assuming you want to run the job as a regular user, put these files in $HOME/.config/systemd/user:

my-job.service

my-job.timer
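A minimal sketch of what these two files might contain, assuming a one-shot daily job; the ExecStart command and the OnCalendar schedule are placeholders to adapt:

```
# my-job.service
[Unit]
Description=Run my periodic job

[Service]
Type=oneshot
ExecStart=/full/path/to/cmd
```

```
# my-job.timer
[Unit]
Description=Schedule my periodic job

[Timer]
# Placeholder schedule: every day at 01:15 (see man systemd.time)
OnCalendar=*-*-* 01:15:00
Persistent=true

[Install]
WantedBy=timers.target
```

Persistent=true makes the job catch up after downtime, roughly what anacron does for cron jobs.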
Then enable the newly created units, and start the timer:
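Assuming the unit names above, for user-level units that would typically be:

```shell
# Pick up the new unit files, then enable and start the timer
systemctl --user daemon-reload
systemctl --user enable --now my-job.timer
```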
To verify that the timer is set:
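A usual check is listing the user's timers:

```shell
# Shows last and next trigger times for the timer
systemctl --user list-timers my-job.timer
```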
journalctl -xe should show log entries of the job being run.

Refer to man systemd.timer for the many options for configuring timer behaviour (including randomised starting, waking the computer, persistence across downtime, timer accuracy, etc.), and to man systemd.unit for excellent documentation on systemd and systemd units in general.