It's the third line (`print $sessionuser`) that causes that error, not the second. `print` is a builtin command to output text in `ksh` and `zsh`, but not in `bash`. In `bash`, you need to use `printf` or `echo` instead.

Also note that in `bash` (contrary to `zsh`, but like `ksh`), you need to quote your variables.

So `zsh`'s:

print $sessionuser

(though I suspect you meant `print -r -- $sessionuser`, if the intent was to write the content of that variable followed by a newline to stdout) would be in `bash`:

printf '%s\n' "$sessionuser"

(which also works in `zsh`/`ksh`).
Some systems also have a `print` executable command in the file system that is used to send something to a printer, and that's the one you're actually calling here. Proof that it is rarely used is that your implementation (same as mine, part of Debian's mime-support package) was never updated after `perl`'s upgrade to work around the fact that `perl` now warns about those improper uses of `{` in regular expressions, and nobody noticed.
`{` is a regexp operator (for things like `x{min,max}`). Here in `%{(.*?)}`, that `(.*?)` is not a `min,max`; still, `perl` is lenient about that and treats those `{` literally instead of failing with a regexp parsing error. It used to be silent about that, but it now reports a warning to tell you that you probably have a problem in your (here `print`'s) code: either you intended to use the `{` operator, but then you have a mistake within it; or you didn't, and then you need to escape those `{`.
BTW, you can simply use:

sessionuser=$(logname)

to get the name of the user that started the login session that script is part of. That uses the `getlogin()` standard POSIX function. On GNU systems, that queries `utmp` and generally only works for tty login sessions (as long as something like `login` or the terminal emulator registers the tty with `utmp`).
Or:

sessionuser=$(id -un)

to get the name of one user that has the same uid as the effective user id of the process running `id` (the same as the one running that script).

It's equivalent to your `ps -p "$$"` approach, because the shell invocation that executes `id` is the same as the one that expands `$$`, and apart from `zsh` (via assignment to the `EUID`/`UID`/`USERNAME` special variables), shells can't change their uids without executing a different command (and of course, of all commands, `id` would not be setuid).
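To make that second option concrete (the label text in `printf` is my addition):

```shell
# name of a user whose uid matches this process's effective uid
sessionuser=$(id -un)
printf 'session user: %s\n' "$sessionuser"
```

Unlike `logname`, this works even outside a tty login session, since it only looks at the effective uid of the calling process.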
Both `id` and `logname` are standard (POSIX) commands (note that on Solaris, for `id` as for many other commands, you'd need to make sure you place yourself in a POSIX environment so that you call the `id` in `/usr/xpg4/bin` and not the ancient one in `/bin`; the only purpose of using `ps` in the answer you linked to is to work around that limitation of `/bin/id` on Solaris).
If you want to know the user that called `sudo`, it's via the `$SUDO_USER` environment variable. That's a username derived by `sudo` from the real user id of the process that executed `sudo`. `sudo` later changes that real user id to that of the target user (`root` by default), so that `$SUDO_USER` variable is the only way to know which it was.
Note that when you do:

sudo ps -fp "$$"

that `$$` is expanded by the shell that invokes `sudo` to the pid of the process that executed that shell, not the pid of `sudo` or `ps`, so it will not give you `root` here.

sudo sh -c 'ps -fp "$$"'

would give you the process that executed that `sh` (running as `root`), which by then is either still running `sh` or possibly `ps`, for `sh` implementations that don't fork an extra process for the last command.

The same goes for a script that does that same `ps -p "$$"` and that you run as `sudo that-script`.
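You can see which shell expands `$$` by comparing quoting styles (the inner `sh` here stands in for the command that `sudo` would start):

```shell
# double quotes: $$ is expanded by the *outer* shell before any child runs
echo "outer shell pid: $$"
# single quotes: $$ reaches the inner sh intact and is expanded there
sh -c 'echo "inner sh pid: $$"'
```

The two pids always differ, because the inner `sh` is a separate child process.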
Note that in any case, neither `bash` nor `sudo` is a POSIX command, and there are many systems where neither is found.
Start a shell to use shell parameter expansion operators:
find ~/tmp -name '*.log' -type f -exec sh -c '
for file do
mv -i -- "$file" "${file%.*}"
done' sh {} +
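The renaming relies on the standard `${file%.*}` parameter expansion, which removes the shortest trailing match of `.*`, i.e. the last dot and everything after it (the file name here is a hypothetical example):

```shell
# ${file%.*} strips only the last dot-suffix
file=archive.tar.log
printf '%s\n' "${file%.*}"   # outputs: archive.tar
```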
Note that you don't want to do that on `/tmp` or any directory writable by others, as that would allow malicious users to make you rename arbitrary `.log` files on the file system¹ (or move files into any directory²).
With some `find` and `mv` implementations, you can use `find -execdir` and `mv -T` to make it safer:
find /tmp -name '*.log' -type f -execdir sh -c '
for file do
mv -Ti -- "$file" "${file%.*}"
done' sh {} +
Or use `rename` (the perl variant), which would just do a `rename()` system call and so not attempt to move files to other filesystems or into directories...
find /tmp -name '*.log' -type f -execdir rename 's/\.log$//' {} +
Or do the whole thing in `perl`:
perl -MFile::Find -le '
find(
sub {
if (/\.log\z/) {
$old = $_;
s/\.log\z//;
rename($old, $_) or warn "rename $old->$_: $!\n"
}
}, @ARGV)' ~/tmp
But note that `perl`'s `File::Find` (contrary to GNU `find`) doesn't do a safe directory traversal³, so that's not something you would like to do on `/tmp` either.
Notes.

¹ An attacker can create a `/tmp/. /auth.log` file and, in between `find` finding it and `mv` moving it (and that window can easily be made arbitrarily large), replace the `/tmp/. ` directory with a symlink to `/var/log`, resulting in `/var/log/auth.log` being renamed to `/var/log/auth`.

² A lot worse, an attacker can create `/tmp/foo.log` as a malicious `crontab` for example, and `/tmp/foo` as a symlink to `/etc/cron.d`, and make you move that crontab into `/etc/cron.d`. That's the ambiguity with `mv` (which applies at least to `cp` and `ln` as well): it can mean both *move to* and *move into*. GNU `mv` fixes it with its `-t` (into) and `-T` (to) options.

³ `File::Find` traverses the directory by doing `chdir("/tmp")`, reading the content, `chdir("foo")`..., `chdir("bar")`, `chdir("../..")`... So someone can create a `/tmp/foo/bar` directory and, at the right moment, rename it to `/tmp/bar` so that the `chdir("../..")` lands you in `/`.
Your quoting problem is coming from trying to solve a problem you don't have. Needing to quote arguments only comes into play when you're dealing with a shell, and if `find` is calling `rsync` directly, there is no shell involved. Using visual output isn't a good way to tell whether it works, because you can't see where each argument begins and ends. Here's what I mean:

Notice that I didn't quote the `{}` in the arg to `stat`.

Now that said, your command is going to be very non-performant, because you're calling `rsync` for every single matching file. There are two ways you can solve this.

As others have indicated, you can pipe the file list to `rsync` on stdin. This will use null bytes as the file-name delimiter, since files can't contain null bytes in their names.

If you're using GNU `find`, you have another method of invoking `-exec`, and that's `-exec {} +`. In this style, `find` will pass more than one argument at a time. However, all the arguments are added to the end of the command, not in the middle. You can address this by passing the arguments through a small shell. This will pass the list of files to the `sh`, which will then substitute them in for the `"$@"`.
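That `-exec {} +` through a small shell can be sketched like this (`src` is a hypothetical source directory, and `printf` stands in for the real `rsync` invocation, whose other arguments would surround `"$@"`):

```shell
# find batches the matching files and hands them to the inline sh,
# which receives them as its positional parameters ("$@")
find src -name '*.txt' -type f -exec sh -c '
  for f do
    printf "would sync: %s\n" "$f"   # real version would run rsync here
  done' sh {} +
```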