If you want the archive to extract into its own directory (which is generally better, since archives that extract straight into the current directory can make a mess), just create the directory, then move or copy the content tree into it, so you have, as in your second example:
mycustomfolder/file1
mycustomfolder/folder2/hello
mycustomfolder/folder2/world
mycustomfolder/file3
Then tar -cvf myarchive.tar mycustomfolder. To extract, tar -xvf myarchive.tar.
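To double-check the layout, tar -t lists the stored member names; with the tree above you should see something like:
tar -tf myarchive.tar
mycustomfolder/
mycustomfolder/file1
mycustomfolder/folder2/
mycustomfolder/folder2/hello
mycustomfolder/folder2/world
mycustomfolder/file3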
If you don't want to create the directory first, you can transform the file names and prepend a directory prefix:
tar --xform="s%^%mycustomfolder/%" -cvf myarchive.tar file1 folder2 file3
The transformation (see man tar) uses sed syntax; I used % instead of / as the delimiter because s/^/mycustomfolder\// creates a folder named mycustomfolder\ (odd behavior, IMO), while s/^/mycustomfolder// is (properly) rejected as an "Invalid transform expression".
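Any delimiter that sed accepts works in the expression, so a comma (for instance) avoids the slash-escaping problem entirely:
tar --xform='s,^,mycustomfolder/,' -cvf myarchive.tar file1 folder2 file3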
You don't need any of the GNUisms here (and you probably want a -mindepth 1 to exclude .; the standard ! -name . -prune used below emulates both -mindepth 1 and -maxdepth 1), and you don't need to run one chmod per file:
find . ! -name . -prune ! -type l -size +100c -size -1000c -print \
-exec chmod a+r {} + >testfile
(I've also added a ! -type l because -size would check the size of the symlink while chmod would change the permissions of the target of the symlink, so it doesn't make sense to consider symlinks. Chances are you'd want to go further and only consider regular files (-type f), as shown below.)
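For instance, the regular-files-only variant just swaps ! -type l for -type f:
find . ! -name . -prune -type f -size +100c -size -1000c -print \
-exec chmod a+r {} + >testfile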
That works here because chmod doesn't output anything on its stdout (which would otherwise end up in testfile). More generally, to avoid that, you'd need to do:
find . ! -name . -prune ! -type l -size +100c -size -1000c -print -exec sh -c '
exec cmd-that-may-write-to-stdout "$@" >&3 3>&-' sh {} + 3>&1 > testfile
So that find's stdout goes to testfile but cmd-that-may-write-to-stdout's stdout goes to the original stdout from before the redirection (as saved with 3>&1 above).
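As a concrete instance, GNU chmod has a -v option that reports each change on its stdout; plugging it in for cmd-that-may-write-to-stdout (a sketch assuming GNU chmod) gives:
find . ! -name . -prune ! -type l -size +100c -size -1000c -print -exec sh -c '
exec chmod -v a+r "$@" >&3 3>&-' sh {} + 3>&1 > testfile
The file list lands in testfile while chmod's verbose messages still go to the terminal.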
Note that in your:
find . -maxdepth 1 -size +100c -size -1000c -exec chmod a+r {} \; -print > testfile
testfile would contain only the files for which chmod has succeeded (the -print being after -exec means -exec is another condition for that -print, and -exec succeeds if the executed command returns with a zero exit status).
If you wanted to use xargs (here using GNU syntax), you could use tee and process substitution:
find . ! -name . -prune ! -type l -size +100c -size -1000c -print0 |
tee >(tr '\0' '\n' > testfile) |
xargs -r0 chmod a+r
to save the output of find, with NULs turned into newlines, into testfile. Note however that that tr command is running in the background. Your shell will wait for xargs (at least, most shells will also wait for tee and find), but not for tr. So there's a small chance that tr has not finished writing the data to testfile by the time the shell runs the next command. If it's more important that testfile be fully written by then than that all the permissions be modified, you may want to swap the xargs and tr commands above, as shown below.
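That swapped version would look like this (same pipeline, but the chmod now runs in the process substitution while the shell waits for tr):
find . ! -name . -prune ! -type l -size +100c -size -1000c -print0 |
tee >(xargs -r0 chmod a+r) |
tr '\0' '\n' > testfile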
Another option is to wrap the whole code above in:
(<that-code>) 3>&1 | cat
That way, the shell will wait for cat, and that cat will only exit when all the processes that have that file descriptor 3 open on the writing end of the pipe it reads from (which includes tr, find, tee and xargs) have exited.
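Spelled out with the pipeline above, that wrapper would look like:
(
find . ! -name . -prune ! -type l -size +100c -size -1000c -print0 |
tee >(tr '\0' '\n' > testfile) |
xargs -r0 chmod a+r
) 3>&1 | cat
Every process in the subshell inherits fd 3 open on the pipe, so cat only sees end-of-file once they have all exited.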
Another option is to use zsh globs here:
files=(./*(L+100L-1000^@))
chmod a+r $files
print -rl $files > testfile
Though you could run into a "too many arguments" error if the list of files is very big. find -exec + and xargs work around that by running several chmod commands if needed. You can use zargs in zsh for that, as sketched below.
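A sketch of that zargs variant (zargs ships with zsh but needs to be autoloaded; it splits the list over several chmod invocations when needed):
autoload -Uz zargs
files=(./*(L+100L-1000^@))
zargs -- $files -- chmod a+r
print -rl -- $files > testfile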
Best Answer
ls -t | head -n 3 | xargs tar -cf t.tar
Works for me. Is there a reason you need the -I flag set?
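Note that xargs splits its input on blanks and newlines by default, so this breaks on file names containing spaces. If you have GNU xargs, splitting on newlines only is a bit more robust (though still not safe for names that contain newlines themselves):
ls -t | head -n 3 | xargs -d '\n' tar -cf t.tar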