How to quote special characters (portably)
The following snippet adds a backslash before each character that's special in extended regular expressions, using sed to replace any occurrence of one of the characters ][()\.^$?*+ with a backslash followed by that character:
raw_string='test[string]\.wibble'
quoted_string=$(printf %s "$raw_string" | sed 's/[][()\.^$?*+]/\\&/g')
The command substitution strips trailing newlines from $raw_string; if that's a problem, ensure that the string doesn't end with a newline by appending an inert character, then strip that character off at the end:
quoted_string=$(printf %sa "$raw_string" | sed 's/[][()\.^$?*+]/\\&/g')
quoted_string=${quoted_string%?}
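For instance (the sample string and input lines below are made up for illustration), the quoted result can be used as a literal-match pattern with grep -E:

```shell
# Quote a string containing ERE metacharacters, then use it as a
# literal pattern. The string and input lines are illustrative.
raw_string='test[string].wibble'
quoted_string=$(printf %s "$raw_string" | sed 's/[][()\.^$?*+]/\\&/g')
printf '%s\n' "$quoted_string"        # test\[string\]\.wibble

# The quoted pattern now matches only the literal string:
printf '%s\n' 'test[string].wibble' 'testXstringY+wibble' |
  grep -E "^$quoted_string\$"
```

Without the quoting step, the brackets and dot would be treated as regex operators and the pattern would also match the second line.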
How to quote special characters (in bash or zsh)
Bash and zsh have a pattern replacement feature, which can be faster if the string is not very long. It's cumbersome here because the replacement must be a string, so each character needs to be replaced separately. Note that you must escape the backslashes first.
quoted_string=${raw_string//\\/\\\\}
for c in \[ \] \( \) \. \^ \$ \? \* \+; do
quoted_string=${quoted_string//"$c"/"\\$c"}
done
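A quick sanity check of the loop in bash (the sample string is made up; note that the backslash is doubled before the loop runs):

```shell
# Illustrative input: contains a backslash, a dot and a star.
raw_string='a\b.c*d'
quoted_string=${raw_string//\\/\\\\}          # backslashes first
for c in \[ \] \( \) \. \^ \$ \? \* \+; do
  quoted_string=${quoted_string//"$c"/"\\$c"}
done
printf '%s\n' "$quoted_string"                # a\\b\.c\*d
```

If the backslashes were not doubled first, the loop would add backslashes that the earlier replacement would then double again, producing over-quoted output.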
How to quote special characters (in ksh93)
Ksh93's string replacement construct is more powerful than the watered-down versions in bash and zsh: it supports backreferences to groups in the pattern.
quoted_string=${raw_string//@([][()\.^$?*+])/\\\1}
What you actually want
You don't need find here: shell patterns are sufficient to match files ending with three digits. If no part file exists, the glob pattern is left unexpanded. There's also a simpler way of adding up the file sizes: rather than use stat (which exists on many unix variants but has a different syntax on each) and build a complex pipeline to sum the values, you can call wc -c (on regular files, on most systems, wc looks at the file size rather than opening the file and reading the bytes).
set -- "$DESTINATION/$FILE_BASENAME".[0-9][0-9][0-9]
case $1 in
  *\]) # The glob was left intact, so no part exists
    do_split …;;
  *) # The glob was expanded, so at least one part exists
    FILE_SIZE_EXISTING=$(wc -c "$@" | sed -n '$s/[^0-9]//gp')
    if [ "$FILE_SIZE_EXISTING" -ne "$(wc -c <"$DESTINATION/$FILE_BASENAME")" ]; then
      do_split …
    fi;;
esac
Note that your test on the total size is not very reliable: if the file has changed but its size stayed the same, you'll end up with stale parts. That's OK if the files never change and the only risk is that parts may be truncated or missing.
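The glob-left-intact trick can be tried on its own; the directory and base name below are made up for the demo:

```shell
dir=$(mktemp -d)

# No part file yet: the glob doesn't match, so $1 is the pattern
# itself, which ends in ]
set -- "$dir"/archive.[0-9][0-9][0-9]
case $1 in
  *\]) echo "no parts yet";;
  *)   echo "parts exist";;
esac

# Create one part: the glob now expands to real file names
touch "$dir/archive.000"
set -- "$dir"/archive.[0-9][0-9][0-9]
case $1 in
  *\]) echo "no parts yet";;
  *)   echo "parts exist";;
esac

rm -rf "$dir"
```

This relies on the shell's default behavior of leaving an unmatched glob unexpanded (i.e. nullglob and failglob are off, which is the case in a plain POSIX sh).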
I end up with something like this:
ssid=$(iwlist wlan0 scanning |
awk -F: '
BEGIN{ printf "zenity --list --text \"Available Networks\" --list --column ESSID --column Secure --column Signal "; }
/Quality/{ split($0,x,"="); Quality = int(x[2]*100/70+.5); }
/Encryption/{ Encryption = $2; }
/ESSID/{ ESSID = $2;
printf "%s \"%s\" \"%s%%\" ", ESSID, Encryption, Quality
}' |
sh)
Doesn't really use grep, but it does what you want.
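To see what the pipeline generates without wireless hardware, you can feed a small, made-up sample of iwlist scan output through the same awk program and print the resulting command instead of piping it to sh (the sample lines below are illustrative, not real scan output):

```shell
# Made-up sample of iwlist output; real output has more fields.
sample='          Quality=63/70  Signal level=-47 dBm
          Encryption key:on
          ESSID:"HomeNet"'

printf '%s\n' "$sample" |
awk -F: '
BEGIN{ printf "zenity --list --text \"Available Networks\" --list --column ESSID --column Secure --column Signal "; }
/Quality/{ split($0,x,"="); Quality = int(x[2]*100/70+.5); }
/Encryption/{ Encryption = $2; }
/ESSID/{ ESSID = $2;
  printf "%s \"%s\" \"%s%%\" ", ESSID, Encryption, Quality
}'
```

For this sample the generated command ends with the row `"HomeNet" "on" "90%"`: awk's numeric conversion of `63/70  Signal level` stops at the first non-numeric character, giving 63, which is rescaled to a percentage.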
Best Answer
You can use - as the "file" to search, which makes grep use standard input as the "haystack" in which to look for matching "needles". Use Ctrl-D to send EOF and end the stream.
I don't believe, though, that you can do the same to supply standard input to the -f switch, which reads a list of patterns from a file. However, if you have a lot of patterns to test against one corpus, you can run grep -f needle-patterns on that corpus, where needle-patterns is a plaintext file with one regular expression per line.
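A non-interactive version of both forms (the file names and patterns below are made up):

```shell
# Search standard input by passing - as the file operand.
printf '%s\n' alpha beta gamma | grep 'a.pha' -

# Several patterns at once: one regular expression per line in a
# file, passed with -f.
dir=$(mktemp -d)
printf '%s\n' 'alpha' 'gam' > "$dir/needle-patterns"
printf '%s\n' alpha beta gamma | grep -f "$dir/needle-patterns"
rm -rf "$dir"
```

The first grep prints only `alpha`; the second prints every input line matched by any pattern in the file, here `alpha` and `gamma`.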