Bash – Replacing shell script while running it

Tags: bash, buffer, linux, shell

I have a board that runs a "patch" script. The patch script always runs in the background, and it is a shell script along the lines of this pseudocode:

while true; do
    # check whether a patch tar file exists; if so, apply the patch
    sleep 10
done

This script lives at /opt/patch.sh and is started by a System V init script.

The problem is that when the script finds the tar, it extracts it, and inside there is a shell script called patch.sh which is specific to the contents of the tar.

When the script at /opt/patch.sh finds the tar it does the following:

tar -xf /opt/update.tar -C /mnt/update
mv /mnt/update/patch.sh /opt/patch.sh
exec /opt/patch.sh

It replaces itself with another script and executes the new script from the same location.
Can any problems occur doing that?

Best Answer

If the file is replaced by being written over in-place (inode stays the same), any processes having it open would see the new data if/when they read from the file. If it's replaced by unlinking the old file and creating a new one with the same name, the inode number changes, and any processes holding the file open would still have the old file.
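Both cases are easy to observe from the inode number. A quick sketch (file names invented for the demo; assumes GNU coreutils `stat`; holding the old file open on fd 3 keeps its inode allocated, so the recreated file is guaranteed to get a different one):

```shell
tmp=$(mktemp -d)
echo old > "$tmp/f"
ino1=$(stat -c %i "$tmp/f")

# Overwrite in place: the inode stays the same, so any process
# holding the file open would see the new data.
echo new > "$tmp/f"
ino2=$(stat -c %i "$tmp/f")

# Hold the file open, then unlink and recreate it: the held inode
# stays allocated, so the new file gets a different inode.
exec 3< "$tmp/f"
rm "$tmp/f"
echo newer > "$tmp/f"
ino3=$(stat -c %i "$tmp/f")
exec 3<&-

echo "in-place: $ino1 -> $ino2, recreated: $ino3"
rm -r "$tmp"
```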

mv might do either, depending on whether the move crosses a filesystem boundary. To make sure you get a completely new file, unlink or rename the original first. Something like this:

mv /opt/patch.sh /opt/patch.sh.old     # or rm
mv /mnt/update/patch.sh /opt/patch.sh

That way, the running shell would still have a file handle to the old data, even after the move.
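A quick way to see that (file names invented for the demo; fd 3 stands in for the running shell's open handle):

```shell
tmp=$(mktemp -d)
echo 'old content' > "$tmp/script"
exec 3< "$tmp/script"              # hold the original file open
mv "$tmp/script" "$tmp/script.old" # rename the original away
echo 'new content' > "$tmp/script" # drop the replacement into place
read -r line <&3                   # the old handle still sees the old data
echo "$line"
exec 3<&-
rm -r "$tmp"
```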


That said, as far as I've tested, Bash reads the whole loop before executing any of it, so changes to the underlying file do not affect the running script as long as execution stays within the loop. After exiting the loop, Bash resumes reading the input file from the position right after the loop ended. I'm not sure if reading the full loop in advance is just so that it can check the syntax (the final done might be missing), or if it's done for caching purposes.
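That read position is easy to probe with a throwaway script that appends a command to itself (script name and temp dir invented for the demo): text appended past the current read position gets picked up and executed.

```shell
tmp=$(mktemp -d)
cat > "$tmp/self.sh" <<'EOF'
echo one
# append a new command past Bash's current read position
echo 'echo appended' >> "$0"
echo two
EOF
out=$(bash "$tmp/self.sh")
printf '%s\n' "$out"
rm -r "$tmp"
```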

Any functions defined in the script are also loaded into memory, so putting the main logic of the script into a function, and only calling that function at the end, makes the script quite safe against modifications to the file:

#!/bin/sh
main() {
    # all the real work happens here; the whole function is
    # parsed into memory before it starts executing
    do_stuff
    exit    # never fall through to reading the (possibly changed) file
}
main "$@"
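To see the effect, here is a throwaway script (names invented for the demo) that overwrites its own file from inside main; because the function was already parsed into memory, the replacement text never gets executed:

```shell
tmp=$(mktemp -d)
cat > "$tmp/self.sh" <<'EOF'
main() {
    echo start
    # clobber our own file mid-run; main is already in memory
    echo 'echo REPLACED' > "$0"
    echo finish
}
main
EOF
out=$(bash "$tmp/self.sh")
printf '%s\n' "$out"
rm -r "$tmp"
```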

Anyway, it's not too hard to test what happens when a script is overwritten:

$ cat > old.sh <<'EOF'
#!/bin/bash
for i in 1 2 3 4 ; do
        # rm old.sh
        cat new.sh > old.sh 
        sleep 1
        echo $i
done
echo will this be reached?
EOF
$ cat > new.sh <<'EOF'
#!/bin/bash
echo xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
echo xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
echo xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
echo xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
EOF
$ bash old.sh

With the rm old.sh left commented out, the script is changed in place (the inode stays the same). With the rm uncommented, old.sh is unlinked first, so a new file is created and the running shell keeps its handle to the old one.
