Your command is trying to put the new file at the root (/) of the remote machine. What you want is to transfer it to your home directory, since you have no permission to write to /. If the path to your home directory is something like /home/erez, try the following:
scp My_file.txt user_id@server:/home/erez/
You can substitute the path to your home directory with the shortcut ~/, so the following will have the same effect:
scp My_file.txt user_id@server:~/
You can even leave out the path altogether on the remote side; an empty remote path means your home directory:
scp My_file.txt user_id@server:
That is, to copy the file to your desktop you would transfer it to /home/erez/Desktop/:
scp My_file.txt user_id@server:/home/erez/Desktop/
or using the shortcut:
scp My_file.txt user_id@server:~/Desktop/
or using a relative path on the remote side, which is interpreted relative to your home directory:
scp My_file.txt user_id@server:Desktop/
Edit:
As @ckhan already mentioned, you also have to swap the arguments; the syntax is
scp FROM TO
So if you want to copy the file My_file.txt from the server user_id@server to your desktop, you should try the following:
scp user_id@server:/path/to/My_file.txt ~/Desktop/
If the file My_file.txt is located in your home directory on the server, you may again use the shortcut:
scp user_id@server:~/My_file.txt ~/Desktop/
There are a number of solutions.
Background & Disown the Process
- Open an ssh terminal to the remote server.
- Begin the scp transfer as usual.
- Background the scp process (Ctrl+Z, then the command bg).
- Disown the backgrounded process (disown).
- Terminate the session (exit) and the process will continue to run on the remote machine.
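The steps above can be collapsed into a single backgrounded, disowned invocation (file and host names here are hypothetical examples):

```shell
# start the copy in the background, with output redirected to a file
# so nothing is written to the session's tty
scp big_file.txt user_id@other-host:~/ > scp.log 2>&1 &

# remove the job from the shell's job table so it survives logout
disown

# end the session; the transfer keeps running on the remote machine
exit
```

Redirecting stdout and stderr up front also sidesteps the tty-buffer problem described below.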
One disadvantage of this approach is that the file descriptors for stdout and stderr will still reference your ssh session's tty, so the terminal may hang when you try to exit. You can work around this by typing ~. to force-close your ssh client (that escape sequence must follow a newline; see also ~?). If the process you are abandoning writes to stdout or stderr, it may exit prematurely once the tty buffer fills up.
Create a Screen Session and Detach It
GNU Screen can be used to create a remote terminal session, which can be detached and continue to run on the server after you log out of the session. You can then log back into the server at a later date and reattach to the session.
- Log into the remote server over ssh.
- Start a screen session: screen -D -R <session_name>.
- Begin the scp transfer as usual.
- Detach the screen session with Ctrl+A then d.
- Terminate the ssh session (exit).
To reattach to the session:
- Log into the remote server over ssh.
- Reattach to the screen session: screen -D -R <session_name>.
Run the Command without Hangups
See the answer using nohup.
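A minimal nohup version of the same idea (file and host names are hypothetical):

```shell
# nohup makes the command immune to the hangup signal sent on logout;
# output goes to scp.log instead of the terminal
nohup scp big_file.txt user_id@other-host:~/ > scp.log 2>&1 &
```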
Use a Task Scheduler
This is the best solution if this is a periodic sort of task that you want to automate.
Use crontab, at, or batch to schedule the transfer.
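For example, a crontab entry for a nightly transfer might look like this (the paths are hypothetical, and it assumes passwordless SSH key authentication so scp can run unattended):

```shell
# crontab entry (edit with `crontab -e`): run every day at 02:00
0 2 * * * scp /data/backup.tar.gz user_id@server:~/backups/
```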
scp transfers files over SSH, which does cryptographic authenticity and integrity checking. That basically rules out the bad-WLAN possibility.
Bad memory is reasonably likely. Note that bad memory is often bad from the start; it doesn't typically develop with age. Installing and running memtest86/memtest86+ will either confirm this or mostly rule it out. (To rule it out, you want to leave the test running for a while, at least overnight.) If it finds an error, you don't need to keep it running; you can stop immediately and proceed to replacing DIMMs.
The disk corrupting it is also possible. Similarly, you could have bad cabling to the disk, or a defective controller, etc.
Other possibilities include filesystem bugs (unlikely if you're using something common like ext4) and malware (thankfully fairly uncommon on Linux), but this is most likely a hardware problem.
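To confirm whether a given transfer actually corrupted the file, you can compare checksums on both ends (the file name and host are hypothetical):

```shell
# checksum the local copy
sha256sum My_file.txt

# checksum the remote copy over ssh; the two hashes should match
ssh user_id@server sha256sum My_file.txt
```

If the hashes differ, the corruption happened in transit or on one of the disks; if they match but the file is still bad, it was corrupted before the copy.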