Ubuntu – Scan a file over the Internet by its url using ClamAV


Sometimes I need to scan files over the Internet given their URLs. I didn't find any information about this capability in the ClamAV documentation. Does ClamAV allow that? And how do I get the scan result?

Best Answer

To scan a file, you have to download it. You can only scan it on the server if it is your server and you have remote access to it. So if you want to download the file from the terminal and run a scan as soon as the download finishes, use a command like:

wget -O /tmp/filetoscan http://LINK_TO_THE_FILE && clamscan /tmp/filetoscan
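For a one-off check, the same download-and-scan step can be sketched with a temporary file that is cleaned up afterwards. The URL is the same placeholder as above; clamscan documents exit code 0 for no virus found, 1 for virus found, and 2 for errors, which the sketch records before deleting the file:

```shell
#!/bin/sh
# Sketch: download to a temp file, scan it, record the result, clean up.
# The URL is a placeholder; replace it with the real link.
URL="http://LINK_TO_THE_FILE"
TMP=$(mktemp /tmp/filetoscan.XXXXXX)

if command -v clamscan >/dev/null 2>&1; then
    # clamscan exits 0 (clean), 1 (virus found), or 2 (error)
    wget -qO "$TMP" "$URL" && clamscan "$TMP"
    RESULT=$?
else
    RESULT=2   # treat a missing scanner as an error
fi
rm -f "$TMP"   # the temp copy is no longer needed either way
```

You can then branch on `$RESULT` to decide whether to keep, quarantine, or redownload the file.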

or if you don't need the file after download, you can pipe it to clamscan:

wget -qO- http://LINK_TO_THE_FILE | clamscan -

The above still downloads the file; the only difference is that it is not stored on disk. (Your bandwidth is still used during the process, and if you decide the file is clean, you have to download it again if you need it on your PC.)

If you don't want to download it, then you have to use a cloud service that downloads and scans it for you. For example, VirusTotal does this, and ClamAV is among the engines it scans with. VirusTotal also has a public API, which you can use to initiate scans from your terminal or from any program you write.
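As a rough sketch, submitting a URL for scanning through VirusTotal's v3 API looks like the following. The API key and file URL are placeholders you must supply yourself; the endpoint and the `x-apikey` header follow VirusTotal's published v3 API, but check their current documentation before relying on this:

```shell
#!/bin/sh
# Sketch: ask VirusTotal (v3 API) to scan a URL for you.
# VT_API_KEY and the file URL are placeholders, not real values.
VT_API_KEY="${VT_API_KEY:-YOUR_API_KEY}"
FILE_URL="http://LINK_TO_THE_FILE"

if [ "$VT_API_KEY" != "YOUR_API_KEY" ]; then
    # Submit the URL for analysis; the JSON response contains an analysis id
    # you can poll for the per-engine results (including ClamAV).
    curl --request POST \
         --url https://www.virustotal.com/api/v3/urls \
         --header "x-apikey: $VT_API_KEY" \
         --form "url=$FILE_URL"
else
    echo "Set VT_API_KEY to submit $FILE_URL for scanning"
fi
```

Note that submissions to VirusTotal are shared with its community, so don't submit confidential files or links.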

If it is your own server with ClamAV installed and you have SSH access to it, then you can simply SSH to the server and scan the file just as you would on your client. You can use something like:

ssh YOUR_USER@`echo "example.com/file1" | awk -F "/" '{print $1}'` "clamscan /var/www`echo "example.com/file1" | awk -F ".com" '{print $2}'`"

Of course, this depends on your server's document root (/var/www) and on any other server settings that should be taken into account when reconstructing the file path from the URL, but since you know how your server is set up, you can adapt it.
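The one-liner above can be made more readable with plain shell parameter expansion instead of awk. This sketch assumes the same layout (document root /var/www, path on the server mirroring the URL path) and only prints the command it would run, so you can inspect it before adding the actual `ssh` call:

```shell
#!/bin/sh
# Sketch: split a URL into host and path, then build the remote scan command.
# DOCROOT is an assumption about the server layout; adjust it for your setup.
URL="example.com/file1"
DOCROOT="/var/www"

HOST="${URL%%/*}"       # strip everything after the first "/" -> example.com
FILEPATH="/${URL#*/}"   # keep everything after the first "/"  -> /file1

echo "would run: ssh YOUR_USER@$HOST \"clamscan $DOCROOT$FILEPATH\""
```

Running it prints `would run: ssh YOUR_USER@example.com "clamscan /var/www/file1"`, matching what the awk version constructs.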
