Ubuntu – How to recursively copy/download a whole webdav directory


When I attempt to copy a folder from a webdav server to a local disk using Nautilus, it copies what appears to be a manifest file (XML with the directory listing, etc.). With cadaver I get an empty file.

I would like to be able to recursively copy a whole directory tree. Does anyone know how I can do this?

PS: I'm using Ubuntu 11.04 with Nautilus and cadaver 0.23.3.

Best Answer

This answer summarises suggestions given in comments by @Ocaso and @Rinzwind.

I used this:

wget -r -nH -np --cut-dirs=1 --no-check-certificate -U Mozilla --user={uname} \
    --password={pwd} https://my-host/my-webdav-dir/my-dir-in-webdav

Not perfect (it also downloaded lots of files like 'index.html?C=M;O=D', one per sort order of the server's auto-generated listings), but otherwise it worked fine.

The "-r" downloads recursively, following links.

The "-np" prevents ascending to parent directories (else you download the whole website!).

The "-nH" prevents creating a directory called "my-host" (which I didn't want).

The "--cut-dirs=1" prevents creating a directory called "my-webdav-dir".

The "--no-check-certificate" is because I'm using a self-signed certificate on the webdav server (I'm also forcing https).

The "-U Mozilla" sets the User-Agent header in the HTTP request to "Mozilla" - my webdav server didn't actually need this, but I've included it anyway.
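The stray 'index.html?C=M;O=D' files come from the server's auto-generated directory listings, which wget saves once per sort order. A minimal cleanup sketch (assuming the download landed in a local directory named my-dir-in-webdav, matching the URL above):

```shell
# Hypothetical cleanup after the wget run: delete the spurious
# auto-index files ("index.html", "index.html?C=M;O=D", ...) while
# keeping the real downloaded content.
TARGET=my-dir-in-webdav   # assumed name of the downloaded directory
mkdir -p "$TARGET"        # no-op if the download already created it
find "$TARGET" -type f -name 'index.html*' -delete
```

Alternatively, wget's --reject option (e.g. --reject 'index.html*') discards these files during the download itself, though wget may still fetch the listings temporarily to discover links.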
