wget – Fix 403 Forbidden Error When Downloading from OneDrive for Business

curl, wget

I'm trying to download software for a client which resides in my business OneDrive account. Through the OneDrive web interface I do the following:

  • Right click the file and select Share
  • Change the link settings to "Anyone with the link"
  • Keep the checkbox at "Allow editing"
  • Copy link

This gives me a link like this:

https://company-my.sharepoint.com/:u:/g/personal/path/lKuaRC_jkBwW9IJo4rOmN8tZju8mePVw?e=lRErX4

When I browse to that link, I'm redirected to a download page with a "Download" button. I click Download, open the download center, and select "Copy download link" so that I have the direct download link. When I paste that link into a new private window, I get the option to download the file directly.

When I use that link with curl or wget, however, I still get a 403 Forbidden error.

For example, this is the command I use:

wget 'https://company-my.sharepoint.com/personal/path/to/the/file.aspx?SourceUrl=%2Fpersonal%2Fme%5Fcompany%5Fcountry%2FDocuments%2Fpath%2FSoftware%2FDownloadPackageLocationLinuxSP16%2Etar'

and the output:

Resolving company-my.sharepoint.com (company-my.sharepoint.com)... aa.bbb.cc.d
Connecting to company-my.sharepoint.com (company-my.sharepoint.com)|aa.bbb.cc.d|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
2020-01-24 14:46:48 ERROR 403: Forbidden.
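
For reference, asking wget or curl to print the full response headers makes the 403 easier to inspect. These are only debugging aids using the same placeholder URL as above, not a fix:

# Print every HTTP response header the server sends (wget)
wget --server-response 'https://company-my.sharepoint.com/personal/path/to/the/file.aspx?SourceUrl=...'

# Verbose mode shows request and response headers; discard the body (curl)
curl -v -o /dev/null 'https://company-my.sharepoint.com/personal/path/to/the/file.aspx?SourceUrl=...'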

Best Answer

Both Google Chrome and Mozilla Firefox provide an option to copy a request as a cURL command. This generates a curl invocation with everything required to download the file from the site, such as the User-Agent header and the session cookies. To get it:

  1. Open the URL in either browser.
  2. Open the developer tools with Ctrl+Shift+I.
  3. Go to the Network tab.
  4. Now click Download. Saving the file isn't required; we only need the network activity generated while the browser requests the file from the server.
  5. A new entry will appear that looks like "download.aspx?...".
  6. Right-click it and choose Copy → Copy as cURL.
  7. Paste the copied command directly into the terminal and append --output file.extension to save the content to file.extension, since the terminal can't display binary data.

An example of the command:

curl 'https://company-my.sharepoint.com/personal/path/_layouts/15/download.aspx?SourceUrl=%2Fpersonal%2Fsome%5Fpath%5Fin%2Ffile' \
  -H 'User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:73.0) Gecko/20100101 Firefox/73.0' \
  -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8' \
  -H 'Accept-Language: en-US,en;q=0.5' \
  --compressed \
  -H 'DNT: 1' \
  -H 'Connection: keep-alive' \
  -H 'Referer: https://company-my.sharepoint.com/personal/path/_layouts/15/onedrive.aspx?id=%2Fpersonal%2Fagain%5Fa%5Fpath%2Ffile&parent=%2Fpersonal%2Fpath%5Fagain%5Fin%2&originalPath=somegibberishpath' \
  -H 'Cookie: MicrosoftApplicationsTelemetryDeviceId=someid; MicrosoftApplicationsTelemetryFirstLaunchTime=somevalue; rtFa=rootFederationAuthenticationCookie; FedAuth=againACookie; CCSInfo=gibberishText; FeatureOverrides_enableFeatures=; FeatureOverrides_disableFeatures=' \
  -H 'Upgrade-Insecure-Requests: 1' \
  -H 'If-None-Match: "{some value},2"' \
  -H 'TE: Trailers' \
  --output file.extension
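
In my experience, most of those headers are optional: what SharePoint Online usually checks are the authentication cookies (rtFa and FedAuth) plus a browser-like User-Agent. A trimmed-down sketch of the same command, with the same placeholder URL and cookie values (copy the real ones from your own browser session), is often enough, and the wget equivalent follows the same pattern:

# Trimmed-down sketch – the URL and cookie values below are placeholders;
# replace them with the ones copied from your own browser session.
curl 'https://company-my.sharepoint.com/personal/path/_layouts/15/download.aspx?SourceUrl=%2Fpersonal%2Fsome%5Fpath%5Fin%2Ffile' \
  -H 'User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:73.0) Gecko/20100101 Firefox/73.0' \
  -H 'Cookie: rtFa=rootFederationAuthenticationCookie; FedAuth=againACookie' \
  --output file.extension

# The same request with wget
wget --header='User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:73.0) Gecko/20100101 Firefox/73.0' \
     --header='Cookie: rtFa=rootFederationAuthenticationCookie; FedAuth=againACookie' \
     -O file.extension \
     'https://company-my.sharepoint.com/personal/path/_layouts/15/download.aspx?SourceUrl=%2Fpersonal%2Fsome%5Fpath%5Fin%2Ffile'

Keep in mind that rtFa and FedAuth are session cookies, so once they expire the request will return 403 again and you'll need to copy a fresh command from the browser.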

Further reading: Why would curl and wget result in a 403 Forbidden?
