I think I've found the reason for this.
When a web server encounters an error, it normally serves a document (usually an HTML document describing the error) to the browser, indicating the error condition using the HTTP status code.
According to this bug report, Firefox originally always displayed the returned document; normally this is what you want. However, a user found an issue with a misconfigured AOL server: when a nonexistent EXE file was requested, the server served its 404 page, but with an incorrect Content-Type. That caused Firefox to offer to download the HTML document with a .exe extension, which was confusing because there was no indication that any error had occurred. The developers changed the behavior with a simple hack: rather than write a new error page for what is an uncommon case, they reused the existing "not found" page, which makes sense in the specific example given by the bug's reporter.
From the bug report that @m4573r found, it sounds like the current behavior is this: when Firefox receives a response with an HTTP status code signaling an error, and the response's Content-Type is something other than HTML, Firefox displays a "File not found" error page.
The vast majority of web servers are configured to serve an HTML document on error, which is why you don't normally see this. But in this corner case, the error message doesn't make sense.
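You can reproduce this corner case yourself without depending on any particular misconfigured site. Below is a minimal sketch (Python standard library only; the port choice, response body, and handler are invented for illustration) of a local server that responds the same way, i.e. with an error status and a non-HTML Content-Type:

```python
# Hedged sketch: a tiny local server that mimics the misconfiguration,
# returning an error status with a bogus, non-HTML Content-Type.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class MisconfiguredHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>Internal error details</body></html>"
        self.send_response(500)  # error status code
        # Non-HTML Content-Type, as in the case discussed here
        self.send_header("Content-Type", "magnus-internal/directory")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), MisconfiguredHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/"
try:
    urllib.request.urlopen(url)
except urllib.error.HTTPError as err:
    status, ctype = err.code, err.headers["Content-Type"]
    print(status, ctype)  # → 500 magnus-internal/directory
server.shutdown()
```

If you keep the server running instead of shutting it down and point Firefox or Chrome at the printed URL, you should see the browser's own error page rather than the HTML body the server actually sent.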
wget -d http://www.ssa.gov/framework/images/icons/png/
confirms what's going on here:
---response begin---
HTTP/1.1 500 Server Error
Server: Generic Web Server 1.0
Date: Thu, 20 Feb 2014 23:29:57 GMT
Cache-control: public
Content-type: magnus-internal/directory
Transfer-encoding: chunked
---response end---
It's serving the error page with the bogus Content-Type of magnus-internal/directory, triggering this behavior in Firefox.
Evidently Google thought that this behavior made sense and implemented it similarly in Chromium.
This sounds system-based; you likely have poorly written spyware. See this SuperUser article for recommendations.
Your hosts file may have been modified. Check it:
- Press Win + R
- Type %systemroot%\System32\drivers\etc and press OK (you may need to tell Explorer to show system files)
- Double-click on hosts and open it with Notepad
- The only line that should not have a # before it is:
127.0.0.1 localhost
(If this is a work computer, check with your IT staff before modifying this file.)
- Erase anything else that is uncommented and save.
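To illustrate the check above, here is a minimal sketch (Python standard library only; the Windows path and the sample entries are assumptions for illustration) that lists the uncommented lines of a hosts file, which are the only ones that take effect:

```python
# Hedged sketch: show the active (non-comment) entries of a hosts file.
# HOSTS_PATH assumes Windows; on Linux/macOS it would be /etc/hosts.
HOSTS_PATH = r"C:\Windows\System32\drivers\etc\hosts"

def active_entries(text):
    """Return hosts-file lines that are neither blank nor commented out."""
    return [line.strip() for line in text.splitlines()
            if line.strip() and not line.strip().startswith("#")]

# Invented sample content; a hijack redirects a real site to a bogus address.
sample = """# Comments are ignored
127.0.0.1 localhost
0.0.0.0 google.com
"""
print(active_entries(sample))  # → ['127.0.0.1 localhost', '0.0.0.0 google.com']
```

Anything in that output beyond the localhost line deserves scrutiny; to check your real file, read HOSTS_PATH and pass its contents to active_entries.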
If you are behind an admin-controlled network, you may not be getting a good address for Google.
- Press Win + R
- Type cmd and press OK.
- In the black command window, type: nslookup google.com
It should look something like the following (though not with exactly the same numbers):
Name: google.com
Addresses: 2607:f8b0:4009:804::1006
173.194.46.36
173.194.46.37
173.194.46.46
173.194.46.34
173.194.46.38
173.194.46.35
173.194.46.33
173.194.46.32
173.194.46.41
173.194.46.40
173.194.46.39
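If nslookup isn't available, the same lookup can be sketched in Python (standard library only; exactly which addresses come back depends on your resolver, so the google.com call is left as a suggestion rather than run here):

```python
# Hedged sketch: resolve a hostname through the system resolver,
# roughly what nslookup does.
import socket

def resolve(host):
    """Return the unique IP addresses a hostname resolves to."""
    infos = socket.getaddrinfo(host, None)
    return sorted({info[4][0] for info in infos})

# On a healthy machine this returns only loopback addresses:
print(resolve("localhost"))
# To check the case discussed here, try: print(resolve("google.com"))
```

If resolve("google.com") returns a single odd address instead of a spread of Google addresses like the nslookup output above, that points at DNS tampering.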
Edit due to comment:
You certainly have malware. Your DNS is hijacked.
Follow the recommendations in the article I've linked above, I recommend Malwarebytes Free.
You may need to fix your DNS; see this Microsoft article to check your DNS settings. You want it set to obtain automatically. If it is already set to obtain automatically, you may find it difficult to download anti-malware programs, in which case you should obtain them from another computer and portable media (a thumb drive, probably).
Best Answer
There is something wrong with the way that website and/or web server is delivering pages. This is not a problem on your end; it's something odd on the server side, with either the website itself or the server delivering the content.
Instead of delivering content with headers that indicate text/html, it is delivering content as application/octet-stream, which a web browser will interpret as binary data that should be handled as a file download. You can confirm this by inspecting the response headers yourself (for example with curl -I). I'm pretty sure you can't fix this on the client side unless there is some kind of plugin, for Firefox or Chrome for example, that lets you force specific headers for a web request like this.
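To make the header distinction concrete, here is a rough sketch of the decision (a deliberate simplification: real browsers also honor Content-Disposition and perform MIME sniffing, and the RENDERABLE set below is an assumption for illustration, not an actual browser list):

```python
# Hedged sketch: roughly how a browser chooses between rendering a
# response inline and offering it as a download, based on Content-Type.
RENDERABLE = {"text/html", "text/plain", "application/xhtml+xml"}

def browser_action(content_type):
    # Strip parameters such as "; charset=utf-8" before comparing.
    mime = content_type.split(";")[0].strip().lower()
    return "render" if mime in RENDERABLE else "download"

print(browser_action("text/html; charset=utf-8"))  # → render (the normal case)
print(browser_action("application/octet-stream"))  # → download (this site's case)
```

With application/octet-stream the browser has no type information to render by, which is exactly why the pages on that site arrive as file downloads.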
That said, the core content of that site is available elsewhere on the Internet on sites that are properly configured and working as expected.
Past that, if you simply want to read the content on that site — rather than debug the issue on that site itself — you can access the same content at the following other sites as per dave_thompson_085’s comment to the question:
http://www.lispworks.com/documentation/lw50/CLHS/Front/index.htm
http://www.ai.mit.edu/projects/iiip/doc/CommonLISP/HyperSpec/FrontMatter/
And as per Dave’s comment, that second MIT link should be authoritative since, “IMHO the proper home; I knew Kent at the time he worked there.”