Debian – How to curl full web page content

curl, debian, linux, Ubuntu

I want to download the full HTML source code of an X web page; however, curling the X link returns only partial HTML, because that page requires scrolling to load further content. It seems that curl doesn't go past the initial "scroll down".

So far, I can only do this manually:
1) Go to the desired website
2) Execute the following command in the browser's console to auto-scroll (load every object); a self-stopping variant is sketched after this list:

var scroll = setInterval(function(){ window.scrollBy(0,1000); }, 2000);

3) Copy the full HTML source code from the browser's "Inspect Element" view
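
For reference, a variant of the same console snippet that stops itself. It assumes the page signals "fully loaded" by no longer increasing document.body.scrollHeight; if new content takes longer than the 2-second interval to arrive, it may stop early.

var lastHeight = -1;
var scroll = setInterval(function () {
    window.scrollBy(0, 1000);                   // scroll one step down
    if (document.body.scrollHeight === lastHeight) {
        clearInterval(scroll);                  // height stopped growing: done
        console.log('fully scrolled');
    }
    lastHeight = document.body.scrollHeight;
}, 2000);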

So the question is: how can I run a curl command that scrapes the full web page content (scrolls until all objects are loaded) before outputting it at the terminal, achieving the same result as the steps above? If not with curl, maybe with wget?

Best Answer

curl isn't a full-fledged browser and, to the best of my knowledge, does not support executing JavaScript. It uses HTTP/FTP to fetch files; that is all. If you want to test functionality that depends on scripting, or on anything else a bare HTTP request cannot touch, you will need a more in-depth tool such as Selenium, which drives a real browser.
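
As a minimal sketch of that approach, the script below uses the Node bindings of Selenium (npm package selenium-webdriver) to automate exactly the manual steps from the question: scroll until the page stops growing, then print the full HTML. The target URL is a placeholder, and the choice of Firefox (which requires geckodriver on the PATH) is an assumption; swap in your site and browser.

const { Builder } = require('selenium-webdriver');

(async () => {
    const driver = await new Builder().forBrowser('firefox').build();
    try {
        await driver.get('https://example.com/infinite-scroll');  // hypothetical URL
        let lastHeight = await driver.executeScript('return document.body.scrollHeight');
        for (;;) {
            // Jump to the bottom, then give lazy-loaded content time to arrive.
            await driver.executeScript('window.scrollTo(0, document.body.scrollHeight);');
            await driver.sleep(2000);
            const newHeight = await driver.executeScript('return document.body.scrollHeight');
            if (newHeight === lastHeight) break;  // no new content: fully scrolled
            lastHeight = newHeight;
        }
        // Equivalent of copying the source from "Inspect Element".
        console.log(await driver.getPageSource());
    } finally {
        await driver.quit();
    }
})();

After npm install selenium-webdriver, running it as node scrape.js > page.html should leave the fully scrolled HTML in page.html.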
