Ubuntu – How to create a CLI Web Spider that uses keywords and filters content

command-line, curl, scripts, wget

I want to find my articles within the deprecated (obsolete) literature forum e-bane.net. Some of the forum modules are disabled, and I can't get a list of articles by their author. Also, the site is not indexed by search engines such as Google, Yandex, etc.

The only way to find all of my articles is to open the archive page of the site (fig.1). Then I must select a certain year and month, e.g. January 2013 (fig.1). Then I must inspect each article (fig.2) to see whether my nickname, pa4080, is written at the beginning (fig.3). But there are a few thousand articles.

(fig.1: the site's archive page)

(fig.2: an article page)

(fig.3: the nickname at the beginning of an article)

I've read a few topics such as the following, but none of the solutions fits my needs:

I will post my own solution. But I am still interested:
Is there a more elegant way to solve this task?

Best Answer

script.py:

#!/usr/bin/python3
from urllib.parse import urljoin
import json

import bs4
import click
import aiohttp
import asyncio
import async_timeout


BASE_URL = 'http://e-bane.net'


async def fetch(session, url):
    # Download a page, retrying once if the first attempt times out.
    for attempt in range(2):
        try:
            with async_timeout.timeout(20):
                async with session.get(url) as response:
                    return await response.text()
        except asyncio.TimeoutError:
            print('[timeout error]{}'.format(url))
    raise asyncio.TimeoutError(url)


async def get_result(user):
    target_url = 'http://e-bane.net/modules.php?name=Stories_Archive'
    res = []
    async with aiohttp.ClientSession() as session:
        html = await fetch(session, target_url)
        html_soup = bs4.BeautifulSoup(html, 'html.parser')
        date_module_links = parse_date_module_links(html_soup)
        for dm_link in date_module_links:
            html = await fetch(session, dm_link)
            html_soup = bs4.BeautifulSoup(html, 'html.parser')
            thread_links = parse_thread_links(html_soup)
            print('[{}]{}'.format(len(thread_links), dm_link))
            for t_link in thread_links:
                thread_html = await fetch(session, t_link)
                t_html_soup = bs4.BeautifulSoup(thread_html, 'html.parser')
                if is_article_match(t_html_soup, user):
                    print('[v]{}'.format(t_link))
                    # to get main article, uncomment below code
                    # res.append(get_main_article(t_html_soup))
                    # code below is used to get thread link
                    res.append(t_link)
                else:
                    print('[x]{}'.format(t_link))

        return res


def parse_date_module_links(page):
    a_tags = page.select('ul li a')
    hrefs = [x.get('href') for x in a_tags]
    return [urljoin(BASE_URL, x) for x in hrefs]


def parse_thread_links(page):
    a_tags = page.select('table table  tr  td > a')
    hrefs = [x.get('href') for x in a_tags]
    # filter href with 'file=article'
    valid_hrefs = [x for x in hrefs if 'file=article' in x]
    return [urljoin(BASE_URL, x) for x in valid_hrefs]


def is_article_match(page, user):
    main_article = get_main_article(page)
    return main_article.text.startswith(user)


def get_main_article(page):
    td_tags = page.select('table table td.row1')
    # on this forum's layout, the fifth td.row1 cell holds the main article body
    td_tag = td_tags[4]
    return td_tag


@click.command()
@click.argument('user')
@click.option('--output-filename', default='out.json', help='Output filename.')
def main(user, output_filename):
    loop = asyncio.get_event_loop()
    res = loop.run_until_complete(get_result(user))
    # if you want to return main article, convert html soup into text
    # text_res = [x.text for x in res]
    # else just put res on text_res
    text_res = res
    with open(output_filename, 'w') as f:
        json.dump(text_res, f)


if __name__ == '__main__':
    main()

requirement.txt:

aiohttp>=2.3.7
beautifulsoup4>=4.6.0
click>=6.7

Here is the Python 3 version of the script (tested with Python 3.5 on Ubuntu 17.10).

How to use:

  • Put both pieces of code in files, e.g. the code in script.py and the package list in requirement.txt.
  • Run pip install -r requirement.txt.
  • Run the script, e.g. python3 script.py pa4080

It uses several libraries:

Things to know to develop the program further (other than the docs of the required packages):

  • Python standard libraries: asyncio, json and urllib.parse
  • CSS selectors (MDN web docs), plus some HTML; see also how to use CSS selectors in your browser, e.g. this article
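As a quick illustration of CSS selectors with BeautifulSoup (the HTML snippet below is made up, but the td.row1 class mirrors the forum's markup):

```python
import bs4

# minimal, made-up HTML in the style of the forum's nested-table layout
html = '<table><tr><td class="row1"><a href="modules.php?file=article&sid=1">Post</a></td></tr></table>'
soup = bs4.BeautifulSoup(html, 'html.parser')

# 'td.row1 > a' selects <a> tags that are direct children of <td class="row1">
links = [a.get('href') for a in soup.select('td.row1 > a')]
print(links)  # ['modules.php?file=article&sid=1']
```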

How it works:

  • First I created a simple HTML downloader. It is a modified version of the sample given in the aiohttp docs.
  • After that I created a simple command-line parser which accepts a username and an output filename.
  • Then I created parsers for the thread links and the main article. Using pdb and simple URL manipulation should do the job.
  • Finally I combined the functions and put the main articles into JSON, so other programs can process them later.
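The "simple URL manipulation" in the third step boils down to urllib.parse.urljoin, which turns the relative hrefs scraped from the archive page into absolute URLs:

```python
from urllib.parse import urljoin

BASE_URL = 'http://e-bane.net'

# a relative href as found on the archive page
href = 'modules.php?name=Stories_Archive'
print(urljoin(BASE_URL, href))  # http://e-bane.net/modules.php?name=Stories_Archive
```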

Some ideas so it can be developed further:

  • Create another subcommand that accepts a date-module link: this can be done by separating the method that parses the date modules into its own function and combining it with a new subcommand.
  • Cache the date-module links: create a cache JSON file after getting the thread links, so the program doesn't have to parse the links again. Or even cache each thread's entire main article, even if it doesn't match.
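A minimal sketch of the caching idea; the cache file name thread_links_cache.json and the example link are my own placeholders, not part of the script above:

```python
import json
import os

CACHE_FILE = 'thread_links_cache.json'  # hypothetical cache file name


def load_cached_links():
    # Return previously saved thread links, or None on a cache miss.
    if os.path.exists(CACHE_FILE):
        with open(CACHE_FILE) as f:
            return json.load(f)
    return None


def save_links(links):
    # Persist the thread links so the next run can skip re-parsing.
    with open(CACHE_FILE, 'w') as f:
        json.dump(links, f)


links = load_cached_links()
if links is None:
    # in the real script this list would come from parse_thread_links()
    links = ['http://e-bane.net/modules.php?file=article&sid=1']
    save_links(links)
```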

This is not the most elegant answer, but I think it is better than the bash answer:

  • It uses Python, which means it can be used cross-platform.
  • Simple installation: all required packages can be installed using pip.
  • It can be developed further: the more readable the program, the easier it is to develop.
  • It does the same job as the bash script in only 13 minutes.