Introduction

In this article, we are going to see how to scrape information from a website, and in particular from all pages sharing a common URL pattern. We will do that with Scrapy, a powerful yet simple scraping and web-crawling framework.

For example, you might be interested in scraping information about each article of a blog and storing that information in a database. To achieve such a thing, we will see how to implement a simple spider using Scrapy, which will crawl the blog and store the extracted data in a MongoDB database.

We will consider that you have a working MongoDB server, and that you have installed the pymongo and scrapy Python packages, both installable with pip.
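If they are not there yet, installing them should be as simple as this (adapt to your virtualenv or user-install habits):

$ pip install pymongo scrapy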

If you have never toyed around with Scrapy, you should first read this short tutorial.

First step, identify the URL pattern(s)

In this example, we’ll see how to extract the following information from each isbullsh.it blogpost:

  • title
  • author
  • tag
  • release date
  • url

We’re lucky: all posts have the same URL pattern: http://isbullsh.it/YYYY/MM/title. These links can be found on the different pages of the paginated site homepage.

What we need is a spider which will follow all links matching this pattern, scrape the required information from each target webpage, validate the data integrity, and populate a MongoDB collection.
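To make the pattern concrete, here is a quick sanity check of the kind of regular expression we will hand to the spider later on (the post slug in the example URL is made up):

import re

blogpost_pattern = re.compile(r'\d{4}/\d{2}/\w+')  # YYYY/MM/title

print(bool(blogpost_pattern.search('http://isbullsh.it/2012/04/some-post-title')))  # True: looks like a blogpost URL
print(bool(blogpost_pattern.search('http://isbullsh.it/page/3')))                   # False: pagination page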

Building the spider

We create a Scrapy project, following the instructions from their tutorial. We obtain the following project structure:

isbullshit_scraping/
├── isbullshit
│   ├── __init__.py
│   ├── items.py
│   ├── pipelines.py
│   ├── settings.py
│   └── spiders
│       ├── __init__.py
│       └── isbullshit_spiders.py
└── scrapy.cfg
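For reference, this skeleton is the one generated by Scrapy’s startproject command; the outer isbullshit_scraping/ folder is presumably just the repository directory the project was moved or renamed into:

$ scrapy startproject isbullshit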

We begin by defining, in items.py, the item structure which will contain the extracted information:

from scrapy.item import Item, Field


class IsBullshitItem(Item):
    title = Field()
    author = Field()
    tag = Field()
    date = Field()
    link = Field()
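An Item behaves pretty much like a dictionary, except that only declared fields are accepted. Just to illustrate (this snippet is not part of the project code):

from isbullshit.items import IsBullshitItem

item = IsBullshitItem()
item['title'] = "Some title"   # fields are assigned like dictionary keys
print(item['title'])           # "Some title"
print(dict(item))              # items convert cleanly to plain dicts (handy for MongoDB later)
# item['foo'] = "bar"          # would raise KeyError: 'foo' is not a declared field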

Now, let’s implement our spider, in isbullshit_spiders.py:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from isbullshit.items import IsBullshitItem


class IsBullshitSpider(CrawlSpider):
    name = 'isbullshit'
    start_urls = ['http://isbullsh.it']  # urls from which the spider will start crawling
    rules = [
        # r'page/\d+' : regular expression for http://isbullsh.it/page/X URLs
        Rule(SgmlLinkExtractor(allow=[r'page/\d+']), follow=True),
        # r'\d{4}/\d{2}/\w+' : regular expression for http://isbullsh.it/YYYY/MM/title URLs
        Rule(SgmlLinkExtractor(allow=[r'\d{4}/\d{2}/\w+']), callback='parse_blogpost')]

    def parse_blogpost(self, response):
        ...

Our spider inherits from CrawlSpider, which “provides a convenient mechanism for following links by defining a set of rules”. More info here.

We then define two simple rules:

  • Follow links pointing to http://isbullsh.it/page/X
  • Extract information from pages defined by a URL of pattern http://isbullsh.it/YYYY/MM/title, using the callback method parse_blogpost.
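Note that scrapy.contrib and SgmlLinkExtractor only exist in old Scrapy releases. If you are following along with a current version (Scrapy 1.0 or later), a roughly equivalent spider, sketched under that assumption, would look like this:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

from isbullshit.items import IsBullshitItem


class IsBullshitSpider(CrawlSpider):
    name = 'isbullshit'
    start_urls = ['http://isbullsh.it']
    rules = [
        # Follow the paginated homepage: http://isbullsh.it/page/X
        Rule(LinkExtractor(allow=[r'page/\d+']), follow=True),
        # Hand blogposts (http://isbullsh.it/YYYY/MM/title) to parse_blogpost
        Rule(LinkExtractor(allow=[r'\d{4}/\d{2}/\w+']), callback='parse_blogpost'),
    ]

    def parse_blogpost(self, response):
        ...  # extraction logic, shown below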

Extracting the data

To extract the title, author, etc., from the HTML code, we’ll use the scrapy.selector.HtmlXPathSelector object, which uses the libxml2 HTML parser. If you’re not familiar with this object, you should read the XPathSelector documentation.

We’ll now define the extraction logic in the parse_blogpost method (I’ll only show it for the title and tag(s); the logic is pretty much always the same):

def parse_blogpost(self, response):
    hxs = HtmlXPathSelector(response)
    item = IsBullshitItem()
    # Extract title
    item['title'] = hxs.select('//header/h1/text()').extract()  # XPath selector for title
    # Extract tag(s)
    item['tag'] = hxs.select("//header/div[@class='post-data']/p/a/text()").extract()  # XPath selector for tag(s)
    ...
    return item
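The remaining fields follow the same logic and would go in place of the ellipsis above. Just as an illustration — the XPath expressions below are guesses about the blog’s markup, not selectors taken from the actual pages — the rest of the method could look something like:

    # Hypothetical selectors: adjust them to the real structure of the page
    item['author'] = hxs.select("//header/div[@class='post-data']/p[@class='author']/text()").extract()
    item['date'] = hxs.select("//header/div[@class='post-data']/time/text()").extract()
    # The post URL is simply the URL of the response being parsed
    item['link'] = response.url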

Note: to be sure of the XPath selectors you define, I’d advise you to use Firebug, Firefox’s inspector, or an equivalent tool, to inspect the HTML code of a page, and then test the selectors in a Scrapy shell. That only works if the data layout is consistent across all the pages you crawl.
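A shell session would look something like this (the URL is a placeholder, and the exact objects exposed by the shell depend on your Scrapy version: older releases give you an hxs selector, newer ones a response.xpath method):

$ scrapy shell "http://isbullsh.it/YYYY/MM/some-post"
>>> hxs.select('//header/h1/text()').extract()      # old-style selector
>>> response.xpath('//header/h1/text()').extract()  # newer Scrapy equivalent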

Store the results in MongoDB

Each time the parse_blogpost method returns an item, we want it to be sent to a pipeline which will validate the data and store everything in our Mongo collection. First, we need to add a couple of things to settings.py:

ITEM_PIPELINES = ['isbullshit.pipelines.MongoDBPipeline',]
MONGODB_SERVER = "localhost"
MONGODB_PORT = 27017
MONGODB_DB = "isbullshit"
MONGODB_COLLECTION = "blogposts"
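Note that recent Scrapy versions expect ITEM_PIPELINES to be a dict mapping each pipeline path to an integer that sets its execution order; the list form above only works on old releases:

ITEM_PIPELINES = {'isbullshit.pipelines.MongoDBPipeline': 300}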

Now that we’ve declared our pipeline and pointed it at a MongoDB database and collection, we’re just left with the pipeline implementation itself. We just want to be sure that we do not have any missing data (e.g. a blogpost without a title, author, etc.).

Here is our pipelines.py file:

import pymongo
from scrapy.exceptions import DropItem
from scrapy.conf import settings
from scrapy import log


class MongoDBPipeline(object):
    def __init__(self):
        connection = pymongo.Connection(settings['MONGODB_SERVER'], settings['MONGODB_PORT'])
        db = connection[settings['MONGODB_DB']]
        self.collection = db[settings['MONGODB_COLLECTION']]

    def process_item(self, item, spider):
        valid = True
        for data in item:
            # iterating over an item yields its field names, so we look up
            # each value; here we only check that no field is empty,
            # but we could do any crazy validation we want
            if not item.get(data):
                valid = False
                raise DropItem("Missing %s of blogpost from %s" % (data, item.get('link')))
        if valid:
            self.collection.insert(dict(item))
            log.msg("Item written to MongoDB database %s/%s" %
                    (settings['MONGODB_DB'], settings['MONGODB_COLLECTION']),
                    level=log.DEBUG, spider=spider)
        return item
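If you are running recent versions of pymongo and Scrapy, note that pymongo.Connection, scrapy.conf and scrapy.log have all been removed since this was written. A minimal sketch of the same pipeline against current APIs, assuming the same setting names, could look like this:

import pymongo
from scrapy.exceptions import DropItem


class MongoDBPipeline(object):
    @classmethod
    def from_crawler(cls, crawler):
        # Scrapy calls this hook and hands us the crawler, whose settings
        # hold the MONGODB_* values defined in settings.py
        return cls(crawler.settings)

    def __init__(self, settings):
        client = pymongo.MongoClient(settings['MONGODB_SERVER'], settings['MONGODB_PORT'])
        db = client[settings['MONGODB_DB']]
        self.collection = db[settings['MONGODB_COLLECTION']]

    def process_item(self, item, spider):
        for field in item:
            # drop the item as soon as one field is empty
            if not item.get(field):
                raise DropItem("Missing %s of blogpost from %s" % (field, item.get('link')))
        self.collection.insert_one(dict(item))
        spider.logger.debug("Item written to MongoDB")
        return item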

Release the spider!

Now, all we have to do is change directory to the root of our project and execute

$ scrapy crawl isbullshit

The spider will then follow all links pointing to a blogpost, retrieve the post title, author name, date, etc., validate the extracted data, and store it all in a MongoDB collection if the validation went well.
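If you just want to eyeball the scraped items without going through MongoDB, Scrapy’s built-in feed export can dump them to a file with the -o option:

$ scrapy crawl isbullshit -o blogposts.json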

Pretty neat, hm?

Conclusion

This case is pretty simplistic: all URLs have a similar pattern, and all links are hard-coded in the HTML: there is no JavaScript involved. In cases where the links you want to reach are generated by JavaScript, you’d probably want to check out Selenium. You could make the spider more elaborate by adding new rules or more complicated regular expressions, but I just wanted to demo how Scrapy works, not to get into crazy regex explanations.

Also, be aware that sometimes there’s a thin line between playing with web scraping and getting into trouble.

Finally, when toying with web-crawling, keep in mind that you might just flood the server with requests, which can sometimes get you IP-blocked 🙂 Please, don’t be a d*ick.

See code on GitHub

The entire code of this project is hosted on GitHub. Help yourselves!

The original article comes from the isbullsh.it site; it is reproduced here for learning purposes.