Channel: From Accessibility to Zope » Python

Image spidering in Python

I have several useful tools in Python for working with websites. Today I needed a script to report the images on a website, along with their corresponding alt text. The script was extremely quick to write using the available tools, which makes it a fairly good example of how powerful Python is.

I have based this script on a pre-existing webspider class I have written:

import re
import sys
import urllib2

from lxml.html import ElementSoup

class Spider(object):
   def __init__(self, base_url):
      self.base_url = base_url
   
   def pages(self):
      queue = [self.base_url]
      seen = set(queue)
   
      while queue:
         url = queue.pop(0)
         f = urllib2.urlopen(url)
         if f.info().gettype() not in ['text/html', 'application/xhtml+xml']:
            continue
         doc = ElementSoup.parse(f)
         doc.make_links_absolute(url)
         for element, attribute, link, pos in doc.iterlinks():
            if not link.startswith(self.base_url):
               continue
            if element.tag == 'a' and attribute == 'href':
               l = re.sub(r'#.*$', '', link)
               if l not in seen:
                  queue.append(l)
                  seen.add(l)
   
         path = url[len(self.base_url):]
         yield path, doc

This class wraps a generator that yields a (path, document) pair for every page it finds on the site. Generators are incredibly useful for keeping code simple without being memory hungry. Not only is yield easier to write than building up a list of items, but in this case it buys more than convenience: the code returns one lxml element tree at a time, rather than fetching and parsing every page up front.
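The lazy behaviour is easy to demonstrate with a stripped-down stand-in (shown in modern Python; the fetch_and_parse helper here is hypothetical, standing in for the urlopen-and-parse step):

```python
fetched = []

def fetch_and_parse(url):
    # Hypothetical stand-in for urlopen + ElementSoup.parse.
    fetched.append(url)
    return '<doc for %s>' % url

def pages(urls):
    # Yields one (path, document) pair at a time; no work happens up front.
    for url in urls:
        yield url, fetch_and_parse(url)

gen = pages(['/a', '/b', '/c'])
assert fetched == []                      # creating the generator fetches nothing
path, doc = next(gen)
assert (path, fetched) == ('/a', ['/a'])  # one page fetched, on demand
```

Nothing is fetched until the caller asks for the next item, which is exactly the property that keeps the spider's memory use flat.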

Generators encapsulate state as local variables, which generally means you don't even need to wrap them in a class like I've done. I only do this because I like to add functionality by subclassing. This may be a throwback to my days of programming Java.
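For comparison, here is a sketch of the same idea without the class wrapper: the crawl state (the queue and the seen set) lives entirely in the generator's local variables. The links_of callable is a hypothetical stand-in for the fetch-and-extract-links step.

```python
def spider(base_url, links_of):
    # links_of: hypothetical callable mapping a URL to its outgoing links.
    queue = [base_url]
    seen = set(queue)
    while queue:
        url = queue.pop(0)
        for link in links_of(url):
            if link.startswith(base_url) and link not in seen:
                queue.append(link)
                seen.add(link)
        yield url

site = {'/': ['/a', '/b'], '/a': ['/'], '/b': ['/a']}
assert list(spider('/', site.get)) == ['/', '/a', '/b']
```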

It should be noted that most of the heavy lifting here is done by lxml and BeautifulSoup. lxml.html makes it extremely easy to work with HTML, and BeautifulSoup's excellent tolerance for broken HTML is used not because my own HTML demands it, but so that this one script works on any site I point it at.
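As a rough, stdlib-only illustration of tolerant parsing (Python's built-in html.parser also copes with messy markup, though it is far less capable than BeautifulSoup):

```python
from html.parser import HTMLParser

class ImgCollector(HTMLParser):
    # Collects (src, alt) pairs even from unquoted, unclosed markup.
    def __init__(self):
        super().__init__()
        self.images = []

    def handle_starttag(self, tag, attrs):
        if tag == 'img':
            d = dict(attrs)
            self.images.append((d.get('src'), d.get('alt')))

p = ImgCollector()
p.feed('<p><img src=a.png alt="A"><img src=b.png>')  # unquoted, unclosed
assert p.images == [('a.png', 'A'), ('b.png', None)]
```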

class ImageSpider(Spider):
   def images(self):
      seen = set()
      for path, doc in self.pages():
         imgs = []
         for img in doc.findall('.//img'):
            src = img.get('src')
            alt = img.get('alt')
            title = img.get('title')
            i = (src, alt, title)
            if i not in seen:
               seen.add(i)
               imgs.append(i)
   
         if imgs:
            yield path, imgs
   
...

This is another generator that effectively filters the stream of pages, yielding the list of images found on each one. Generators calling generators is again very elegant: each time the caller asks for the next page of images, ImageSpider pulls pages from the original Spider until it finds one that contains images.
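A minimal sketch of this filtering pattern, with a hard-coded pages() standing in for the real spider:

```python
def pages():
    # Hypothetical stand-in for Spider.pages().
    yield '/', []
    yield '/about', ['logo.png']
    yield '/contact', []

def images(page_iter):
    # Filters the page stream, yielding only pages that contain images;
    # the underlying generator is consumed one page at a time.
    for path, imgs in page_iter:
        if imgs:
            yield path, imgs

assert list(images(pages())) == [('/about', ['logo.png'])]
```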

   def text_report(self, out=sys.stdout):
      for path, imgs in self.images():
         print >>out, 'In', path
         for src, alt, title in imgs:
            print >>out, '- src:', src
            if alt is not None:
               print >>out, '  alt:', alt
            else:
               print >>out, '  alt is MISSING'
            if title is not None:
               print >>out, '  title:', title
         print >>out

Other methods of ImageSpider generate reports. Here I use the handy print chevrons to write to any file-like object. File-like objects are a particularly handy piece of duck typing. By default these methods will write to stdout, which is the same as printing normally, but you can pass in any other file-like object for very simple redirection.
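The duck typing is easy to see with an in-memory buffer (shown in modern Python, where io.StringIO plays the file-like role):

```python
import io

def report(items, out):
    # Anything with a write() method works here: a real file,
    # sys.stdout, or an in-memory buffer.
    for item in items:
        out.write('- %s\n' % item)

buf = io.StringIO()
report(['a.png', 'b.png'], buf)
assert buf.getvalue() == '- a.png\n- b.png\n'
```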

   def html_report(self, out=sys.stdout):
      from cgi import escape
      print >>out, """<html>
   <head>
      <title>Image Report for %(base_url)s</title>
   </head>
   <body>
      <h1>Image report for %(base_url)s</h1>
""" % {'base_url': escape(self.base_url)}
   
      for path, imgs in self.images():
         print >>out, '\t\t<h2>%s</h2>' % escape(path).encode('utf8')
         for src, alt, title in imgs:
            idict = {'src': escape(unicode(src)).encode('utf8'),
                'alt': escape(unicode(alt if alt is not None else '')).encode('utf8'),
                'title': escape(unicode(title if title is not None else '')).encode('utf8')}
            print >>out, '\t\t<img src="%(src)s" alt="%(alt)s" />' % idict
            if alt is not None:
               print >>out, '\t\t<p><strong>alt:</strong> %(alt)s</p>' % idict
            else:
               print >>out, '\t\t<p><strong>alt is MISSING</strong></p>'
            if title is not None:
               print >>out, '\t\t<p><strong>title:</strong> %(title)s</p>' % idict
            print >>out
      print >>out, """   </body>
</html>
"""

This method is similar, but also demonstrates a simple form of templating: the % string-formatting operator can look up values by name from a dictionary.
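In isolation, the named-placeholder form looks like this:

```python
values = {'base_url': 'http://example.com', 'title': 'Home'}

# %(name)s pulls the value for 'name' out of the dictionary on the right;
# keys may be repeated in the template, or left unused.
line = '<h1>Image report for %(base_url)s</h1>' % values
assert line == '<h1>Image report for http://example.com</h1>'
```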

Finally, there's the commandline interface to all this:

from optparse import OptionParser
   
op = OptionParser()
op.add_option('-f', '--format', choices=['text', 'html'])
op.add_option('-o', '--outfile')
   
options, args = op.parse_args()
   
if len(args) != 1:
   op.error('You must provide a site URL from which to spider images.')
   
s = ImageSpider(args[0])
   
if options.outfile:
   out = open(options.outfile, 'w')
else:
   out = sys.stdout
   
if options.format == 'html':
   s.html_report(out)
else:
   s.text_report(out)

In a few lines, the amazing optparse module turns a quick script into a flexible commandline tool.
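For instance, parse_args accepts an explicit argument list, which makes the option handling easy to exercise without touching sys.argv (a sketch mirroring the options above):

```python
from optparse import OptionParser

op = OptionParser()
op.add_option('-f', '--format', choices=['text', 'html'], default='text')
op.add_option('-o', '--outfile')

# Passing a list instead of reading sys.argv is handy for testing.
options, args = op.parse_args(['-f', 'html', 'http://example.com'])
assert options.format == 'html'
assert options.outfile is None
assert args == ['http://example.com']
```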

Download the source: siteimages.py

