Think you can Trust Python's stdlib? Think again.

It's been a while since I last blogged about Ken Thompson's Reflections on Trusting Trust. This week I was bitten hard by its moral:

The moral is obvious. You can't trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code. In demonstrating the possibility of this kind of attack, I picked on the C compiler. I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well installed microcode bug will be almost impossible to detect.

The task seemed simple enough. We had been passing around links between clones in a URL-like format of the type ${host}:${port}/${path}, with a small custom parser (an ugly hack) for parsing and unparsing these things. As we adapted the code to support IPv6, it turned out that in many cases (i.e. unless the nodename field was configured) raw IPv6 addresses would be passed around, and the parser would of course choke on that. Fair enough, I thought, time to use the established standards and

import urlparse 

Now this is supposed to split the URI into parts corresponding to scheme, host, path, etc., like so:

>>> urlparse.urlparse("http://foo.com/bar") 
('http', 'foo.com', '/bar', '', '', '')
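
For the record, the hack being replaced amounted to little more than the following (a hypothetical reconstruction, not our actual code), and a raw IPv6 address has colons of its own, which is exactly what it chokes on:

def split_link(link):
    # old scheme-less clone-link format: ${host}:${port}/${path}
    host, rest = link.split(':', 1)
    port, path = rest.split('/', 1)
    return host, int(port), '/' + path

split_link("foo.com:6221/bar")    # -> ('foo.com', 6221, '/bar')
split_link("fe80::1:6221/bar")    # ValueError: the first ':' sits inside the
                                  # address, so int(':1:6221') blows up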

Of course, most nodes still had the old clone links lying around, and I was surprised by what the parser made of these entries:

>>> urlparse.urlparse("foo.com:6221/bar") 
('foo.com', '', '6221/bar', '', '', '')
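
The colon is the culprit: with no scheme in front, everything up to the first ':' gets promoted to the scheme and the rest is treated as a path. Prefix an explicit scheme and the very same link comes out sane (on the stock Python 2 urlparse used here, at any rate):

>>> urlparse.urlparse("http://foo.com:6221/bar")
('http', 'foo.com:6221', '/bar', '', '', '')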

Hmm. OK. Let's look at the internals of that parser, and vi urlparse.py:

def urlsplit(url, scheme='', allow_fragments=1):
    """Parse a URL into 5 components:
    <scheme>://<netloc>/<path>?<query>#<fragment>

    [snip]

    (e.g. netloc is a single string) and we don't expand % escapes."""
    key = url, scheme, allow_fragments
    cached = _parse_cache.get(key, None)
    if cached:
        return cached
    if len(_parse_cache) >= MAX_CACHE_SIZE: # avoid runaway growth
        clear_cache()
    netloc = query = fragment = ''
    i = url.find(':')
    if i > 0:
        if url[:i] == 'http': # optimize the common case
            scheme = url[:i].lower()
            url = url[i+1:]
            if url[:2] == '//':
                netloc, url = _splitnetloc(url, 2)

            [snip]

        else:
            scheme, url = url[:i].lower(), url[i+1:]

    [snip]

    return tuple

(Why do blogs always _INSIST_ on fucking up source code? But we're kind of on topic here, so maybe it fits.) Anyhow, we have a fancy caching scheme, but the parser itself consists of a bunch of if statements and url.split() calls. Talk about premature optimization. More than that, one would think that language implementors know a thing or two about parsers...
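
Stripped of the caching and the special cases, the decision that bites here boils down to roughly this (my paraphrase of the logic above, not code from the stdlib):

def guess_scheme(url):
    # whatever precedes the first ':' is promoted to the scheme, as long as
    # it consists only of letters, digits and the characters '+', '-', '.'
    scheme_chars = ('abcdefghijklmnopqrstuvwxyz'
                    'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
                    '0123456789+-.')
    i = url.find(':')
    if i > 0 and not [c for c in url[:i] if c not in scheme_chars]:
        return url[:i].lower()
    return ''

guess_scheme("http://foo.com/bar")    # -> 'http'
guess_scheme("foo.com:6221/bar")      # -> 'foo.com' (oops)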

Consider: the parser is written in such a way that the result is predictable if and only if the input string represents a valid URL. But how do you find out whether a string is indeed a URL? The answer is easy: you use a parser. In other words, the urlparse module is in most cases useless, because unless you have sufficient control over the input (unlikely for networking apps), the parse result is essentially undefined.
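
If you have to use it anyway, about the best you can do is refuse any parse result that doesn't look like what you expected, for instance by whitelisting schemes and insisting on a netloc. A minimal defensive sketch (my own workaround, nothing the module itself offers):

import urlparse

def parse_http_url(url, allowed_schemes=('http', 'https')):
    scheme, netloc, path, params, query, fragment = urlparse.urlparse(url)
    if scheme not in allowed_schemes or not netloc:
        raise ValueError("refusing to guess what %r means" % (url,))
    return scheme, netloc, path, params, query, fragment

parse_http_url("http://foo.com/bar")    # fine
parse_http_url("foo.com:6221/bar")      # raises ValueError instead of nonsense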

However, the urlparse module is not only "useless", it is in fact dangerous: by using it on untrusted input, the behaviour of your app is by implication also essentially undefined (how do you handle an undefined result?). Now consider a quick Google Code search for users of urlparse. I don't suppose that any of the following names rings a bell with you: Zope, Plone, twisted, Turbogears, mailman, django, chandler, bittorrent. Surely all of these software packages have carefully reviewed all of their uses of urlparse, and properly identified and handled all cases where an arbitrary result may be returned... Script kiddies, REJOICE!
