Deep Packet Inspection and the Human Element

Danny O'Brien

May 2009

Disclaimer: The opinions expressed in this document are those of the author(s) and do not necessarily reflect those of the Office of the Privacy Commissioner of Canada.

Note: This essay was contributed by the author to the Office of the Privacy Commissioner of Canada's Deep Packet Inspection Project.


The Internet is often portrayed as an impregnable fortress of free expression and privacy: a world in which the technology itself is designed to resist any intervention by third parties. In fact, the Internet’s infrastructure and functioning depend crucially on the behaviour of intermediaries, such as Internet service providers (ISPs). Challenging the existing norm that ISPs take no part in examining their customers’ traffic, as widespread adoption of deep packet inspection threatens to do, would profoundly weaken the already shaky protection of Internet users’ privacy.

Professor Larry Lessig writes in Code and Other Laws of Cyberspace (Footnote 1) that four forces regulate behaviour online: code, law, norms and markets. In the case of deep packet inspection and other forms of Internet surveillance, code is currently no impediment at all. Most Internet communications take place in “plain text”: unencrypted data that is as easy to read as a postcard sent through the postal system. These unprotected packets pass through dozens of computers, any of which could peer into their contents. Deep packet inspection is merely a matter of one machine diverting this flood of data and doing what computers do best: analyzing its contents.

A single command, run on a standard PC running Linux or Mac OS X with generally available software, can conduct “deep packet inspection” on everyone communicating over your local network and search every user’s traffic for a keyword:

# tcpdump -A -s0 -i eth0 | grep privacy
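
In that example, tcpdump prints the full contents of every packet crossing the machine’s network interface as plain ASCII (the -A and -s0 flags), with eth0 standing in for whatever interface happens to carry the traffic, and grep simply flags any line containing the word “privacy”. No special hardware, no exotic software: two standard utilities and a pipe.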

Can existing law defend users’ privacy? Many national laws provide strong protections for the privacy of communications — but in a world of plain-text traffic, enforcement of such laws is a constant challenge.

There is also a constant temptation to stretch, bend, or circumvent these rules. Apart from encrypted traffic, surveillance on the present Internet is not only easy but nigh undetectable. Reading email and web traffic requires no steamed-open envelopes. Often, the inspection of Internet traffic can be revealed only by human whistle-blowers like Mark Klein, a retired AT&T employee who provided details of a secret surveillance system installed in the telephone company’s facilities in San Francisco (Footnote 2).

Markets can provide incentives to protect customer privacy, but they can also reward prying. Many ISPs are now mulling the financial benefits that might come from various applications of deep packet inspection to their own customers’ communications. Companies like Phorm (Footnote 3) in the United Kingdom have proposed that ISPs scan the private traffic of their users to create marketing “profiles”, which can then be used to target advertising at them more precisely. Naturally, the more information that is collated on an Internet user, the more valuable that data becomes.
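
To give a rough sense of how little machinery such profiling requires, the one-line capture shown earlier can be pointed at the unencrypted web requests on a network to tally which sites its users visit. The pipeline below is only a sketch, with eth0 again standing in for the relevant interface, and bears no relation to Phorm’s actual system, but the principle is the same:

# tcpdump -l -A -s0 -i eth0 | grep -oE "Host: [A-Za-z0-9.-]+" | sort | uniq -c | sort -rn

Every unencrypted web request announces its destination in a plain-text “Host:” header; counting those headers produces a crude but saleable sketch of a subscriber’s interests.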

In practice, a remarkable part of the burden of discouraging mass surveillance online rests on ISPs’ internal cultural norms. Because the techniques are so simple, the data so valuable, and the extent of possible privacy violations unbounded, intermediaries are forced to draw a bright line for themselves to resist the temptation to investigate every packet that passes through them.

Unwritten norms like these are most effective when Internet surveillance involves human oversight. The more customers and ISP staff know about what is being watched, the more reluctant they are to conduct or condone such behaviour.

Ironically, the aspect of deep packet inspection that reassures many may also embody its most profound risk. Whether the context is Phorm’s ad targeting, dragnet government surveillance, or automated ISP filtering for particular content, the argument is often made that the surveillance is acceptable because “no humans see the intercepted data”: it is just a machine watching.

It may be easier to feel uneasy about a human being looking over one’s shoulder than about an appliance in a remote server room crunching out statistics. But to the extent that humans are taken out of the loop, it is harder to detect or report abuses, and harder still to resist “mission creep”. Without careful oversight, the subtlest and most apparently reasonable deep packet inspection can turn into a tool for widespread privacy violation with just a few more lines of code. The packets are there; the data is present; the machines are flexible. After all, if we scan all traffic for intellectual property infringement, should we not inspect all private data for signs of terrorist attacks, a far more pressing social threat? And if our automatic IP filters work so well without human intervention, perhaps we will be happy to run our “bad politics” filters with a similar lack of oversight?
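
To make that concern concrete: the hypothetical one-line capture shown earlier does not care what it is told to look for. Feed grep a file of patterns, the watchlist.txt below being invented purely for illustration, and the same pipeline that flags copyrighted file names today can flag political keywords tomorrow, with no new equipment and no human in the loop:

# tcpdump -l -A -s0 -i eth0 | grep -i -f watchlist.txt

Repurposing the filter means editing a text file, not rebuilding a system; that flexibility is exactly what makes mission creep so hard to resist.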

Much of what has protected our privacy online thus far is the ISP world’s thin cultural norm that your private communications really are private to you and those you address. If deep packet inspection replaces ISPs’ bright line of ignoring the data passing under their eyes, the Internet may truly become lawless: ineffective privacy laws, a culture of consequence-free surveillance within intermediaries, and an emergent marketplace in which private communications are sold to the highest bidder.
