Deep Packet Inspection is Essential for Net Neutrality


Anil Somayaji

March 2009

Disclaimer: The opinions expressed in this document are those of the author(s) and do not necessarily reflect those of the Office of the Privacy Commissioner of Canada.

Note: This essay was contributed by the author to the Office of the Privacy Commissioner of Canada's Deep Packet Inspection Project.


The issues of deep packet inspection and network neutrality can only be understood with reference to the history of the Internet. That history is quite remarkable: originally a set of technologies designed to connect dozens of institutions in the early 1970s, the Internet now supports the communication needs of hundreds of millions of people. It has scaled, and continues to scale, because engineers across the planet continuously study and improve its underlying technology. It is virtually a truism amongst network engineers that, given the chance, they would have built the Internet differently. They are never given that chance, however, and so they have largely focused on making the Internet we have satisfy all the demands we place upon it. The Internet should therefore not be seen as a static system that is finished in any sense; it is an ongoing experiment in open communications.

As originally conceived, the Internet experiment was based upon a central insight: networks are fastest and most efficient when they are dumbest. Reliability, integrity, confidentiality—these could all be provided by the endpoints using custom communication protocols; adding them to the network itself made it more complicated and consumed precious resources. The Internet was designed simply to transmit data on a “best effort” basis. No attention was given to ideas like quality of service; indeed, the network’s original designers were happy when the network worked at all.

Today, we increasingly expect, and often need, the Internet to work. To a remarkable extent the Internet fulfills its promise. While technical glitches do sometimes disrupt communications, network administrators and engineers must more often contend with overuse and abuse of network resources: spam floods, denial-of-service attacks, peer-to-peer sharing of video files, flash crowds—these are the real threats. The endpoints—the computers attached to the Internet and their users—are not up to the task of stopping these threats. Thus, Internet service providers have stepped into the breach, doing the best that they can. In other words, the Internet has had to become “smart” in order to arbitrate the myriad uses—legitimate and illegitimate—to which it is put.

With earlier technology such intervention would have been impossible: routers that were fast enough for the Internet didn’t have the resources to examine traffic as it went by. Technology has progressed, however, to the point that network providers can observe and manipulate traffic in a variety of ways. They now need this power to keep their networks running: if they can’t isolate an overly aggressive file sharing program or a spam-relaying botnet, their networks become useless for regular users. This same power has turned out to be a Pandora’s box for network providers: if they discriminate against abuse of network resources, why not also discriminate against other forms of unwanted communication—child pornography, hate speech, copyright violations…the list is potentially endless.

Network neutrality is the principle that network providers should not discriminate against certain kinds of traffic. Ideally this would mean a return to the Internet of old, which passed data along blindly; such a simplistic approach today, however, would quickly result in the collapse of the Internet. Network neutrality advocates realize this, so they make exceptions for providers that monitor and change traffic in order to maintain their networks. The problem comes, though, when these exceptions are codified: allow too much flexibility and providers have the power to be despots on their networks; make the exceptions too rigid and providers will be prevented from adapting to the next problematic use of the Internet.

Note that any technology for managing Internet traffic today will have to employ deep packet inspection (DPI). DPI is essential because IP packet headers (the “outer” parts of network data that carry its addressing information) are no longer sufficient for making traffic engineering decisions. Virtually all new applications attempt to make their communications resemble web traffic, so as to traverse the numerous impediments (firewalls) that currently exist. How can a provider tell that a given stream of “web” traffic is really a file sharing program or a self-replicating worm? Its only real option is to look past the outer headers and into the IP and TCP packet payloads. This is exactly what deep packet inspection is.
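To make the distinction concrete, the sketch below (a minimal Python illustration, not any provider's actual tooling) shows why header-only inspection falls short: two packets bound for TCP port 80 carry identical addressing information, and only reading the payload reveals whether the stream is ordinary web traffic or, say, a BitTorrent handshake. The function name and the particular payload signatures checked are assumptions chosen for the example.

```python
import struct

def classify(packet: bytes) -> str:
    """Classify a raw IPv4/TCP packet, first by headers, then by payload."""
    # "Shallow" inspection: IP and TCP headers only.
    ihl = (packet[0] & 0x0F) * 4          # IPv4 header length in bytes
    if packet[9] != 6:                    # protocol field: 6 means TCP
        return "non-TCP"
    tcp = packet[ihl:]
    dst_port = struct.unpack("!HH", tcp[:4])[1]
    data_offset = (tcp[12] >> 4) * 4      # TCP header length in bytes
    payload = tcp[data_offset:]

    # Header-only view: anything sent to port 80 "looks like" web traffic.
    if dst_port != 80:
        return "other"

    # Deep packet inspection: the payload tells the real story.
    if payload.startswith((b"GET ", b"POST ", b"HEAD ", b"HTTP/")):
        return "web"
    if payload[:20] == b"\x13BitTorrent protocol":
        return "file sharing disguised as web traffic"
    return "unrecognized traffic on port 80"
```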

A wrinkle in this debate has been the heavy-handedness with which some providers have regulated their networks. For example, some have heavily discriminated against or outright banned certain popular applications (e.g., BitTorrent), even though such technology has many legitimate uses and users. While some of this is due to poor policies on the part of network providers, much of it is simply a function of current technology: we simply do not know how to make the network punish miscreants on its own. Thus, network administrators must painstakingly identify every dangerous use of the network and then craft rules with which to stop such uses. Inevitably those rules cause collateral damage, even when they are implemented in good faith.

The technology is improving every day, however, as it has for the entire life of the Internet. Internet providers are obtaining more and more precise mechanisms with which to monitor and regulate their networks. Indeed, the tools have developed to the point that there is much room for abuse—hence all of the concerns regarding network neutrality.

It is essential for technologists to have the flexibility to develop, test, and deploy new ways to protect the Internet. These mechanisms will, by and large, be based upon deep packet inspection, simply because that’s where the necessary information is—block DPI and we won’t be able to keep the Internet running. However, if we wish to prevent the abuse of these technologies, we need to develop guidelines for their use and incentives for the development of appropriate technologies.

If we wish to preserve privacy, we need rules on what data is stored and exposed to network administrators. To preserve fairness, we need rules restricting how network traffic can be manipulated. To handle the inevitable evolution of Internet uses and abuses, though, such rules should be crafted with a strong focus on intentionality. Network providers need to be given a great deal of flexibility; they need only show that they are acting in good faith.

Of course, deciding what “good faith” means can be very hard. Perhaps what is needed is a set of industry-standard “best practices” for addressing different traffic engineering problems: standard methodologies for managing traffic. So long as network providers are basically adhering to such a methodology, they can argue that they are acting in good faith.

To make such methodologies really work, however, we need technologies that make these “good faith” decisions easy. Rather than a human deciding what traffic should be throttled or blocked, we need programs—algorithms—for identifying problematic traffic patterns, and safe mechanisms for automatically managing such problems. The more this process can be automated, the more likely we are to get systems that are fair, privacy-preserving, and resilient in the face of abuse. No such system will be perfect, so humans will always be needed to monitor our networks. However, if they rarely have to create traffic management rules by hand, there will be significantly fewer opportunities for abuse.
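As a rough sketch of the kind of automation described above (the class, thresholds, and method names below are invented for illustration; no such standard system exists), a traffic manager can act on aggregate behaviour alone: it counts bytes per host, applies a temporary throttle when a published limit is exceeded, and lets that throttle expire automatically, so no human ever inspects content or hand-writes a per-application rule.

```python
import time
from collections import defaultdict

BYTES_PER_MINUTE_LIMIT = 500_000_000   # policy threshold (assumed value)
THROTTLE_SECONDS = 300                 # throttles expire on their own

class AutomatedTrafficManager:
    """Flags and throttles hosts based only on aggregate usage."""

    def __init__(self):
        self.usage = defaultdict(int)   # host -> bytes seen this minute
        self.throttled_until = {}       # host -> throttle expiry timestamp

    def record(self, host: str, nbytes: int) -> None:
        """Count traffic per host; payloads and identities are never stored."""
        self.usage[host] += nbytes
        if self.usage[host] > BYTES_PER_MINUTE_LIMIT:
            # The decision is made by published policy, not by a person.
            self.throttled_until[host] = time.time() + THROTTLE_SECONDS

    def is_throttled(self, host: str) -> bool:
        """Throttles are temporary, so ordinary use recovers automatically."""
        return time.time() < self.throttled_until.get(host, 0)

    def end_of_minute(self) -> None:
        """Reset the per-minute counters; only aggregate counts were kept."""
        self.usage.clear()
```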

A key role for government in solving this problem lies in giving network providers incentives to develop automated traffic management systems that keep humans at a distance from judgments about what data should and should not be carried by the network. Even though such technology does not currently exist, with the right incentives I am sure the right technologies can be developed. (Indeed, the development of such technology is one of my research goals.) Without those incentives, however, network providers will continue to use manual, ad hoc methods for managing traffic, because that is the path of least resistance and the one that gives them the most power and flexibility. Government should step into this debate to help network providers balance their needs with those of their customers and society at large.
