Peer-to-peer (P2P) filtering products are supposed to provide two functions critical to ensuring the continued successful expansion of the Internet.
First, they provide Internet service providers (ISPs) with a way to take back control of their networks by preventing residential users from gobbling up vast amounts of capacity with bandwidth-hogging P2P applications. Such apps often saturate an entire multi-megabit DSL connection continuously, whereas most ISPs design their networks on the assumption that an average user will need as little as 100 kbit/s.
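The math behind ISPs' alarm is simple. A minimal sketch of the oversubscription arithmetic, using illustrative figures (the 4 Mbit/s line speed is an assumption; the 100 kbit/s planning rate is the figure cited above):

```python
# Back-of-the-envelope oversubscription math. These figures are
# illustrative: the per-user planning rate comes from the article,
# the DSL line speed is an assumed example.

PLANNED_PER_USER_KBITS = 100   # capacity an ISP budgets per average user
DSL_LINE_KBITS = 4_000         # a hypothetical 4 Mbit/s DSL line

# One P2P user saturating a DSL line consumes the capacity budget
# of this many "average" users:
users_displaced = DSL_LINE_KBITS / PLANNED_PER_USER_KBITS
print(users_displaced)  # 40.0
```

In other words, under these assumptions a single always-on file sharer eats the provisioned capacity of 40 typical subscribers.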
Second, they give the music industry a powerful new weapon in its fight to prevent consumers from using P2P tools to illegally exchange copyrighted video and music content over the Internet. Entertainment mega-corporations see winning this battle as critical to stanching their hemorrhaging revenues.
So the two groups have different needs: ISPs primarily care about network capacity; entertainment companies would like to prevent the exchange of copyrighted content.
But the solution for both is supposed to be the same: a new generation of P2P filters (also sometimes known as deep packet inspection, or DPI, devices) that ISPs and other telecom operators install in their networks to control and reduce the volume of P2P file-sharing traffic. More than two dozen vendors now claim to have products that fit this bill – but are they up to the job?
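Conceptually, the core of such a filter is signature-based classification: inspecting packet payloads for byte patterns that betray a particular protocol, then throttling or dropping the matching flows. A minimal sketch of the idea (the signature set below is an illustrative stand-in, not any vendor's actual database):

```python
# Minimal sketch of signature-based P2P classification, the core idea
# behind DPI-style P2P filters. The patterns below are illustrative
# stand-ins, not any vendor's real signature set.

SIGNATURES = {
    b"\x13BitTorrent protocol": "BitTorrent",  # BitTorrent handshake prefix
    b"GNUTELLA CONNECT": "Gnutella",           # Gnutella connection handshake
}

def classify(payload: bytes) -> str:
    """Return the protocol name if a known signature appears, else 'unknown'."""
    for pattern, proto in SIGNATURES.items():
        if pattern in payload:
            return proto
    return "unknown"

# A filter would then rate-limit or drop flows classified as P2P.
print(classify(b"\x13BitTorrent protocol" + b"\x00" * 8))  # BitTorrent
print(classify(b"GET /index.html HTTP/1.1\r\n"))           # unknown
```

Real carrier-grade devices must do this across millions of concurrent flows, and must also handle encrypted and obfuscated P2P traffic that defeats naive pattern matching – which is exactly where the test results below become interesting.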
To find out, Internet Evolution and SNEP (the Syndicat National de l'Édition Phonographique, an organization that represents the interests of the French music industry) commissioned an independent test lab, the European Advanced Networking Test Center AG (EANTC), to test the functionality and performance of P2P filters. The focus of the test was on large-scale devices, so-called "carrier-grade" systems – ones designed to filter vast amounts of peer-to-peer traffic on the Internet, in real time.
The results for the products tested were interesting. But perhaps even more significant was the number of companies that declined to have anything to do with the test, or withdrew their products after starting the test process.
EANTC invited 28 vendors of P2P filtering products to participate in the evaluation. The group included all of the established players and market leaders – Allot Communications Ltd. (Nasdaq: ALLT), Cisco Systems Inc. (Nasdaq: CSCO), Ellacoya Networks (recently acquired by Arbor Networks Inc.), F5 Networks Inc. (Nasdaq: FFIV), Huawei Technologies Co. Ltd., Narus Inc., and Sandvine Inc. (London: SAND; Toronto: SVC) – as well as a host of lesser-known startups. One invitee, Packeteer Inc. (Nasdaq: PKTR), did not respond to the invitation. Another, Juniper Networks Inc. (NYSE: JNPR), responded that it did not sell a specialized P2P filter appropriate for the test – which is surprising and noteworthy in and of itself.
The test campaign took more than six months, from April to October 2007, and participation was free – EANTC, Internet Evolution, and SNEP footed the bill. Various ground rules were established to ensure both a fair playing field for all the participants and the best possible results from the equipment being evaluated. For example, vendors were encouraged to have their own product engineers present to set up their equipment, monitor the test process, and tune their devices as necessary.
Nevertheless, of the 28 vendors invited, only five agreed to take part – and only on the condition that if they didn't like their results, they could withdraw from the test and be left out of this report. In the event, three vendors chose to exercise their right of veto because their results were – ummm... how to put this? – "not perfect" for various reasons.
Only two vendors were brave enough to agree to publication: Arbor/Ellacoya, based in the U.S., and ipoque GmbH, a German vendor. Both the Arbor/Ellacoya E30 and the ipoque PRX-5G showed excellent performance and very good P2P detection and regulation capabilities. However, neither turned in perfect detection across the whole range of more and less popular P2P protocols, so there is still room for improvement.
But what does the absence of other vendors say about the state of the P2P filtering market? Nothing good. It's quite clear that most vendors are still in an early phase of product deployment, and that their products' limited scale and functions have a long way to go before they catch up with the marketing and sales materials that their manufacturers are using to describe them.
OK, so many of them were probably scared off by the substantial scale and rigorous demands our test bed would put on their wares. Unfortunately for them, the test was designed precisely to emulate current conditions on telecom networks, as well as those that ISPs will face in the near future.
With two load generators and analyzers from Shenick Network Systems Ltd. and Ixia (Nasdaq: XXIA), seven MPLS/IP routers (mainly Cisco devices), and a total maximum load of 7 million application sessions, we emulated a realistic mix of Web applications and a diverse set of peer-to-peer applications. The aggregated traffic model enabled us to assess the filtering performance typically required in large Internet service provider networks.
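To give a flavor of how such an aggregated traffic model is parameterized, here is a minimal sketch: a weighted application mix translated into absolute session counts. The mix fractions are illustrative assumptions, not EANTC's actual model; only the 7 million session total comes from the test itself.

```python
# Sketch of parameterizing an aggregated traffic model as a weighted
# mix of application sessions. The mix fractions are illustrative
# assumptions; only the 7 million total reflects the actual test load.

TRAFFIC_MIX = {          # assumed fraction of total sessions per app type
    "http": 0.35,
    "bittorrent": 0.30,
    "edonkey": 0.15,
    "gnutella": 0.10,
    "other_p2p": 0.10,
}

TOTAL_SESSIONS = 7_000_000   # maximum load used in the test

def sessions_per_app(mix, total):
    """Translate the fractional mix into absolute session counts."""
    return {app: round(total * share) for app, share in mix.items()}

counts = sessions_per_app(TRAFFIC_MIX, TOTAL_SESSIONS)
print(counts["bittorrent"])  # 2100000
```

A load generator would then open each application's sessions with realistic protocol behavior, so the device under test sees traffic resembling a real ISP aggregation link rather than a synthetic single-protocol stream.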
Based on the response to Internet Evolution's ground-breaking test of P2P filters, both ISPs and the music industry will have to wait a while before the power tools they need to beat back bandwidth hogs or stymie copyright violators are widely available. For now, the advantage lies with the P2P perps who want all-you-can-eat capacity and easy access to pirated materials.
Contents of the Report:
Next Page: P2P: The Evil Internet Application?