A further measure by which the quality of service can be improved is to increase
server capacity at several geographically dispersed locations within a network. The
ISP, or a specialized service provider, can offer content providers the possibility of
keeping copies of time-sensitive content at several so-called Internet Data Centers.
The content can then be delivered to an end-user in less time (Litan and Singer,
2007: 12). Such technologies positioned above the basic IP layer but below the
applications layer are called “overlay networks” (Clark et al., 2006: 4).
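The benefit of replicating content at dispersed data centers can be sketched in a few lines of Python. This is a minimal illustration only; the server locations and latency figures are hypothetical, not drawn from the sources cited above:

```python
# Minimal sketch of replica selection in a content-delivery overlay.
# All locations and latency values are hypothetical examples.

def nearest_replica(replicas):
    """Return the replica with the lowest measured latency to the user."""
    return min(replicas, key=lambda r: r["latency_ms"])

# Hypothetical copies of the same content held at three data centers.
replicas = [
    {"location": "Frankfurt", "latency_ms": 12},
    {"location": "New York", "latency_ms": 95},
    {"location": "Tokyo", "latency_ms": 210},
]

best = nearest_replica(replicas)
print(best["location"])  # prints "Frankfurt", the lowest-latency copy
```

The point of the overlay is precisely this choice: because a copy of the content already sits close to the end-user, the request never has to traverse the long-haul path to the origin server.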
There is currently a debate in the U.S. on whether the new possibilities for quality-of-service differentiation threaten the “end-to-end principle” of the Internet, which is seen as the fundamental pillar on which the success of the Internet rests. Opponents of quality-of-service differentiation technologies argue that Internet users, and not ISPs, should be in control of the content that they can access on the Internet. They call this principle “network neutrality” to indicate that all traffic should be treated equally, independently of its source and its destination. The proponents of network neutrality fear that, if ISPs offer prioritized transportation services to content providers in exchange for elevated access charges, only content providers with the relevant budget will be able to disseminate their ideas on the Internet. Opponents of network neutrality regulation, on the other hand, argue that such regulation could keep the Internet from a necessary transition to a “smarter” Internet architecture, which will be needed for continued improvements in Internet technology and services (Knieps and Zenhäusern, 2008: 132; Yoo, 2006: 1885). The net neutrality debate is discussed in more detail in section 9.3.
3.4.6 Internet standardization and current developments
Internet standardization is, of course, an ongoing process. Even though the first operational version of the Internet protocol, IPv4, is still in use today, Internet standards are continuously advanced and adapted to changes in the underlying hardware and in the applications using these protocols. In this process, the original version of the TCP/IP protocol has proved very flexible. Nevertheless, a new (also open) version of the IP protocol has been developed. When this so-called IPv6 standard will be widely adopted is not yet known.50 Its advantages are a larger address space, more levels of address hierarchy, and new features that can support differentiated services (see Comer, 2006: Chap. 31).
The organizations responsible for Internet standards development and dissemination are the Internet Engineering Task Force, the Internet Architecture Board, the Internet Engineering Steering Group, and the Internet Research Task Force. These organizations are all part of the Internet Society, founded in 1992. They are open to professional and individual membership. These organizations develop standards in open working groups and rely on consensus building for the final specifications.51

50 Elixmann and Scanlan (2002: Chapter 8) give a detailed overview of the benefits of IPv6 over IPv4 and discuss the reasons why the transition to IPv6 is taking so long.
3.5 History and development of interconnection agreements
The earliest computer networks, such as the ARPANET, were government-funded. Users at ARPANET sites covered only the telephone charges for dial-up connections to the network, but paid no fees toward the network costs. University computer networks received seed funding from the NSF, but had the obligation to become self-supporting within a reasonable time frame. Organizations participating in university networks therefore paid usage charges and membership fees, with industrial users paying a multiple of what academics paid (Jennings et al., 1986: 946). However, interconnection (the reciprocal carrying of each other's traffic) between early computer networks, all operating on a not-for-profit basis, was generally free of charge.
With the advent of commercial Internet access services, the general tradition of
free interconnection between independent networks began to change. ISPs started
making free traffic exchange contingent on specific requirements that needed to be
fulfilled by the interconnection partner. When these requirements were not fulfilled,
one interconnection party paid the other party for transit services. The following
section illustrates the type of commercial interconnection agreements typical today.
3.5.1 Terms and conditions of Internet interconnection today
As mentioned above, interconnection often takes place at multilateral interconnection points where several ISPs meet and can realize physical network interconnections with a number of partners. In the early commercial era of the Internet, most network interconnections were realized at the NSF-designated Network Access Points (NAPs). Relatively quickly, however, the quality of interconnection at these public NAPs suffered from the exponential growth of Internet traffic (European Commission, 1998: 7). Private multilateral interconnection points, so-called Commercial Internet Exchanges (CIXs), were therefore added to the Internet infrastructure.
An important benefit of Internet Exchanges is the fact that many interconnections can be entered into at one geographic location, so that the costs of realizing physical interconnections with several networks are significantly reduced (a full bilateral mesh of n networks would otherwise require n(n-1)/2 separate physical circuits, e.g. 45 circuits for only ten networks). Interconnection at these exchanges is still subject to bilateral contracts between the connecting parties. Operators of Internet exchanges can, however, assist in the interconnection by
offering a standard contract that can be used by members that enter an interconnec-
51 See www.isoc.org for details on the organization of the Internet Society and on the standardization process; site last visited on Feb. 15, 2008.
Abstract

The convergence of the network technologies underlying the Internet, telecommunications, and cable television will fundamentally change the regulation of these markets. In so-called Next Generation Networks, voice and television content are also transported over the Internet's IP technology. Using the methods of applied microeconomics, this study examines whether ex-ante sector-specific regulation of the markets for Internet services is justified on competition-economics grounds. The analysis focuses on the economies of scale and scope that arise in building network infrastructures, as well as on the network externalities that play an important role in the Internet. The author concludes that the core markets of Internet service providers contain no monopolistic bottlenecks that would make sector-specific regulation necessary. Workable competition between ISPs does, however, presuppose regulated, non-discriminatory access to the remaining monopolistic bottlenecks in the upstream market for local network infrastructure. The study identifies the necessary scope of regulation at the Internet's periphery and compares it with current regulatory practice in the telecommunications markets of the United States and Europe. It is addressed both to practitioners (network operators, regulators, and competition authorities) and to academics.