a core router and rely on this router to have the information on how to reach all valid
destinations in the Internet. The core system also discarded any data packets with
no valid destination and thereby guaranteed that this traffic would not block valuable
network capacity.47
Within autonomous systems the consistency of routes is checked in much the
same way as with the core system explained above (central routers have information
on all possible destinations within the autonomous system). Between autonomous
systems, ISPs check that propagated routes are valid by comparing reachability
advertisements of other ISPs with information listed by so-called routing registries.
These registries contain information on which ISPs have been allocated which IP
address blocks (Comer, 2006: 266). Of course the data in these routing registries
need to be valid and up to date. When the NSFNET was privatized, the NSF awarded a contract for a Routing Arbiter project, which was to coordinate the exchange of routing information between the independent commercial operators. Merit Network was awarded this contract and has since played a leading role in Internet routing.
Today, Merit manages the Routing Assets Database (RADb), one of the most popular routing registries, used by network operators around the world to register their routes and to submit queries on routing problems.48 However, many other routing registries exist in parallel, so there is no central authority among them. Routing problems can and do occur when routing registries need time to recognize and repair inconsistencies.
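To make the registry check concrete, the sketch below, a minimal illustration assuming RADb's public whois interface at whois.radb.net, looks up the route objects registered for an address prefix; an operator could compare the origin AS listed there with the AS that actually announces the prefix in a BGP advertisement. The example prefix is chosen only for illustration.

```python
import socket

def query_registry(prefix, host="whois.radb.net", port=43):
    """Send a plain whois query for a prefix to a routing registry and
    return the raw reply (the registry answers with its registered
    'route' objects, including the origin AS for each)."""
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall((prefix + "\r\n").encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

if __name__ == "__main__":
    # Illustrative prefix; print only the lines relevant for checking
    # whether an advertisement matches the registered origin AS.
    reply = query_registry("193.0.0.0/21")
    for line in reply.splitlines():
        if line.startswith(("route:", "origin:", "source:")):
            print(line)
```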
3.4.5 Quality of service differentiation in Internet routing software
In the early years of the commercial Internet, Internet access services were generally
provided over narrowband local network infrastructure. The bandwidth offered by
narrowband communication lines was sufficient to support the standard applications
of that time, such as E-mail, file transfer, and remote login. Over time, technological improvements and investments in local telecommunications infrastructure have
increased the available bandwidth in the local infrastructure substantially. With the
advent of broadband infrastructure, more bandwidth-intensive Internet applications
have become available, such as online-gaming, video-on-demand, and voice-over-IP
(VoIP).
The transition to broadband in Internet access services has also had an impact on
the demands on network management and routing functions in Internet backbone
services. Historically, routing was programmed as a best-effort service. Packets
were treated equally in the backbone network, regardless of which application generated the packet flow. With the increasing use of bandwidth-intensive applications,
47 On the core system of the original Internet see Comer, 2006: 238ff.
48 Furthermore, Merit conducts projects aimed at improving the accuracy of the information in routing registries (Blunk and Karir, 2005).
the possibility of network congestion has increased in the backbone networks as well. However, adding to the available network capacity in the backbone is not necessarily economically efficient. A trade-off has to be made between the costs of additional network capacity and the expected gain to users from not experiencing congestion problems. In any case, the development towards more bandwidth-intensive usage may create the need for congestion management in the Internet.
Offering different service qualities at different prices can reveal the utility of more
reliable connection services to users. It can be expected that users of time-sensitive applications, such as VoIP and online gaming, are willing to pay for a better quality of service (QoS), whereas users of applications for which the delay in delivery is not as critical (E-mail and web-surfing)49 would rather benefit from lower prices.
Several technological developments of recent years can support QoS differentiation for congestion management purposes. These technologies are to date mostly
used for traffic management within the network of one service provider (or within
one AS). While the technology could also be used across network boundaries (between ASs), this is currently not practiced. Network providers have not agreed to quality of service differentiation across networks because of the difficulty of validating transmission quality in other networks.
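One widely used building block for such intra-network differentiation, shown here only as an illustrative sketch and not as a mechanism named in the text, is to mark packets with a DiffServ code point (DSCP) so that routers inside a single provider's network can queue them with priority. The DSCP value and the destination address below are assumed for illustration; whether routers honour the marking depends entirely on the operator's configuration.

```python
import socket

EF_DSCP = 46              # "Expedited Forwarding", commonly used for voice traffic
TOS_VALUE = EF_DSCP << 2  # the DSCP occupies the upper six bits of the former TOS byte

# Mark outgoing UDP datagrams of a (hypothetical) VoIP flow with the EF code point.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
sock.sendto(b"voip-payload", ("192.0.2.10", 5004))
sock.close()
```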
Quality degradation is generally a result of network overload: delay (packet delivery is slow because of long queues in routers or because packets are routed the long way around to avoid congestion), jitter (packets belonging to one message reach their destination with varying delays), and packet loss (packets are dropped when a router's buffer is full because of congestion) all occur when traffic congestion builds up. Network overload has not been a particularly notable problem in recent times, because network capacity generally exceeds bandwidth requirements.
This explains why VoIP services function well today even though QoS differentiation is not offered across network boundaries. However, with bandwidth-intensive
use increasing, the issue of differentiating between transmission qualities may become more important in the near future.
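The three symptoms just named can be made concrete with a small numerical sketch; the sequence numbers and timestamps below are hypothetical sample values, used only to show how delay, jitter, and packet loss would be computed from per-packet send and arrival times.

```python
# seq -> send time (s) and seq -> arrival time (s); packet 3 is lost in transit
sent = {1: 0.000, 2: 0.020, 3: 0.040, 4: 0.060, 5: 0.080}
received = {1: 0.055, 2: 0.090, 4: 0.105, 5: 0.140}

# Delay: one-way transit time per delivered packet
delays = {seq: received[seq] - sent[seq] for seq in received}

# Packet loss: share of packets that never arrived
loss_rate = 1 - len(received) / len(sent)

# Jitter: average variation of delay between consecutively received packets
ordered = [delays[seq] for seq in sorted(delays)]
jitter = sum(abs(b - a) for a, b in zip(ordered, ordered[1:])) / (len(ordered) - 1)

print(f"mean delay: {sum(delays.values()) / len(delays):.3f} s")
print(f"jitter:     {jitter:.3f} s")
print(f"loss rate:  {loss_rate:.0%}")
```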
Among the newer technologies for traffic management is “traffic shaping” software. This software is used on top of basic routing protocols to direct traffic according to quality demands (Knieps and Zenhäusern, 2008: 125). Routing according to
criteria such as economic efficiency, security, or service level requirements and agreements is thereby made possible. Efficiency demands in networks offering high-bandwidth applications have therefore led to a move away from the initial idea of packet-switched networks, which was to implement intelligence foremost at the edges of a network and not in its core. Today, routers can perform more than the simple forwarding functions of the ARPANET era. Routing intelligence has increased immensely within networks and can in principle also be employed across network boundaries.
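A classic building block behind such traffic shaping is the token bucket, which limits a flow to a configured rate while allowing short bursts. The sketch below illustrates that general technique under assumed parameters; it is not a description of any particular vendor's shaping software.

```python
import time

class TokenBucket:
    """Minimal token-bucket shaper: a packet may be sent only while enough
    tokens are available, which holds a flow to the configured average rate
    while permitting bursts up to the bucket size."""

    def __init__(self, rate_bytes_per_s, bucket_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = bucket_bytes
        self.tokens = bucket_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at the bucket size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True   # packet conforms to the configured rate
        return False      # packet exceeds the rate: queue or drop it

# Usage: shape a flow to roughly 1 Mbit/s with bursts of up to 15 kB.
shaper = TokenBucket(rate_bytes_per_s=125_000, bucket_bytes=15_000)
if shaper.allow(1_500):
    pass  # hand the packet to the outgoing interface
```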
49 Even video-on-demand is not a time-sensitive application: if the application starts the viewing with a few seconds' delay relative to the transmission of the data, these seconds can be used as a buffer against potential later delays in the transmission.
A further measure by which the quality of service can be improved is to increase
server capacity at several geographically dispersed locations within a network. The
ISP, or a specialized service provider, can offer content providers the possibility of
keeping copies of time-sensitive content at several so-called Internet Data Centers.
The content can then be delivered to an end-user in less time (Litan and Singer,
2007: 12). Such technologies, positioned above the basic IP layer but below the application layer, are called "overlay networks" (Clark et al., 2006: 4).
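A minimal sketch of the underlying idea, with placeholder host names and using TCP connection setup time as a rough proxy for network distance: when copies of the content are held at several dispersed data centers, the replica that responds fastest can serve a given user.

```python
import socket
import time

# Placeholder replica locations; in practice a redirection service or DNS
# would steer the user, but the principle of "pick the closest copy" is the same.
REPLICAS = ["replica-eu.example.net", "replica-us.example.net", "replica-asia.example.net"]

def measure_rtt(host, port=80, timeout=2.0):
    """Time a TCP connection setup as a crude estimate of network distance."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")   # unreachable replicas are never chosen

best = min(REPLICAS, key=measure_rtt)
print("serve content from:", best)
```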
There is currently a debate in the U.S. on whether the new possibilities for quality
of service differentiation threaten the “end-to-end principle” of the Internet, which is
seen as the fundamental pillar on which the success of the Internet is based. Opponents of quality of service differentiation technologies argue that Internet users, and
not ISPs, should be in control of the content that they can access on the Internet.
They call this principle "network neutrality" to indicate that all traffic should be treated equally, regardless of its source and destination. The proponents of network neutrality fear that if ISPs offer prioritized transportation services to content providers in exchange for elevated access charges, then only content providers with the necessary budget will be able to disseminate their ideas on the Internet. Opponents
of network neutrality regulation, on the other hand, argue that network neutrality
regulation could keep the Internet from a necessary transition to a “smarter” Internet
architecture that will be needed for continued improvements in Internet technology and services (Knieps and Zenhäusern, 2008: 132; Yoo, 2006: 1885). The net neutrality debate is discussed in more detail in section 9.3.
3.4.6 Internet standardization and current developments
Internet standardization is, of course, an ongoing process. Even though the first operational version of the Internet Protocol, IPv4, is still in use today, Internet standards are continuously advanced and adapted to changes in the underlying hardware and in the applications using these protocols. In this process, the original version of the TCP/IP protocol suite has proved very flexible. Nevertheless, a new (also open) version of the IP protocol has been developed. When this so-called IPv6 standard will be widely adopted is not yet known.50 Its advantages are a larger address space, more levels of address hierarchy, and new features that can support differentiated services (see Comer, 2006:
Chap. 31).
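The difference in address space can be illustrated with Python's standard ipaddress module; the /48 prefix below is a documentation prefix chosen only for illustration.

```python
import ipaddress

# IPv4 uses 32-bit addresses, IPv6 uses 128-bit addresses.
ipv4_total = 2 ** 32
ipv6_total = 2 ** 128
print(f"IPv4 addresses: {ipv4_total:.3e}")
print(f"IPv6 addresses: {ipv6_total:.3e}")

# Even a single /48 site assignment in IPv6 dwarfs the entire IPv4 address space.
site = ipaddress.ip_network("2001:db8::/48")
print(f"addresses in one /48 prefix: {site.num_addresses:.3e}")
print(f"ratio to the whole IPv4 space: {site.num_addresses / ipv4_total:.3e}")
```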
The organizations responsible for Internet standard development and dissemination are the Internet Engineering Task Force, the Internet Architecture Board, the
Internet Engineering Steering Group, and the Internet Research Task Force. These
organizations are all part of the Internet Society, founded in 1992. They are open to
50 Elixmann and Scanlan (2002: Chapter 8) give a detailed overview of the benefits of IPv6 over
IPv4 and discuss the reasons why the transition to IPv6 is taking so long.