What would you do differently with 10 gigabits per second (10Gbps) of data bandwidth to your home and office? If the innovators working on designs for the next-generation internet are successful, your dreams could come true. It is easy to take for granted the increasingly massive amount of data we send back and forth every day over the internet and the extraordinary growth in the number of people using it, but there is a limit to how much our existing network infrastructure can take — and we’re quickly reaching it. Fortunately, a group of university and corporate researchers, backed by US government grants and industry funding, is hard at work designing what could be called Internet 2.0 to address the current internet’s growing needs for speed, scale, and security.
Making the internet smarter
In the same way that we talk about a “smart grid” for electricity, one avenue of internet research involves making our current network infrastructure smarter about the data it carries. The decades-old layered architecture that underlies almost all networks — including the internet — deliberately restricts interactions between those layers. This isolates applications from the network they use, making it possible to invent and deploy new transports and even new physical networks without rewriting applications. That separation of layers has helped the internet family of protocols spread like wildfire to new types of physical devices like satellites and cell phones, but it also limits how intelligently the network can optimize the data it carries. Without expensive, dedicated caching solutions, for example, the current internet doesn’t know that a million identical copies of a new hit single are being downloaded and that it could send a single copy along to all million users.
Keren Bergman at Columbia has begun to tackle exactly these issues by creating cross-layer protocols, so that the physical layers of the network can provide feedback to applications — allowing them to optimize their use of the network based on actual conditions, much as real-time traffic monitoring has become an essential element of GPS navigation software. Bergman’s vision of a smarter internet, like a smarter power grid, can optimize the bandwidth already in place, but it is still limited by the overall capacity of the network — making it an important, but not sufficient, solution to our growing need for bandwidth.
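To make the idea concrete, here is a toy sketch of the kind of cross-layer feedback loop described above: an application picks its video bitrate from conditions reported by lower layers. The PhyReport fields, thresholds, and bitrates are invented for illustration; they are not part of Bergman’s actual protocols.

```python
from dataclasses import dataclass

@dataclass
class PhyReport:
    """Hypothetical report surfaced by the physical layer to an application."""
    snr_db: float      # signal-to-noise ratio on the link, in decibels
    loss_rate: float   # fraction of frames lost on the link

def choose_bitrate_kbps(report: PhyReport) -> int:
    """Pick a video bitrate appropriate to current link conditions.

    Thresholds are invented for illustration only.
    """
    if report.loss_rate > 0.05 or report.snr_db < 10:
        return 500      # poor link: fall back to a low-bandwidth stream
    if report.snr_db < 20:
        return 2_000    # moderate link: standard definition
    return 8_000        # clean link: full HD stream

print(choose_bitrate_kbps(PhyReport(snr_db=25, loss_rate=0.001)))  # 8000
print(choose_bitrate_kbps(PhyReport(snr_db=8, loss_rate=0.1)))     # 500
```

In a strictly layered stack, the application never sees anything like PhyReport; cross-layer designs deliberately open that channel.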
Making the internet faster
To recreate electrical circuits with optical components, many of the basic building blocks of electronics have to be reinvented for optics. For example, within the past year researchers at MIT developed a way to create the optical equivalent of an electrical diode — a device that allows information to flow in only one direction — while a team at the University of Arizona came up with a method for restoring degraded optical signals. Meanwhile, teams from Caltech and Canada managed to transmit 186Gbps over a 134-mile-long optical network.
The next internet: A quest for speed
Fixing the protocols: FAST TCP & OpenFlow
Faster pipes are still only part of the solution. The internet’s protocols were developed decades ago, for speeds far slower than those needed today, so redesigning them to cope with the planned increase in bandwidth is also a major research area. Even the venerable TCP protocol is coming under fire. Caltech professor Steven Low explains that TCP’s simplistic assumption that failed packets are a result of congestion — and its response of slowing down the sending device — doesn’t fit well with today’s multi-modal network, where the failure could be due to momentary interference with a mobile phone or other wireless signal. He and his colleagues developed FAST TCP, which monitors and reacts to the average delay of packets instead of individual packet failures. In the case where some packets are being lost but the average delay is small, FAST TCP will actually speed up the sender to increase throughput, instead of slowing it down.
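In simplified form, the periodic window update published by Low and his colleagues moves the congestion window toward an equilibrium that keeps roughly alpha packets queued in the network, rather than halving the window on every loss. A sketch, with parameter values chosen only for illustration:

```python
def fast_tcp_window(w: float, base_rtt: float, rtt: float,
                    alpha: float = 100.0, gamma: float = 0.5) -> float:
    """One FAST TCP congestion-window update (simplified).

    w        -- current window, in packets
    base_rtt -- minimum RTT observed (propagation delay), seconds
    rtt      -- current average RTT, seconds
    alpha    -- target number of packets queued in the network
    gamma    -- smoothing factor in (0, 1]

    When rtt is near base_rtt (little queueing), the window grows;
    as queueing delay builds, growth stops, independent of loss.
    """
    target = (base_rtt / rtt) * w + alpha
    return min(2 * w, (1 - gamma) * w + gamma * target)

w = 1000.0
# Uncongested path (RTT equals base RTT): the window grows.
w1 = fast_tcp_window(w, base_rtt=0.050, rtt=0.050)   # -> 1050.0
# Queueing delay has doubled the RTT: the window backs off smoothly.
w2 = fast_tcp_window(w, base_rtt=0.050, rtt=0.100)   # -> 800.0
```

The min(2 * w, ...) cap keeps the window from more than doubling in a single update, which matters when starting from a small window.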
FAST TCP helped Low and his colleagues set an internet speed record of over 100Gbps in a series of tests between 2003 and 2006, a record that has only been slightly bettered since. Startup FASTSOFT is working to commercialize the speedups FAST TCP makes possible.
Internet on steroids: Internet2
Internet2's OS3E project uses a unique underlying network architecture called OpenFlow — code developed by researchers at Stanford and other universities that allows routers to be flexibly reprogrammed in software — to allow researchers around the globe to prototype and test new protocols directly on top of existing networks.
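The essence of OpenFlow is that forwarding is driven by a software-managed table of match-action rules rather than fixed hardware logic. The toy flow table below illustrates the idea; the field names and actions are simplified stand-ins for the real protocol's much richer header matching:

```python
def matches(rule_match: dict, packet: dict) -> bool:
    """A rule matches if every field it specifies equals the packet's field.

    An empty match dict would act as a wildcard rule.
    """
    return all(packet.get(field) == value for field, value in rule_match.items())

# Flow table: checked in order, first match wins. A controller program
# can install, modify, or remove these rules at runtime.
flow_table = [
    ({"ip_dst": "10.0.0.5"}, "forward:port2"),   # steer one host's traffic
    ({"tcp_dst": 80}, "forward:port1"),          # web traffic out port 1
]

def handle(packet: dict) -> str:
    for rule_match, action in flow_table:
        if matches(rule_match, packet):
            return action
    # Table miss: punt to the controller, which decides and can install
    # a new rule -- the key mechanism that makes the network programmable.
    return "send-to-controller"

print(handle({"ip_dst": "10.0.0.5", "tcp_dst": 22}))  # forward:port2
print(handle({"ip_dst": "10.0.0.9", "tcp_dst": 80}))  # forward:port1
print(handle({"ip_dst": "10.0.0.9", "udp_dst": 53}))  # send-to-controller
```

Because the rules live in software, a researcher can trial a brand-new routing scheme on a production switch without touching its hardware, which is exactly what OS3E offers.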
Fixing routing: IPv6
One key component of the future internet is already in use today. IPv6 is helping address the near-critical shortage of addresses in the older IPv4 addressing system. While the more than four billion addresses available under IPv4's 32-bit scheme must have seemed impossibly large to the pioneers of the internet, the proliferation of smartphones as well as IP-addressable consumer and industrial devices has nearly used them up. IPv6, by contrast, offers 128 bits of addressing: about 340 undecillion, or 3.4 x 10^38, addresses. Possibly enough for quite a few planets full of people, robots, and smart appliances.
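The arithmetic behind those numbers is easy to check:

```python
# The address-space jump from IPv4 (32-bit) to IPv6 (128-bit), computed directly.
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(f"{ipv4_addresses:,}")     # 4,294,967,296 -- "over four billion"
print(f"{ipv6_addresses:.1e}")   # 3.4e+38 -- about 340 undecillion
# Every single IPv4 address could be replaced by an entire 2**96-address block:
print(ipv6_addresses // ipv4_addresses == 2 ** 96)  # True
```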
Less discussed are some of IPv6's other innovations. It has a much-improved multicast capability, which could allow far more efficient large-scale broadcasts of popular events like concerts or even TV shows over IP. IPv6's support for stateless address autoconfiguration using ICMPv6 (Internet Control Message Protocol version 6) may help make common DHCP issues a thing of the past. In a nifty twist, entire subnets can be moved without being renumbered, and mobile device addressing is also improved. Individual IPv6 packets (so-called jumbograms) can be as large as four gigabytes — nearly enough for an entire single-layer DVD — although the use case for such large single packets is likely to be limited to high-speed backbones, at least for now.
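As an illustration of stateless autoconfiguration, here is a sketch of the modified EUI-64 scheme (from RFC 4291) that a host can use to derive an IPv6 interface ID from its own MAC address, with no DHCP server involved. (Modern systems often prefer randomized "privacy" interface IDs instead, precisely because this derivation exposes the hardware address.)

```python
def eui64_interface_id(mac: str) -> str:
    """Derive a modified EUI-64 interface ID from a colon-separated MAC address.

    The scheme: flip the universal/local bit of the first octet, then
    insert the bytes ff:fe between the two halves of the 48-bit MAC.
    """
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02                          # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]
    # Group the 8 bytes into four 16-bit hex fields, as written in addresses.
    groups = [f"{(eui[i] << 8) | eui[i + 1]:x}" for i in range(0, 8, 2)]
    return ":".join(groups)

# Combined with a router-advertised prefix (learned via ICMPv6), the host
# forms a complete address by itself:
iid = eui64_interface_id("00:1a:2b:3c:4d:5e")
print(f"fe80::{iid}")   # fe80::21a:2bff:fe3c:4d5e
```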
Testing the next internet
When can I get one?
Despite the name, Internet2 isn’t really an entirely new network — nor will it ever completely replace the internet we use today. Instead, the results of the research on Internet2, and the technologies developed to support it, will be rolled out over and alongside the current internet — much as IPv6 is being rolled out in phases to replace IPv4. As demand for applications like digital telepresence and virtual libraries continues to grow, they’ll first be deployed over the current Internet2 to its members, then over time spread to the larger internet community. No doubt the growing need for high-performance multiplayer gaming and HD movie streaming will be equally important in driving the deployment of the new, more capable network technologies being prototyped on Internet2.