
The Internet Is Broken. The Net’s basic flaws cost firms billions, impede innovation, and threaten national security.

It’s time for a clean-slate approach, says MIT’s David D. Clark.

This article — the cover story in Technology Review’s December 2005/January 2006 print issue — has been divided into three parts for presentation online. This is part 1; part 2 will appear on Tuesday, December 20 and part 3 on Wednesday, December 21.

In his office within the gleaming-stainless-steel and orange-brick jumble of MIT’s Stata Center, Internet elder statesman and onetime chief protocol architect David D. Clark prints out an old PowerPoint talk. Dated July 1992, it ranges over technical issues like domain naming and scalability. But in one slide, Clark points to the Internet’s dark side: its lack of built-in security.

In others, he observes that sometimes the worst disasters are caused not by sudden events but by slow, incremental processes — and that humans are good at ignoring problems. "Things get worse slowly. People adjust," Clark noted in his presentation. "The problem is assigning the correct degree of fear to distant elephants."

Today, Clark believes the elephants are upon us. Yes, the Internet has wrought wonders: e-commerce has flourished, and e-mail has become a ubiquitous means of communication. Almost one billion people now use the Internet, and critical industries like banking increasingly rely on it.

At the same time, the Internet’s shortcomings have resulted in plunging security and a decreased ability to accommodate new technologies. "We are at an inflection point, a revolution point," Clark now argues. And he delivers a strikingly pessimistic assessment of where the Internet will end up without dramatic intervention. "We might just be at the point where the utility of the Internet stalls — and perhaps turns downward."

Indeed, for the average user, the Internet these days all too often resembles New York’s Times Square in the 1980s. It was exciting and vibrant, but you made sure to keep your head down, lest you be offered drugs, robbed, or harangued by the insane. Times Square has been cleaned up, but the Internet keeps getting worse, both at the user’s level, and — in the view of Clark and others — deep within its architecture.

By David Talbot

Full Story: http://www.technologyreview.com/InfoTech/wtr_16051,258,p1.html?trk=nl

***

The Internet Is Broken — Part 2

We can’t keep patching the Internet’s security holes. Now computer scientists are proposing an entirely new architecture.

By David Talbot

This article — the cover story in Technology Review’s December-January print issue — has been divided into three parts for presentation online. This is part 2; part 1 appeared on December 19 and part 3 will appear on December 21.

In part 1, TR Chief Correspondent David Talbot argued that the "Internet has no inherent security architecture — nothing to stop viruses or spam or anything else. Protections like firewalls and antispam software are add-ons, security patches in a digital arms race." Jonathan Zittrain, cofounder of the Berkman Center for Internet and Society at Harvard Law School, told Talbot that the Internet functions as well as it does only because of "the forbearance of the virus authors themselves." Here’s more about why — and how we might start to fix the problem.

Patchwork Problem

The Internet’s original protocols, forged in the late 1960s, were designed to do one thing very well: facilitate communication between a few hundred academic and government users. The protocols efficiently break digital data into simple units called packets and send the packets to their destinations through a series of network routers. Both the routers and the PCs, also called nodes, have unique digital addresses known as Internet Protocol (IP) addresses. That’s basically it. The system assumed that all users on the network could be trusted and that the computers linked by the Internet were mostly fixed objects.
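The core idea described above can be sketched in a few lines of Python. This is an illustrative toy, not real IP: the addresses, payload size, and dictionary "header" are all invented for the example. It shows only the two essentials the paragraph names: data is broken into packets stamped with source and destination addresses, and the receiving node reassembles them.

```python
# Toy sketch of packet switching: split data into small packets, each
# carrying addressing metadata, then reassemble at the destination.
# PAYLOAD_SIZE and the addresses are illustrative; real IP packets
# carry far larger payloads and binary headers.

PAYLOAD_SIZE = 8  # bytes of data per packet

def packetize(data: bytes, src: str, dst: str):
    """Break data into packets; each is independently routable."""
    packets = []
    for seq, offset in enumerate(range(0, len(data), PAYLOAD_SIZE)):
        packets.append({
            "src": src,                                  # sender's address
            "dst": dst,                                  # destination address
            "seq": seq,                                  # order for reassembly
            "payload": data[offset:offset + PAYLOAD_SIZE],
        })
    return packets

def reassemble(packets):
    """Destination node sorts packets by sequence number and rejoins them."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    return b"".join(p["payload"] for p in ordered)

message = b"facilitate communication"
pkts = packetize(message, src="18.0.0.1", dst="128.30.2.5")
assert reassemble(pkts) == message  # round-trips even if packets arrive out of order
```

Note what the sketch omits, because the original protocols did too: there is no authentication of the `src` field and no check on the payload's contents. Any node can claim any address, which is exactly the trust assumption the article identifies as the root of the problem.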

Full Story: http://www.technologyreview.com/InfoTech/wtr_16055,258,p1.html?trk=nl

***

The Internet Is Broken — Part 3

Researchers are working to make the Internet smarter — but that could make it even slower, warn experts like Google’s Vinton Cerf.

By David Talbot

This article — the cover story in Technology Review’s December-January print issue — was divided into three parts for presentation online. This is part 3; part 1 appeared on December 19 and part 2 on December 20.

In part 1, we argued (with the help of one of the Internet’s "elder statesmen," MIT’s David D. Clark) that the Internet has become a vast patchwork of firewalls, antispam programs, and software add-ons, with no overall security plan. Part 2 dealt with how we might design a far-reaching new Web architecture, with, for instance, software that detects and reports emerging problems and authenticates users. In this third part, we examine differing views on how to deal with weaknesses in the Internet, ranging from an effort at the National Science Foundation to launch a $300 million research program on future Internet architectures to concerns that "smarter" networks will be more complicated and therefore error-prone.

The Devil We Know

It’s worth remembering that despite all of its flaws, all of its architectural kluginess and insecurity and the costs associated with patching it, the Internet still gets the job done. Any effort to implement a better version faces enormous practical problems: all Internet service providers would have to agree to change all their routers and software, and someone would have to foot the bill, which would likely come to many billions of dollars. But NSF isn’t proposing to abandon the old network or to forcibly impose something new on the world. Rather, it essentially wants to build a better mousetrap, show that it’s better, and allow a changeover to take place in response to user demand.

Full Story: http://www.technologyreview.com/InfoTech/wtr_16056,258,p1.html?trk=nl

