But that may not be the case, according to a new Massachusetts Institute of Technology study.
In March, the FCC said the following:
The actual download speed experienced on broadband connections in American households is approximately 40-50% of the advertised 'up to' speed to which they subscribe.
But MIT researchers found that most of the methods for measuring Internet data rates underestimate the speed of the "access network," the part of the network that Internet service providers (ISPs) control.
According to the study, a single broadband-speed figure isn't sufficient to gauge the quality of the nation's digital infrastructure: measured speed depends as much on a user's computer and the location of the servers being accessed as it does on the ISP.
Researchers Steve Bauer, economist William Lehr and David Clark -- who from 1981 to 1989 was the Internet's chief protocol architect -- analyzed six different ways (.pdf) to measure the speed of Internet connections, from free applications to commercial software.
The researchers found that speeds were underestimated in every case, for reasons including:
- Disregard for "tiers of service." Low-tier customers with relatively fast connections were mistaken for high-tier customers with relatively slow ones.
- Inaccurately low measurements, thanks to an idiosyncrasy of the Transmission Control Protocol (TCP), which governs how computers exchange data. Some computers' default settings impose an unnecessarily low cap on the amount of data in flight, throttling measured speeds.
- Unexpected redirects to distant servers. Bauer observed his Cambridge-based computer skip an overburdened New York server for one in Amsterdam, with a corresponding drop in data rates.
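The last two effects share a mechanism worth spelling out: a single TCP connection can carry at most one window of data per round trip, so a small default window or a distant (high-latency) server each caps the measurable speed, regardless of how fast the ISP's link actually is. The sketch below illustrates this textbook bound; the function name, window size, and latency figures are illustrative assumptions, not numbers from the study.

```python
def max_tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on single-connection TCP throughput, in megabits per second.

    A TCP sender can have at most one receive window of unacknowledged
    data outstanding, so throughput <= window / round-trip time.
    """
    rtt_s = rtt_ms / 1000.0
    return (window_bytes * 8) / rtt_s / 1e6


# A 64 KB window (a common older default) to a nearby server, 20 ms RTT:
near = max_tcp_throughput_mbps(64 * 1024, 20)    # ~26 Mbps ceiling
# The same window to a transatlantic server, 100 ms RTT:
far = max_tcp_throughput_mbps(64 * 1024, 100)    # ~5 Mbps ceiling
print(round(near, 1), round(far, 1))
```

On these assumed numbers, the same computer on the same broadband line would "test" five times slower against the distant server, which is consistent with the Amsterdam redirect the researchers observed.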
The researchers conclude:
Because of the strategic potential for such data to influence behavior (e.g., induce consumers to switch to another broadband service provider or policymakers to target remedial actions), there is a concern that the data be (a) accurate and (b) immune from manipulation. For example, if two ISPs are compared, it is important that the measurements actually focus on the services offered by the two ISPs rather than other exogenous factors that impact performance (e.g., potential differences in the user behavior of their customer base which may be reflected in systematic differences in the applications used, destination servers, or configuration of user-premise equipment). Furthermore, any comparisons need to be contextually relevant. For example, suppose daily rankings show two ISPs flip-flopping, then while it might look as if one ISP was providing better service on any particular day, there would not be any evidentiary evidence for identifying a better ISP over time. The appropriate presentation and interpretation of data will depend, in part, on what the data actually shows (not just statistical means, but also variances and higher statistical moments).
The bottom line? A single "speed" figure isn't a fair way to assess the performance of a network, and certainly not a nation's Internet infrastructure.
The study was funded in part by several major telecommunications companies.
This post was originally published on Smartplanet.com