The MJ event and the Iranian and Chinese governments' chokeholds on their citizens' Internet connections demonstrate, in two different ways, how vulnerable network endpoint access really is for the world's Internet users. The death of a single individual managed to do, in some measure, worldwide what paranoid governments attempt within their own borders. To the user inside those borders, the effect of remote server congestion is the same as local government deep packet inspection.
No Internet server is directly connected to an end user; a great deal of intervening hardware and software impedes the flow of data. Users at the mercy of their government's control of the network will experience varying degrees of responsiveness from different remote servers. Between governments and commercial vendors controlling or regulating Internet access, and unusual events bringing large sections of the network to a crawl or a complete halt, no Internet user has a truly open, full-speed connection to every server.
Compounding the troubles on the network itself, zombies in parasitic botnets siphon away computing cycles on infected systems and contribute to the throttling of data flow. Even non-infected systems are affected, both by the need to run malware protection software and by the effects of DDoS attacks on popular network servers. Anti-malware running on any of the server computers or routers along the data paths that make up the end user's connection likewise impedes the data flow.
For typical Internet users, the result is varying speeds of access to servers from different sources. For any given connection speed to an ISP, the data flow will be constricted to a degree that varies with:

- where the data is sourced;
- the connection-management policy on the server (does it impose throttling, limit the number of connections from a single IP, and so on?);
- the content of the data (under deep packet inspection);
- the paranoia of the local government;
- the commercial interests of the ISP itself (for instance, does the data displayed come with ISP-generated advertising?);
- whether routers are dynamically shuffling the connection through different paths (a tenuous route, or the use of something like TOR, adds traffic delay);
- the anti-malware software hosted at the ISP, necessary to keep the user from complaining;
- and, finally, the anti-malware software on the user's own computer.
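As a rough model, these factors act like stages in a serial pipeline: the end-to-end rate is capped by the slowest stage. A minimal sketch of that idea (the stage names and numbers below are illustrative, not measured):

```python
def effective_throughput(stage_mbps):
    """End-to-end throughput of a chain of links and filters, modeled
    as a serial pipeline: the slowest stage caps the whole path.
    (Real paths also add latency and jitter, which this ignores.)"""
    return min(stage_mbps)

# Hypothetical path: ISP link, DPI box, congested backbone, throttled server.
path = [20.0, 15.0, 4.0, 8.0]
print(effective_throughput(path))  # prints 4.0
```

However fast the ISP link, the user in this example never sees more than the 4 Mbps the congested backbone allows.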
From an engineering perspective this suggests the twin concepts of “network access impedance” and “data device impedance”. Network access impedance would be low when the data servers are providing data at the highest speed possible, up to the limit of the user's connection speed. Whenever the achieved rate falls short of the connection speed, the shortfall is the effect of the “network access impedance”. Most of the causes of network access impedance are beyond the User's control.
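That shortfall can be given a number. One possible definition, purely as a sketch of the concept (this framing is an assumption, not an established metric): express network access impedance as the fraction of the nominal connection speed lost before the data reaches the user.

```python
def network_access_impedance(achieved_mbps, nominal_mbps):
    """Fraction of the nominal connection speed lost in transit:
    0.0 means the link runs at full speed; values approaching 1.0
    mean the path is almost completely choked."""
    if nominal_mbps <= 0:
        raise ValueError("nominal speed must be positive")
    achieved = min(achieved_mbps, nominal_mbps)  # clamp measurement noise
    return 1.0 - achieved / nominal_mbps

# A 10 Mbps subscriber measuring only 2 Mbps from a congested server:
print(network_access_impedance(2.0, 10.0))  # prints 0.8
```

Under this definition the same user would report a different impedance for every server visited, which matches the everyday experience of some sites crawling while others fly.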
Within the User's computer, the corresponding measure would be how fast downloaded data can be put to use: displayed on screen, played through the speakers, or saved to disk. The concept applies to broadband and to other wired and wireless network connections. This impedance could differ for every connection or connection type the user establishes, but it is likely to be similar for most connections.
Some would argue that “network impedance” is simply a measure of system bandwidth. The problem with labeling it “bandwidth” is that bandwidth does not take into account that the User might not be paying for a fully unlimited-speed connection, or that connected servers might be deliberately throttling speed. Most ISPs offer slower connections at discounts, and users on a budget will buy the slower service. In addition, the data sources, or the equipment between the User and the server, might be deliberately delaying the data stream. Presumably, two systems with the same data device impedance would see similar network service quality when accessing the Internet through the exact same ISP connection.
A measurement of a specific system's impedance, taken with a known set of applications and anti-malware measures running, might give the potential Customer/User a more useful figure of merit: the “data device impedance”. The device drivers, the design of the radio components in the wireless Ethernet adapter, and the operating system's ability to deliver the desired data product to the User will probably be the major components of this impedance. A good chipset with efficient drivers will have a low impedance.
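One way such a benchmark could work, sketched under stated assumptions (the function names, the workload, and the ratio-based definition are all hypothetical, not an established measure): time how long a known workload takes to consume a buffer, and compare the device's consumption rate with the link rate.

```python
import time

def device_consumption_mbps(consume, payload):
    """Rate (Mbps) at which this device can process 'payload' using the
    callable 'consume' (e.g. decode, checksum, or write to disk)."""
    start = time.perf_counter()
    consume(payload)
    elapsed = time.perf_counter() - start
    return (len(payload) * 8 / 1e6) / elapsed

def data_device_impedance(consume, payload, link_mbps):
    """Link rate divided by the device's consumption rate: values well
    below 1.0 mean the network, not the device, is the bottleneck."""
    return link_mbps / device_consumption_mbps(consume, payload)
```

On a healthy system, a trivial workload such as summing the bytes of a one-megabyte buffer should be consumed far faster than a consumer broadband link can deliver it, so the impedance comes out well under 1.0; a heavyweight workload on a slow chipset pushes it toward or past 1.0, the point where the device itself becomes the constriction.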
The release of the Intel Atom provoked this line of reasoning, because it is designed specifically as a single-threaded processor. The Atom question is: how does the processor fare against multi-threaded processors when running Internet-related applications? A number of reviewers compared the Atom to low-power Celerons, VIA CPUs, and the like (in one case, even a quad-core CPU!). Even where the CPU speeds were the same or nearly so, the tests never really exercised the operating mode of the Atom's intended device: a PDA or netbook.
In electrical engineering, the most efficient transfer of power occurs when the impedance of the source (here, the Internet connection) matches the impedance of the power sink (the netbook or desktop computer). This may be stretching the analogy too far. Still, it stands to reason that a User shopping for a netbook could save quite a bit of cash by buying just enough CPU and OS capacity to match the networking environment he is most likely to encounter, and no more. That is especially true if the networking is less than optimum. Alongside CPU speed, a “data device impedance” might be a useful characteristic to publish.
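The electrical claim behind the analogy is easy to verify numerically. For a source with internal resistance R_s driving a load R_L, the power delivered to the load is P = V² · R_L / (R_s + R_L)², which peaks exactly when R_L = R_s (the maximum power transfer theorem). A quick sweep, with an arbitrary 10 V, 50-ohm source chosen only for illustration:

```python
def delivered_power(v_source, r_source, r_load):
    """Power dissipated in the load of a simple source/load divider:
    P = V^2 * R_L / (R_s + R_L)^2."""
    return v_source ** 2 * r_load / (r_source + r_load) ** 2

# Sweep load resistances around the 50-ohm source: power peaks at the match.
loads = [10.0, 25.0, 50.0, 100.0, 200.0]
powers = {r: delivered_power(10.0, 50.0, r) for r in loads}
print(max(powers, key=powers.get))  # prints 50.0
```

Both undersized and oversized loads waste the source's capacity, which is exactly the buying advice above: capacity well beyond the network's delivery rate is money spent on an unusable margin.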