
Part 2--What The Keynote Systems VoIP Test Measured

Written by Russell Shaw, Contributor

In my previous post, I spelled out which VoIP service providers the Keynote Systems VoIP quality survey measured, and which carriers (Call Agents) were used to transmit the VoIP calls.

Now, we'll look at the metrics that were used to compile the rankings.

Initially, a call test audio file was sent. This was a 15-second test signal with some interesting characteristics: male and female voices; a low noise floor; correct speech audio levels (as opposed to artificially inflated volume or shouting); utterances of 1 to 3 seconds in duration (perfect for a "yes, I understand" utterance); intervals of silence (sounds about right); and consistent volume.
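To make that concrete, here is a rough sketch, in Python, of how you might inspect a candidate test clip for those properties. This is not Keynote's actual tooling; the file name, the assumption of a 16-bit mono WAV, and the silence threshold are all placeholders I've picked for illustration.

```python
# Illustrative sketch only: check a test clip for the properties described
# above -- roughly 15 seconds long, a quiet noise floor, sane speech levels,
# and a mix of speech and silence. Assumes a 16-bit mono WAV file.
import wave
import numpy as np

def inspect_test_clip(path="voip_test_clip.wav", silence_db=-45.0):
    with wave.open(path, "rb") as wf:
        rate = wf.getframerate()
        samples = np.frombuffer(wf.readframes(wf.getnframes()), dtype=np.int16)
    audio = samples.astype(np.float64) / 32768.0
    duration = len(audio) / rate

    # Frame-by-frame RMS level in dBFS, using 50 ms analysis frames.
    frame = int(0.05 * rate)
    rms = np.array([np.sqrt(np.mean(audio[i:i + frame] ** 2)) + 1e-12
                    for i in range(0, len(audio) - frame, frame)])
    rms_db = 20 * np.log10(rms)
    speech = rms_db > silence_db  # frames louder than the silence threshold

    print(f"duration:         {duration:.1f} s (expect ~15 s)")
    print(f"peak level:       {20 * np.log10(np.max(np.abs(audio)) + 1e-12):.1f} dBFS")
    print(f"noise floor:      {np.percentile(rms_db, 5):.1f} dBFS (quietest frames)")
    print(f"frames w/ speech: {speech.mean():.0%} above {silence_db} dBFS")

if __name__ == "__main__":
    inspect_test_clip()
```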

The ranking methodology included two key classifications: Most Reliable and Best Audio Quality.

Most Reliable used a complicated numerical computation to measure relative Service Availability, Average Number of Dial Attempts, and Dropped Call performance factors. The scoring method was calibrated to yield a maximum score of 100 points.
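Keynote hasn't published the exact formula, but a weighted rollup of those three factors, normalized to a 100-point ceiling, might look something like the sketch below. The weights and penalty curves are assumptions of my own, chosen purely to show the shape of such a computation.

```python
# Illustrative sketch only: not Keynote's formula. Rolls availability,
# average dial attempts, and dropped-call rate into one 0-100 score
# using assumed weights and linear penalties.
def reliability_score(availability_pct, avg_dial_attempts, dropped_call_pct,
                      w_avail=0.5, w_dial=0.25, w_drop=0.25):
    # Service availability: 100% availability earns full marks for that factor.
    avail_component = availability_pct
    # Dial attempts: 1.0 (every call connects on the first try) earns full
    # marks, with a linear penalty for extra attempts, floored at 0.
    dial_component = max(0.0, 100.0 - (avg_dial_attempts - 1.0) * 100.0)
    # Dropped calls: 0% dropped earns full marks, penalized per percentage point.
    drop_component = max(0.0, 100.0 - dropped_call_pct * 10.0)

    score = (w_avail * avail_component +
             w_dial * dial_component +
             w_drop * drop_component)
    return round(min(score, 100.0), 1)

# Example: 99.5% availability, 1.05 dial attempts on average, 0.8% dropped calls
print(reliability_score(99.5, 1.05, 0.8))  # ~96.5 under these assumed weights
```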

Best Audio Quality assigned scores based on a mathematical formula involving a Mean Opinion Score (MOS), which rates the voice audio quality of a phone call on a scale of 1 to 5. This is done by means of an algorithm called Perceptual Evaluation of Speech Quality (PESQ), which compares a digital test audio file sent over a phone call with a reference copy of the same file. The goal is to see how much the audio degrades over the duration of the phone call.
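The real PESQ algorithm (standardized as ITU-T P.862) models human hearing and is considerably more involved, but the basic shape of the measurement, comparing the received audio against the pristine reference and mapping the degradation onto a 1-to-5 scale, can be sketched like this. The mapping below is a toy stand-in of my own, not PESQ itself.

```python
# Illustrative sketch only, not PESQ: align the reference clip with the
# received copy, measure how much of the signal was changed in transit,
# and map that onto a 1-5 MOS-like scale.
import numpy as np

def toy_mos(reference, degraded):
    n = min(len(reference), len(degraded))
    ref, deg = reference[:n], degraded[:n]
    noise = ref - deg  # everything the phone call changed
    sdr_db = 10 * np.log10((np.sum(ref ** 2) + 1e-12) /
                           (np.sum(noise ** 2) + 1e-12))
    # Crude assumed mapping: ~0 dB -> MOS 1, ~40 dB or better -> MOS 5.
    return float(np.clip(1.0 + sdr_db / 10.0, 1.0, 5.0))

# Example: a clean tone versus the same tone with a little added noise.
t = np.linspace(0, 1, 8000)
ref = np.sin(2 * np.pi * 440 * t)
deg = ref + np.random.normal(0, 0.02, ref.shape)
print(f"toy MOS estimate: {toy_mos(ref, deg):.2f}")
```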

The winners? Go to Part 3
