I read in Steven Levy's In the Plex over the weekend that Google used its free 1-800-GOOG-411 service as a way to build the massive voice recognition database that would eventually become Voice Search, Voice Input and Voice Actions -- you have to admit, that's pretty smart.
Android dominates iOS (and the iPhone) in speech recognition, and it has from the very beginning. Simply put, Android has it and iOS doesn't.
One of Android's key competitive advantages over iOS (and the iPhone by extension) is its excellent speech recognition software -- and how deeply it's integrated into the Android OS. Every Android smartphone that I use (currently an HTC Thunderbolt on Verizon) reminds me of just how important speech recognition is in a smartphone.
On Android, voice search is pervasive -- it's available anywhere there's a text-entry field, via a convenient microphone icon. On iOS it's completely absent.
Sure, the App Store has a stripped-down Dictation app, but it doesn't pass your text to Google -- you'll need to download a second (or third) app for that. Want to search in a map by voice? You can't with the bundled iOS Maps app, nor with the Google Search app; you have to install the Bing iOS app to get voice search in maps. In short, you need four different third-party iOS apps to do about 10 percent of the speech recognition that's possible today in stock Android.
Speech recognition is important on a smartphone. In fact, a very good case can be made for it being a safety feature. The meteoric rise of the smartphone has turned us into a society of people constantly looking at and poking at our phones. Being able to touch a button and dictate a command is eminently safer than staring at tiny text and buttons on a touchscreen.
But, there's hope for Apple yet. TechCrunch reports that Apple has been negotiating a deal with Nuance in recent months -- but the type of deal remains unclear. At a minimum Apple could include the Nuance speech engine -- widely regarded as the best -- in iOS 5, but Apple should simply acquire the company, bring the talent and IP in-house and embed speech recognition at every level of the OS. Then iOS would at least be on par with Android in speech.
To surpass Android in speech, Apple needs to do something special. I'm not sure what that is, but Apple doesn't usually implement a feature halfway (witness copy and paste). My vote is for voice synthesis that's just as good as the voice recognition. I can see it now: "iPhone: read me all the emails from John this morning."
It'll be interesting to see if there's any news on the Apple/Nuance front before WWDC '11 -- which is now less than one month away.