
In this tech-saturated world, few things are more annoying than car navigation systems that yell at you for making a wrong turn.

“Re-CALC-ulating,” the system says in that condescending robot voice, as if it is offended by having to rethink the route.

“Turn left at … [sigh] … recalculating …”

Such interactions lead people to think GPS devices are nagging them, said Mark Gretton, chief technology officer of TomTom, a GPS maker.

“The main interaction you have with the device is a series of commands, so that starts the tone of the relationship right from the start,” he said. “It’s ‘Do this, do that, turn right.’ ” And it doesn’t help if the computer sounds snippy, he said.

Despite advances in “text to speech” technology, current computer voices can still be socially tone-deaf. Car systems are bossy. E-readers can read aloud to us, but they don’t know what they’re reading, so Shakespeare can sound like a monotone recitation of a spreadsheet.

None of them can get intonations, pauses or emotional context quite right.
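For readers curious what developers can actually adjust today, here is a minimal sketch using the open-source pyttsx3 library (my choice of example, not a tool named in the article). Speaking rate, volume and the installed voice can be tuned, but notice that nothing in the interface expresses intonation or emotional context, which is exactly the gap the article describes.

```python
# Illustrative sketch only: pyttsx3 drives the speech engines already installed
# on the machine (SAPI5 on Windows, NSSpeechSynthesizer on macOS, eSpeak on Linux).
import pyttsx3

engine = pyttsx3.init()

# The knobs a developer actually has: rate (words per minute) and volume (0.0-1.0).
engine.setProperty('rate', 150)
engine.setProperty('volume', 0.9)

# Pick one of the voices installed on the system, if any are available.
voices = engine.getProperty('voices')
if voices:
    engine.setProperty('voice', voices[0].id)

# The engine will pronounce this flatly; there is no parameter for "sounding patient".
engine.say("Recalculating. Turn left at the next intersection.")
engine.runAndWait()
```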

Farhad Manjoo, a tech columnist at Slate, compared the Amazon Kindle’s reading voice, for example, to “Gilbert Gottfried laid up with a tuberculin cough” and “a dyslexic robot who spent his formative years in Eastern Europe.”

So what gives? With more than a decade of voice research under our belts, why can’t computers speak our language — or at least sound a bit more human?

Well, they’re trying, tech researchers say, but these machines face a striking number of technological hurdles in their efforts to sound un-robotic.


Via: CNN.com