A Spanish team says it’s developed a system that allows a machine to recognize a person’s emotional state and take it into account during conversation.
The aim is to cut down on the amount of irritation and frustration that automated systems can generate.
“Thanks to this new development, the machine will be able to determine how the user feels (emotions) and how s/he intends to continue the dialogue (intentions),” explains one of its creators, David Griol, of the Universidad Carlos III de Madrid.
The team focused on negative emotions – specifically anger, boredom and doubt – all of which are pretty common when attempting to deal with a computerized voice system.
To detect them, the system gathers information about the user’s tone of voice, speed of speech, the duration of pauses, the energy of the voice signal and so on. All in all, it measures about sixty different acoustic parameters.
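The announcement doesn’t list the team’s exact sixty parameters, but a minimal sketch of the kind of features described – pitch as a proxy for tone of voice, signal energy, pause durations and a speed-of-speech proxy – might look like this in Python using the librosa audio library (the feature set and thresholds below are illustrative assumptions, not the team’s):

```python
import numpy as np
import librosa

def acoustic_features(path: str) -> dict:
    """Extract a handful of illustrative prosodic features from a speech file."""
    y, sr = librosa.load(path, sr=None)

    # Fundamental frequency as a rough proxy for tone of voice.
    f0 = librosa.yin(y, fmin=65.0, fmax=400.0, sr=sr)
    f0 = f0[np.isfinite(f0)]

    # Frame-wise RMS: the energy of the voice signal.
    rms = librosa.feature.rms(y=y)[0]

    # Non-silent intervals; the gaps between them are the pauses.
    voiced = librosa.effects.split(y, top_db=30)
    pauses = [(nxt[0] - prev[1]) / sr for prev, nxt in zip(voiced[:-1], voiced[1:])]

    # Crude speed-of-speech proxy: fraction of the clip spent speaking.
    speaking_ratio = sum(end - start for start, end in voiced) / len(y)

    return {
        "pitch_mean_hz": float(np.mean(f0)) if f0.size else 0.0,
        "pitch_std_hz": float(np.std(f0)) if f0.size else 0.0,
        "energy_mean": float(np.mean(rms)),
        "pause_mean_s": float(np.mean(pauses)) if pauses else 0.0,
        "speaking_ratio": float(speaking_ratio),
    }
```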
The system also tracks how the dialogue itself develops, which its creators say gives further clues to the user’s emotional state.
For example, if the system repeatedly fails to recognize what a person’s saying, there’s a fair chance that they’ll get a bit annoyed.
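A cue like that is easy to track. The sketch below is a hypothetical illustration of the idea, not the team’s method: it simply counts consecutive recognition failures and flags likely annoyance past a threshold (the threshold is an assumption):

```python
class DialogueMonitor:
    """Tracks a simple dialogue-level frustration cue: repeated ASR failures."""

    def __init__(self, max_failures: int = 2):
        self.consecutive_failures = 0
        self.max_failures = max_failures  # assumed threshold, not from the paper

    def record_turn(self, recognized: bool) -> None:
        # A successful recognition resets the streak; a failure extends it.
        self.consecutive_failures = 0 if recognized else self.consecutive_failures + 1

    def likely_annoyed(self) -> bool:
        # Two misrecognitions in a row is a fair hint the user is getting annoyed.
        return self.consecutive_failures >= self.max_failures
```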
“To that end, we have developed a statistical method that uses earlier dialogues to learn what actions the user is most likely to take at any given moment,” the researchers say.
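The announcement doesn’t spell out the statistical method, but one simple way to learn “what actions the user is most likely to take” from earlier dialogues is a bigram (first-order Markov) model over user dialogue acts. The act labels in this sketch are hypothetical:

```python
from collections import Counter, defaultdict
from typing import Optional

class NextActionModel:
    """Bigram model over user dialogue acts, trained on earlier dialogues."""

    def __init__(self) -> None:
        # Maps each act to a count of the acts that followed it.
        self.counts = defaultdict(Counter)

    def train(self, dialogues) -> None:
        # dialogues: sequences of user dialogue acts, e.g.
        # [["ask_balance", "confirm", "hang_up"], ...] (labels are hypothetical)
        for acts in dialogues:
            for prev, nxt in zip(acts, acts[1:]):
                self.counts[prev][nxt] += 1

    def most_likely_next(self, current_act: str) -> Optional[str]:
        # Return the act users most often performed after current_act.
        following = self.counts.get(current_act)
        return following.most_common(1)[0][0] if following else None
```

Trained on a corpus of past conversations, `most_likely_next("ask_balance")` would return whatever act users most often took after asking for their balance.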
Once the system’s got a handle on how the user is feeling, it aims to adapt its dialogue accordingly. For example, if the user has doubts, it could offer more detailed help; if they’re bored, though, the same extra detail would only make things worse.
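As an illustration of that kind of adaptation (the emotion labels and strategies here are assumptions, not the team’s published policy), a simple rule-based sketch might be:

```python
def adapt_prompt(emotion: str, base_prompt: str) -> str:
    """Adjust the system's next prompt to the detected emotional state."""
    if emotion == "doubt":
        # A doubtful user gets an offer of more detailed help.
        return base_prompt + " Would you like a more detailed explanation?"
    if emotion == "boredom":
        # Extra detail would make things worse here, so keep only the first sentence.
        return base_prompt.split(".")[0] + "."
    if emotion == "anger":
        # De-escalate rather than push on with the automated dialogue.
        return "Sorry about the trouble. Let me connect you to an agent."
    return base_prompt
```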
The team says it has tried the system out on real users, and found not only that the dialogues were shorter and reached the desired result more quickly, but that users preferred it too.